A New Challenge to Einstein?

By Sean Carroll | October 12, 2009 7:58 am

General relativity, Einstein’s theory of gravity and spacetime, has been pretty successful over the years. It’s passed numerous tests in the Solar System, scored a Nobel-worthy victory with the binary pulsar, and gets the right answer even when extrapolated back to the first one second after the Big Bang. But no scientific theory is sacred. Even though GR is both aesthetically compelling and an unquestioned empirical success, it’s our job as scientists to keep probing it in different ways. Especially when it comes to astrophysics, where we need dark matter and dark energy to explain what we see, it makes sense to put Einstein to the most stringent tests we can devise.

So here is a new such test, courtesy of Rachel Bean of Cornell. She combines a suite of cosmological data, especially measurements of weak gravitational lensing from the Hubble Space Telescope, to see whether GR correctly describes the behavior of large-scale structure in the universe. And the surprising thing is — it doesn’t. At the 98% confidence level, Rachel finds that general relativity is inconsistent with the data. I’m not sure why we haven’t been reading about this in the science media or even on other blogs — it’s certainly a newsworthy result. Admittedly, the smart money is still that there is some tricky thing that hasn’t yet been noticed and Einstein will eventually come through the victor, but this is serious work by a respected cosmologist. Either the result is wrong, and we should be working hard to find out why, or it’s right, and we’re on the cusp of a revolution.

Here is the abstract:

A weak lensing detection of a deviation from General Relativity on cosmic scales
Authors: Rachel Bean

Abstract: We consider evidence for deviations from General Relativity (GR) in the growth of large scale structure, using two parameters, γ and η, to quantify the modification. We consider the Integrated Sachs-Wolfe effect (ISW) in the WMAP Cosmic Microwave Background data, the cross-correlation between the ISW and galaxy distributions from 2MASS and SDSS surveys, and the weak lensing shear field from the Hubble Space Telescope’s COSMOS survey along with measurements of the cosmic expansion history. We find current data, driven by the COSMOS weak lensing measurements, disfavors GR on cosmic scales, preferring η < 1 at 1 < z < 2 at the 98% significance level.

Let’s see if we can’t unpack the basic idea. The real problem in testing GR in cosmology is that any particular kind of spacetime curvature can be a solution to Einstein’s theory — all you need are the right sources of matter and energy. So in order to do a real test, you need to have some confidence that you understand what is creating the gravitational field — in the Solar System it’s the Sun and planets, in the binary pulsar it’s two neutron stars, and in the early universe it’s radiation. For large-scale structure things are a bit less clear — there’s ordinary matter, and dark matter, and of course dark energy.

Nevertheless, even though there are some things we don’t know about dark matter and dark energy, there are some things we think we do know. One of those things is that they don’t create any “anisotropic stress” — basically, a force that pulls different sides of things in different directions. Given that extremely reasonable assumption, GR makes a powerful prediction: there is a certain amount of curvature associated with space, and a certain amount of curvature associated with time, and those two things should be equal. (The space-space and time-time potentials φ and ψ of Newtonian gauge, for you experts.) The curvature of space tells you how meter sticks are distorted relative to each other as they move from place to place, while the curvature of time tells you how clocks at different locations seem to run at different rates. The prediction that they are equal is testable: you can try to measure both forms of curvature and divide one by the other. The parameter η in the abstract is the ratio of the space curvature to the time curvature; if GR is right, the answer should be one.
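
In equations (using one common convention; signs and factors of 2 differ from paper to paper), the perturbed metric and the parameter being tested are

    ds^2 = a^2(τ) [ −(1 + 2ψ) dτ^2 + (1 − 2φ) dx^2 ],    η ≡ φ/ψ ,

so GR with no anisotropic stress predicts φ = ψ, i.e. η = 1.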

There is a straightforward way, in principle, to measure these two types of curvature. A slowly-moving object (like a planet moving around the Sun) is influenced by the curvature of time, but not by the curvature of space. (That sounds backwards, but keep in mind that “slowly-moving” is equivalent to “moves more through time than through space,” so the curvature of time is more important.) But light, which moves as fast as you can, is pushed around equally by the two types of curvature. So all you have to do is, for example, compare the gravitational field felt by slowly-moving objects to that felt by a passing light ray. GR predicts that they should, in a well-defined sense, be the same.
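
Schematically, in the same notation:

    slowly-moving bodies:  acceleration ∝ ∇ψ   (time curvature only)
    light rays:            deflection ∝ ∇(φ + ψ)   (both, equally)

so comparing a dynamical measurement with a lensing measurement of the same mass pins down η = φ/ψ.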

We’ve done this in the Solar System, of course, and everything is fine. But it’s always possible that some deviation from Einstein shows up at much larger distance and weaker gravitational fields than we have access to in our local neighborhood. That’s basically what Rachel’s paper does, considering different measures of the statistical properties of large-scale structure and comparing them to the predictions of a phenomenological model of the gravitational field. A crucial role is played by gravitational lensing, since that’s where the deflection of light comes in.

And here is the answer: the likelihood, given the data, for different values of 1/η, the ratio of the time curvature to the space curvature. The GR prediction is at 1, but the data show a pronounced peak between 3 and 4, and strongly disfavor the GR prediction. If GR is correct, and both the data and the analysis are okay, there would have been less than a 2% chance of obtaining a result this discrepant. Not as impressive as a 0.01% chance would be, but still pretty good.

[Figure: likelihood for 1/η from Bean (2009), peaked between 3 and 4, with the GR prediction at 1.]
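
For readers who like to translate confidence levels into Gaussian-equivalent standard deviations, here is a two-line sketch (it merely restates the numbers quoted above, assuming scipy is available):

    from scipy.stats import norm

    for cl in (0.98, 0.9999):                # this result vs. a "0.01% chance" result
        sigma = norm.isf((1 - cl) / 2)       # two-sided tail -> standard deviations
        print(f"{cl:.2%} confidence ~ {sigma:.1f} sigma")   # ~2.3 and ~3.9 sigma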

So what are we supposed to make of this? Don’t get me wrong: I’m not ready to bet against Einstein, at least not yet. Mostly my pro-Einstein prejudice comes from long experience trying to come up with alternative theories of gravity that are simultaneously logically sensible and observationally consistent; it’s just very hard to do. But more generally, good scientists naturally have a strong suspicion of any claimed observational result that purports to overthrow an extremely well-established theory. That’s just common sense, not hidebound establishmentarianism; most such anomalies eventually go away.

But that doesn’t mean that you ignore anomalies; you just treat them with caution. In this case, there could be an unrecognized systematic error in the data set, or a subtle error in the analysis. Given 1:1 odds, that’s certainly where the smart money would bet right now. It’s also possible that the fault lies with dark matter or dark energy, not with gravity — but it’s hard to see how that could work, to be honest. Happily, it’s an empirical question — more data and more analysis will either reinforce the result, or make it go away. After all, some anomalies turn out to be frighteningly real. This one is worth taking seriously, to say the least.

CATEGORIZED UNDER: arxiv, Science
  • http://backreaction.blogspot.com/ Bee

    Interesting. Thanks for pointing out!

  • True_Q

    Fascinating!

  • NewEnglandBob

    Too bad the peak isn’t a little to the left. It would have been cool if it had been a factor of Pi.

  • Pingback: IanHuston.net — Latest From FriendFeed this week

  • Malo Juevo

    I’m afraid that all appearances suggest that this result is wrong in at least one important respect. The paper has a number of obvious problems in its use of statistics. The most salient (in terms of whether the paper’s claims are valid) concerns the chi squared values. The chi squared for pure GR is greater than 3000. The number of degrees of freedom is not given, but either this fit is extremely bad, or it has so many free parameters that one cannot learn anything useful from varying only one parameter at a time. If the fit is really bad, that’s quite interesting, and it does pose a problem for GR. Adding possible variation of eta changes the goodness of fit by an amount that is tiny in comparison, but when the fit is already this bad, it doesn’t tell you anything useful. You can often improve the chi squared per degree of freedom by adding a new fit parameter, but that’s not meaningful if neither fit is anywhere close to the data.
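
    To make the two separate questions concrete, here is a toy calculation with invented numbers (the paper does not quote the degrees of freedom, so the dof below is purely illustrative):

        from scipy.stats import chi2

        chisq_gr, dof = 3000.0, 2800        # dof invented for illustration
        print("absolute fit p:", chi2.sf(chisq_gr, dof))   # is GR acceptable at all?
        print("improvement p:", chi2.sf(5.4, 1))           # delta chi^2 = 5.4, 1 parameter
        # A small "improvement" p-value means little if the absolute-fit
        # p-value already says neither model comes close to the data.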

  • http://krasner.lbl.gov/alexie/ Alexie Leauthaud

    Hi – I am working on the weak lensing data-set that drives these results (the COSMOS weak lensing data). I just wanted to mention that Rachel Bean’s results are based on a paper that we published in 2007 (Massey et al. 2007). Since 2007, there have been many changes in our data-set. Firstly, and perhaps most importantly, our source redshift distribution has changed with the addition of deep near-infrared and U-band data. The high redshift distribution has changed quite a bit and this will obviously impact our tomography results. Secondly, we have been working since 2007 to reduce systematic effects in the data (shear calibration, PSF correction, and we have a new and improved way of dealing with Charge Transfer Efficiency). Therefore, I would not be surprised if these results were to change with our new improved weak lensing data. We have not yet calculated whether the new data will go in the same direction as Rachel Bean’s result or not, but we are working towards publishing our new results as soon as possible – so keep an eye out on astro-ph for an update on this result!

  • http://www.stanford.edu/~dapple/ DougA

    To follow up on #6 from Alexie – It appears that most of the power in this result comes from the high redshift bin in COSMOS, which will be most susceptible to the systematic changes mentioned in that post.

    Also, there is another, older paper from Daniel et al, 2009 [http://adsabs.harvard.edu/abs/2009PhRvD..80b3532D] that uses the same datasets, except substituting CFHTLS for COSMOS, and apparently the same code (CosmoMC), but finds everything perfectly consistent with GR.

  • Rachel Mandelbaum

    An addendum to Alexie’s comment, for the non-lensers:
    The significance of a change in the redshift distribution at the high end is that the GR predictions for each redshift slice will increase if the new results suggest that the galaxies are actually at higher redshift than was originally assumed. So, if the change in the “high redshift distribution” that Alexie mentions is in the direction of putting the sources at higher redshift, then this change could reduce/eliminate the current tension between the data and GR.

    Another point, in case anyone wonders why the COSMOS team did not find this tension with GR:
    My take is that this relates to the inclusion of the other data in Rachel Bean’s analysis. In the COSMOS Massey et al. (2007) paper, the best-fit power spectrum amplitude sigma_8 was found to be fairly high. If the other data that Rachel Bean includes tend to pull sigma_8 to lower values, then the lensing signal in this highest redshift slice will appear to be too high, and modifications of the theory of gravity are one way to reconcile the inconsistency with theoretical predictions. This is just my take on it; perhaps someone who is more familiar with the data and/or analysis can comment.

  • http://usersguidetotheuniverse.com Dave Goldberg

    When I first saw Rachel’s paper on the arXiv, my initial reaction was to scan through it for any discussion of how she treats nonlinearities in the growth of structure. Unless I’m missing something, she only does one correction, mentioned briefly at the beginning of Sec. 2.

    For ISW measurements, she is probably safe doing so. Smith et al. estimated that for l < 100, nonlinearities should contribute less than a 10% effect. Likewise for cosmic shear, depending on the scale. For galaxy auto-correlation functions, I'm a bit more skeptical, especially since the power spectrum model is explicitly based on simulations. The beauty of her approach is that this is meant to be a clean measure of spatial versus temporal components of the metric. These terms can only be determined cleanly in the linear regime, and it's not obvious that this is completely applicable here.

    Still, a very interesting (and provocative) paper.

  • http://coraifeartaigh.wordpress.com Cormac O Raifeartaigh

    interesting post and superb discussion on all sides. if only the skeptics knew this is how science is really done…

  • Sili

    It’s always fascinating when people think of new ways of looking at the data. It sounds as though this method is likely to give an indication if there is something there.

    It’ll be interesting to see the analysis done on the new and improved datasets.

    As an ignorant layman I’m not happy about the apparent shoulder at 1. It may be weak, but it makes it look like there’re two signals in there.

  • http://www.astro.cornell.edu/~rbean Rachel Bean

    Thanks to Sean for posting about the paper. I thought I’d make some quick replies to these few posts. Rachel Mandelbaum is spot on about the tension between the weak lensing data and the other datasets having been seen before as a difference in the preferred values of sigma_8; the difference between the two potentials just allows that tension to relax and the lensing data to be well fit by the best-fit set of parameters that the other datasets prefer. DougA is also correct that Daniel et al looked at a similar effect; however, they modeled it as evolving as (1+z)^(-3), i.e. looking for an effect that became more important with decreasing redshift. My analysis finds no evidence for a deviation from GR at z<1, consistent with their results. Dave Goldberg mentioned the modeling of non-linearities, which can be a big source of systematic uncertainty on smaller scales. I used the Smith et al fit for the lensing data, but because the data here are on reasonably large scales the non-linear correction doesn’t affect the result. As Sean alludes to, I’ve taken the systematic errors in the datasets at face value; unmodeled systematics would, no doubt, have an impact on the result.

  • http://www.physics.ucsb.edu/~brewer/ Brendon Brewer

    That’s not a very statistically significant result. People use the words “98% confidence level” in order to sound authoritative, but the way they are calculated, they do NOT mean “98% posterior probability”. It looks like in this example, what they meant is “p-value of 0.02”, which is not very strong evidence at all (assuming p-values are the relevant statistic, which we all know they’re not, really). Testing GR is important, but I doubt very much that this is a detection of a GR violation.
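
    A standard toy calculation of this point, for the curious (a point null against one assumed broad alternative; all numbers are illustrative, not taken from the paper):

        from scipy.stats import norm

        z, tau = 2.33, 2.0     # a ~p=0.02 effect; tau is an assumed prior width
        bf01 = norm.pdf(z, 0, 1) / norm.pdf(z, 0, (1 + tau**2) ** 0.5)
        print("P(null | data):", bf01 / (1 + bf01))   # ~0.2 at equal prior odds, not 0.02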

  • Mike

    Interesting results!

    Nevertheless, could this deviation from GR on the largest scales (if it is real) be related to the findings of Kashlinsky et al. (2008), who found some evidence for large-scale bulk flows via the kinetic SZ effect?

  • http://www.astro.cornell.edu/~rbean Rachel Bean

    I should also reply to the chi^2 comment of Malo Juevo. The paper quotes -2ln(likelihood) = chi^2 + 2ln(det(Cov)), so it’s not the chi^2 per se (because it includes the normalization from the determinant of the covariance matrix). However, while you can’t use -2ln(likelihood) per degree of freedom as a measure of fit, the change in -2ln(likelihood) does give a measure of the improvement in fit.
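
    Schematically, assuming the covariance does not itself depend on the parameters being varied, the determinant term cancels in differences:

        -2 ln(likelihood) = chi^2 + 2 ln(det(Cov))
        Delta[-2 ln(likelihood)] = Delta[chi^2]

    which is why the change, but not the absolute value, behaves like a goodness-of-fit comparison.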

  • http://www.astro.yale.edu/people/adam-solomon Adam Solomon

    Perhaps I’m betraying my personal biases here, but if this result *is* real, do the theorists have an idea which (if any) of the “well-established” modifications to GR would agree with these data? Rachel’s paper cites a lot of the relevant literature but it doesn’t look like there’s a direct comparison to any specific theories.

    Mike, I’m no expert on this (my senior project is on the SZ effect though, so ask me in a few months ;) ) but I believe Kashlinsky’s paper claims to find evidence for a specific anisotropy, rather than some effect of gravity in general (if we believe the Kashlinsky result!).

    • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

      Adam, I don’t think there are any obvious candidates. As far as I can tell, this effect wasn’t specifically predicted by any of the models I’m familiar with. I’m sure we’ll hear otherwise if the effect persists.

  • PTM

    Interesting. If this effect is real, could it mean that dark energy is the temporal, and dark matter the spatial, part of a single field which fills the Universe?

    Dark energy and dark matter dominate on large scales; they constitute approximately 72% and 23% of all mass-energy in the Universe. So dark energy is 3.13 times more abundant than dark matter; if one were related to temporal curvature and the other to the spatial one, the result would match well with this distribution. Is such an idea reasonable, and if not, why?

    Though in that case shouldn’t the temporal curvature be exactly 3 times larger than the spatial one, due to spatial energy being distributed among three dimensions while the temporal has only one?

  • Pingback: Link roundup to watch out for (13th October, 2009) | Geek Feminism Blog

  • Tony Pan

    Exciting result! But as Alexie mentioned, there have been changes to the COSMOS data set since 2007, including better correction of systematics. It will definitely be interesting to see the same analysis with updated data.

  • Phil Warnell

    If these results were to hold up under the scrutiny of analysis and additional data, what effect, if any, would this have upon the quest for a quantum gravity theory? That is, would such a result be more indicative of there being more dimensions (degrees of freedom) than 3 + 1, or perhaps fewer; or of simply not being relevant at all? I ask as it seems to me that if nature is shown to have a temporal bias, then it should have some implication in this regard.

  • Pingback: Tales from the Tubes — 13/​10/​09 | Young Australian Skeptics

  • Pingback: Interesting Reading #345 – The Blogs at HowStuffWorks

  • http://tgd.wippiespace.com/public_html/index.html Matti Pitkänen

    My proposal is that eta=1/3 could be understood if the perturbations are not those of matter (visible or dark) but of dark energy density. This would imply that four-volume is conserved in the perturbation, implying eta=1/3 for scalar perturbations.

  • http://eternal-cartesian.blogspot.com/ Cartesian

    I think that some layers around planets could have an impact on the lensing effect. See:
    http://eternal-cartesian.blogspot.com/2009/10/article-8-first-part.html

  • Marton Trencseni

    The paper says that most of the power of the result comes from COSMOS 1 < z < 2 data, while the statistics come from CosmoMC runs which include the COSMOS data (through a library written by J. Lesgourgues).

    Given that, is the basic point of the paper that allowing eta to vary produces CosmoMC runs which fit the COSMOS data better than regular eta=1 runs?

  • http://backreaction.blogspot.com/ Bee

    Okay, so 98% isn’t great, but assuming this indication holds up in further analysis, it suggests to me there’s information in the lensing data that’s hard to come by with LambdaCDM, while the other data sets aren’t so sensitive to it.

    Incidentally, since there’s some lensing folks around, a question I think I asked at this blog previously: is it true you’re assuming rho is positive definite for the data analysis?

    I think people have pointed out that LambdaCDM doesn’t work very well with the large scale structure for quite a while, though less quantitatively; see e.g. http://arxiv.org/abs/0811.4684

    I’m not a cosmologist, so for me it’s hard to tell how seriously to take these “puzzles” (esp. the one with the voids). I’ve talked to several people in the field and they commonly think it’s a lack of understanding of astrophysical effects, or a numerical weakness, and that given more effort, the data would fit the model.

  • uncle sam

    Well, this sure is stunning if it pans out. No one here seems to have asked, what other tests would show such an asymmetry between space and time curvature? My impression is, this would show up in simple things like radar delay tests in the solar system. And, whither the equivalence principle? But AFAIK they all work OK.

  • blanton

    Bee -

    Skimming off the rails here, but definitely don’t take the “void” issue seriously.

    When we’ve tried to carefully measure the mass distribution of dwarf galaxies in empty regions using homogeneous surveys, we’ve found no discrepancy with CDM, certainly not a statistically significant one.

  • Aaron Sheldon

    I would like to see some more of the fitted parameters published. What are the joint confidence intervals of all six fitted parameters (note: not the marginal intervals)? A marginalized statistic at the 98% confidence level may actually be in a volume of the joint parameter space that does not have as much statistical significance.

    Also, did they fit the (six total) parameters of the three Gaussian nuisance variables, or did they marginalize over all possible values of the parameters, and in that case what was the prior?
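
    A toy version of the joint-versus-marginal point above, assuming an uncorrelated six-parameter Gaussian posterior (the real posterior is neither Gaussian nor uncorrelated, so this is only indicative):

        from scipy.stats import chi2, norm

        z = norm.isf(0.02 / 2)         # edge of a two-sided 98% marginal interval
        print(chi2.cdf(z**2, df=6))    # ~0.51: that point encloses only about half
                                       # of the joint posterior mass in 6 dimensions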

  • Aaron Sheldon

    Sorry one more point of note on the statistical techniques:

    A properly configured MCMC is driven by a Central Limit Theorem. So regardless of whether the hypothesized distributions contain the correct sampling distribution of the data, the MCMC process will converge in distribution to a single answer, as long as the first two moments of the physical data are finite. This single answer may very well be wrong if you have chosen the wrong family of hypothetical distributions.

  • Pingback: Black Belt Bayesian » A New Challenge to 98% Confidence

  • Pingback: A New Challenge to Einstein? | bootlegers101 Magazine

  • John R Ramsden

    @18 (PTM) That sounds a damned clever idea, and I bet it’s the explanation (assuming their calculations haven’t already taken these ratios into account).

  • Doug

    Bee – the assumption that there’s no such thing as negative mass (and thus negative rho) is fairly common, but usually doesn’t apply to weak lensing studies. This is because one measures shear (in the weak lensing limit at least), which is one set of second derivatives of the surface potential, and rho is related to the convergence, which is another set of second derivatives of the surface potential.

    So to get to rho, you effectively combine various derivatives of the shear to get derivatives of rho, then integrate, leaving an unknown constant of integration. This constant is basically the mean rho at the edge of your data field, so you’re only measuring mass fluctuations relative to some mean level.

    As weak lensing is also very noisy, you often see large negative signals in the density maps (much larger than any reasonable value for the mean rho at the edge of your data), which are normally interpreted as noise fluctuations caused by the intrinsic ellipticity of the background galaxies rather than an actual region of negative mass. Estimating the noise by various tricks (such as rotating all of your galaxy ellipticities by 45 degrees and redoing the measurement) usually agrees that the large negative regions are most likely noise.

    Cosmic shear is usually measured using 2 point correlation functions, which are combined to give various statistics. Most of the statistics wind up being effectively compensated filters, designed to give 0 signal in a region of flat density distribution regardless of what the actual value of the density is (again, in the weak limit).
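
    For the curious, a minimal flat-sky sketch of the shear-to-convergence inversion described above (the standard Kaiser-Squires relation on a periodic grid; real pipelines treat masks, noise weighting, and field edges far more carefully):

        import numpy as np

        def kaiser_squires(gamma1, gamma2):
            # Shear -> convergence via FFT on a regular, periodic grid.
            ny, nx = gamma1.shape
            k1 = np.fft.fftfreq(nx)[np.newaxis, :]
            k2 = np.fft.fftfreq(ny)[:, np.newaxis]
            ksq = k1**2 + k2**2
            ksq[0, 0] = 1.0                      # avoid 0/0 at k = 0
            g1, g2 = np.fft.fft2(gamma1), np.fft.fft2(gamma2)
            kappa = ((k1**2 - k2**2) * g1 + 2.0 * k1 * k2 * g2) / ksq
            kappa[0, 0] = 0.0                    # the unrecoverable constant of
            return np.real(np.fft.ifft2(kappa))  # integration: the mean density

    The 45-degree rotation test mentioned above is then one line: feeding in (gamma2, -gamma1) should return a map consistent with pure noise if the signal is real lensing.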

  • Aaron Sheldon

    Okay, I have to interrupt again to give a simple example of the dangerous waters of statistical significance, and to show that MCMC is not a silver bullet.

    Consider sampling data from a simple mono-exponential distribution, then fitting the data to a bi-exponential distribution using MCMC. This will yield a posterior on the three parameters of the bi-exponential distribution. But the posterior won’t be maximized around a point where the parameters of one of the exponentials are zero (as would be hoped for in the logic of the analysis of the cited paper). Rather, the mean and variance of the fitted bi-exponential distribution will match those of the mono-exponential distribution, and worse, you can drive the significance arbitrarily close to 1 by adding more sample data.

    This is a general property that can be proven in all cases where a sufficient statistic of the hypothesized distributions has a well defined Fisher Information Matrix in the sampling distribution. It is a significant danger when adding more parameters to a model in the hope of statistically testing their triviality.
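
    The scenario is straightforward to set up numerically. A minimal maximum-likelihood sketch for anyone who wants to test the claim (MLE rather than MCMC, which changes the machinery but not the model-nesting issue; note too that the usual chi-squared asymptotics break down here, because under the null the extra parameters sit at a boundary):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        x = rng.exponential(scale=1.0, size=2000)   # truth: one exponential

        def nll_bi(p):
            # negative log-likelihood of a two-exponential mixture
            w, a, b = p
            return -np.sum(np.log(w * a * np.exp(-a * x)
                                  + (1 - w) * b * np.exp(-b * x)))

        lam = 1.0 / x.mean()                        # mono-exponential MLE rate
        nll_mono = -np.sum(np.log(lam) - lam * x)
        fit = minimize(nll_bi, x0=[0.5, 0.5, 2.0],
                       bounds=[(1e-3, 1 - 1e-3), (1e-3, 50.0), (1e-3, 50.0)])
        print("likelihood-ratio statistic:", 2.0 * (nll_mono - fit.fun))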

  • Pingback: Occam’s Machete » Blog Archive » General Relativity is Wrong?

  • http://backreaction.blogspot.com/ Bee

    Blanton,

    Thanks! Is there a reference you could point me to?

    Doug,

    Yes, that’s exactly what I was referring to. So what happens to the “noise” for the further analysis of the data? Is it weeded out? “Estimating the noise by various tricks … usually agrees that the large negative regions are most likely noise.” sounds reasonable at first, but suspiciously like a case of confirmation bias at second.

    Best,

    B.

  • Michael Kingsford Gray

    How might a mere plebeian access the body of the paper, rather than just a header or précis?

  • Rien

    Click “PDF” under “Download” up in the right corner at http://arxiv.org/abs/0909.3853

  • http://hollylisle.com/ Holly Lisle

    I’m looking at this from a writer’s perspective. I know this is speculative, I get that data sets affect results, and I understand that GR weighs in as the theory with best odds of being right.

    However, if the curvature of space and time might be unequal, what effect would this variance have on the universe? What part of this effect might intersect with humans?

  • Pingback: Current status of the concordance model « Antimatter

  • Pingback: Susan Pinochet (sdp) 's status on Wednesday, 14-Oct-09 19:29:22 UTC - Identi.ca

  • chubaka

    Sean,

    In the framework of your effort to unpack the basic idea, is it possible to suggest that the result of Rachel Bean corresponds to something like Horava gravity, in the sense that time and space scale differently?

    Can we read off from her result the different dynamical exponents for time and space?

  • Pingback: 14 October 09 (pm) « blueollie

  • Pingback: Einstein Wrong? : Mormon Metaphysics

  • http://www.canonicalscience.org Juan R. González-Álvarez

    Another challenge to general relativity

    It is worth starting by remarking that the binary pulsar tests also challenge general relativity. First, the same tests are also passed by alternative theories such as nonlinear field theory and the recent relational theory (2008: Grav. and Cosm. 14(1), 41-52).

    Second, recent works (also presented at the PPC-08 conference) point out the possibility that the discrepancy of 0.85% between the general relativity prediction and observation can be explained by a nonlinear field theory, which predicts extra radiation of 0.735% thanks to novel radiation mechanisms not available in general relativity.

    I do not want to discuss whether GR is aesthetically compelling or not. Some people think GR is the most beautiful of the theories of physics. Others strongly disagree and consider the nongeometrical formulations more beautiful (particle physicists such as Feynman and Weinberg have stated their preference for a nongeometrical formulation of gravity). I agree that the nongeometrical formulations are more beautiful. But the important question is: are they more useful as well?

    This important question has been addressed in a report presented a few days ago, which rigorously analyzes the geometrical formulation of general relativity and compares it (as never before) with another five theories of gravity from mainstream journals. The results are somewhat surprising: (i) the geometrical formulation behaves more poorly than the nongeometrical ones (the myth of the equivalence of the two formulations is exposed), and (ii) the deficiencies of the geometrical formulation are the cause of some observational discrepancies (there is a section in the report specifically devoted to cosmological discrepancies).

    The implications for quantum gravity are deep. Beyond the profound cultural divide between the relativity and particle physics communities in dealing with spacetime, this report shows that field-theoretic approaches to gravity over a flat background are more correct than attempts like loop theory, which is deeply rooted in the geometrical language of general relativity.

    It is not strange that experts such as M. Pavsic (author of the book “The Landscape of Theoretical Physics: A Global View; From Point Particles to the Brane World and Beyond, in Search of a Unifying Principle”) have praised this work, as reported in some news items:

    http://www.canonicalscience.org/en/publicationzone/canonicalsciencereports/20092.html

    http://www.geskka.com/articles/categories/Space-science/

    http://digg.com/d316f5a

  • http://vixra.org/abs/0907.0018 Peter Fred

    Sean writes:
    “But more generally, good scientists naturally have a strong suspicion of any claimed observational result that purports to overthrow an extremely well-established theory. That’s just common sense, not hidebound establishmentarianism; most such anomalies eventually go away.”

    What am I missing? Why am I alone in considering the flat rotation curves of galaxies as representing a serious anomaly? If dark matter is detected by some means other than gravitationally, then the anomaly of flat rotation curves should rightfully “go away”.

    I believe that a multi-team, multi-million dollar effort has been underway at least since 1988 looking for non-gravitational evidence for the dark matter. Even though the dark energy concept has only been around for ten years, it raises more serious theoretical difficulties than does the dark matter concept.

    So when do good scientists consider the fact of flat rotation curves and the fact of cosmic acceleration a serious anomaly?

    I have a hard time thinking that scientists are really “good scientists” if they do not hold the view, at this late stage, that the need for dark matter and dark energy represents, in some sense, a serious anomaly.

  • eric gisse

    “Second, recent works —also presented in conference PPC-08— point out the possibility that the discrepancy of 0.85% between the general relativity prediction and observation can be explained by a nonlinear field theory, which predicts extra radiation of 0.735% thanks to novel radiation mechanisms not available in general relativity.”

    This assertion has no support in the literature.

    Plus I greatly enjoy your presence here given your opinions about Carroll.

  • http://www.canonicalscience.org Juan R. González-Álvarez

    To Eric Gisse,

    Your ill-informed assertions about PPC-08 and your ad hominems were already replied to in sci.physics.research, until the moderators there blocked you from posting further in the thread:

    http://groups.google.com/group/sci.physics.research/msg/0bca9684bc5e0a3e

    http://groups.google.com/group/sci.physics.research/msg/bee924193a0a5b68

    http://groups.google.com/group/sci.physics.research/msg/b6c4269a42ddea97

    http://groups.google.com/group/sci.physics.research/msg/b2384376b4c5b02c

    (…)

  • http://www.canonicalscience.org Juan R. González-Álvarez

    To Peter Fred:

    I share the opinion of several of my colleagues that dark energy and dark matter are the aethers of the 21st century. We need a new theory eliminating both from physics.

    We already have a theory that explains the “rotation curves” with an accuracy not matched by dark matter theories:

    http://www.astro.umd.edu/~ssm/mond/fit_compare.html

    Moreover, the theory is predictive and its predictions have all been confirmed:

    http://www.astro.umd.edu/~ssm/mond/mondvsDM.html

    http://www.astro.umd.edu/~ssm/mond/mondpred.html

    http://www.astro.umd.edu/~ssm/mond/CMB1.html

    It seems that the theory continues to provide satisfactory predictions:

    http://arxiv.org/abs/0909.5184

    The goal here is to add relativistic corrections to this theory. One popular approach is reviewed next:

    http://arxiv.org/abs/0901.1524

    I advance an alternative model to TeVeS in the above-cited report (CSR:2009), with the advantage that it can also compute what cannot be obtained by any other available model or theory: we can obtain a_0 and its relation to the cosmological a_H, we can obtain the correct order of magnitude of the cosmological constant, we can obtain the cluster mass limit…

  • eric gisse

    Great, wrote up a nice fat comment then stupidly clicked a link to read something so I have to write it aaaaaaaaalllllll over again. Double the snark now!

    Article comments:

    The paper starts off by imposing a Newtonian perturbation on top of the usual FRW ansatz. I find the choice of eta to be, overall, quite interesting. The forcing of eta to be equal to one falls out through the boundary conditions that define the weak field limit, or in this case, perturbation theory on top of FRW.

    The quantity being considered is not the “curvature” of space-time, which is horrifically misleading, but rather the ratio of two potentials used to define perturbation theory on top of the FRW manifold which is typically used to model the large scale universe.

    I think the best way of understanding the quantity being considered is how well the boundary conditions we choose to apply to perturbation theory match observation, which is a whole lot LESS sexy than ‘a new challenge to Einstein’. {Sorry Sean :D }

    What I’m having difficulty with is getting an exact handle on what is being _measured_. We have the ISW effect, and that is sensitive to the time components of the metric a la Shapiro delay, as it quantifies that little bit of gravitational redshift in CMB photons as they traverse anisotropies on their way to Earth. But the ISW effect, to my knowledge, isn’t done on a per-galaxy basis. And the two individual surveys were simply, as far as I know, doing mass surveys of the sky in some certain region looking for redshifts of objects.

    How the hell this translates to a serious test of GR as claimed is something that I’ve been scratching my head about for a while now. I wonder how circular the data sets are, with the essential cosmic parameters derived from WMAP/2df being fed back into a test of WMAP/2df.

    I also wonder how much of a coincidence it is that the low-z objects end up favoring GR more than the higher-z objects. I further wonder if any effort was made to distinguish between freely traveling objects and objects gravitationally bound. Any test of the expansion theory is going to go to shit if you consider objects in the local group, which is a {loosely} bound system.

    Basically this doesn’t tell us jack beyond ‘WE REQUIRE MORE VESPENE GAS’, I mean , “DATA”.

    Peter Fred: “What am I missing? Why am I alone in considering the flat rotation curves of galaxies as representing a serious anomaly. If dark matter is detected by some means other than gravitationally , then the anomaly of flat rotation curves should rightfully ” go away”.”

    What you are missing are gravitational {macro,micro} lensing observations that directly couple to the {mass,energy} density present in a given volume of space. People who argue that dark matter doesn’t exist have to find creative excuses for why dark matter behaves as predicted in that respect.

    Juan: Thanks for being a cosmic jackass by importing an argument you lost onto a medium that has people who are capable of reading for comprehension. Readers interested in the state of the art on the galactic center should read through the overall thread. As I was reduced to explaining things using small words and pantomime to a child with a learning disability, the overall argument should be simple to follow for people who can read for comprehension.

    As for this stupid goddamn argument, this nonsense was dealt with 6 months ago.

    http://groups.google.com/group/sci.physics.research/msg/2c6d68195d99b0c8?dmode=source

    Taganov’s argument is, quite frankly, as full of shit as you are. Why is he, in 2009, citing literature from 1991 when there has been a factor of 3 reduction in error bars since then?

    Could it be because his argument falls to pieces if one reads, say, arXiv:astro-ph/0407149v1? I think so, because it becomes remarkably hard to claim a 0.6% _ERROR BAR_ as an excess (an exquisite failure in comprehension of basic error analysis, in which you are complicit by repeating the claim) when an article published 13 years later shows that the discrepancy is reduced to 0.13% +/- 0.21%.

    Class, since statistics seems to be under discussion to some degree, how many standard deviations away is a factor of 3 difference in a measurement when one realizes that an error bar represents 1 standard deviation centered upon the measurement? I’ll leave it as an exercise.

    And Juan, an extra special thank you for once again citing your many-year unpublished draft in an argument. As you have explicitly denied having done. Hurry the fuck up and publish it, as Sean Carroll will probably be highly interested in your opinions of his work in addition to your complete butchering of the subject. If for no other reason than because you invoke his name all the time.

    Hey Sean, since I know you’ll see this, I have a question. Have you heard about Juan’s rather interesting usage of your online lecture notes? It’s fuuuun to read.

  • http://www.canonicalscience.org Juan R. González-Álvarez

    To Eric Gisse,

    You are repeating the same unfair accusations against Taganov, the same incorrect factors and misguided error analysis, etc. that were already replied to in the sci.physics.research links in my message above (before you were finally blocked by the moderators in that thread, who rejected your further posts).

    The same moderators recently approved a post about the recent report CSR:20092

    http://groups.google.com/group/sci.physics.research/browse_thread/thread/e27983b2fae018ed#

    ignoring the ad hominems and vitriolic ‘evaluations’ of the report that you have been making in recent days in several places, now including this blog.

    I will only add that the analyses that you misattribute to Taganov were using Weisberg & Taylor (2002), not “literature from 1991” as you say.

    Also your arXiv:astro-ph/0407149v1 is the reference (Relativistic Binary Pulsar B1913+16: Thirty Years of Observations and Analysis by Weisberg and Taylor) given in

    http://www.canonicalscience.org/en/publicationzone/canonicalsciencereports/20092.html

    As can be easily checked at the bottom part of that page.

    A discussion of binary pulsars is given in page 13 of the report, which also includes quotations from Weisberg and Taylor on the issue.

  • eric gisse

    Taylor (2002) completely disagrees with Taganov’s claims, which might have something to do with why his works on the subject continue to be unpublished. Taylor (2002) and Taylor (2004) put the overall uncertainty in the change in period to be around 0.2%. The number 0.2% is, according to my calculations, a lot *smaller* than 0.7%.

    Now let us see if this test of reading for comprehension can be passed.

    As for your report, NOBODY CAN READ IT. It is password protected. And expecting people to pay you money to substantiate your arguments is high order stupidity.

  • http://www.canonicalscience.org Juan R. González-Álvarez

    To Eric Gisse,

    Since the unfair accusations against Taganov are the same, and since you remain confused about the ‘calculations’ in the same way, the corrections are evidently the same as those given to you in the following spr links:

    http://groups.google.com/group/sci.physics.research/msg/0bca9684bc5e0a3e

    http://groups.google.com/group/sci.physics.research/msg/bee924193a0a5b68

    http://groups.google.com/group/sci.physics.research/msg/b6c4269a42ddea97

    http://groups.google.com/group/sci.physics.research/msg/b2384376b4c5b02c

    (…)

    People can read the entire thread and see that you also submitted unfair accusations about top journals and other people, including an expert in black holes whom you accused of ignoring the last years’ observations in order to promote obscure agendas. Nasty enough; good thing the moderators blocked you!

    I will not reply to you further about this issue.

    My apologies to Sean Carroll and the rest of the readers for this episode with Eric!

  • eric gisse

    Reading for comprehension is an obscure agenda?

  • Pingback: Is General Relativity Wrong? | Good, Bad, and Bogus

  • Pingback: Rethinking relativity: Is time out of joint? – space – 21 October 2009 – New Scientist «

  • Pingback: ¿Se equivocó Einstein después de todo? | Maikelnai's blog

  • Pingback: GR : Is Time out of Frame? « The Abyss Of the Unknown

  • Jacques

    I am just an interested layman, reading these posts to try better to understand present-day science. What I want to know is – who the hell is Eric Gisse, and what is his problem, exactly? Was he dropped on his head when a baby?

  • Antonio A. Abad

    Crazy, but I have a theory that explains everything. To start, and to make things simple, I say Newton got it in reverse. Gravity is not a pulling action, it is a pushing action. The dark matter pushes everything together like a glue. Think of it like our atmosphere.

    If you start extending this theory, it can bring you to the big bang, and explain the common denominator of electricity, gravity, light, fire, heat, and cold. Furthermore it can show that time is not what we think of it but is rather a state of matter – constantly changing. Yes, one could go back in time or forward, but it is really impossible because time is always the “present” for everyone.

    I can explain further.

  • Garth A Barber

    The result is interesting, and it may be pertinent to mention that the value eta = 1/3 comes naturally from the 2002 version of Self Creation Cosmology,

    i.e. when the Robertson parameters alpha = 1 and gamma = 1/3.

    Though this would seem not to be the case for z < 1.

    http://arxiv.org/abs/gr-qc/0302088 (eq. 60).

    Also 'A New Self Creation Cosmology', Astrophysics and Space Science 282, 4, pp 683-731.

  • Dov Henis

    Cosmic Energy-Mass Evolution In A simple Understandable Format

    Deciphering Life’s Regulatory Code

    To : Robert P. Zinzen, EMBL Heidelberg

    Re : “Deciphering the regulatory code”

    A. From “EMBL scientists take new approach to predict gene expression”
    http://www.embl.de/aboutus/communication_outreach/media_relations/2009/091104_Heidelberg/index.html

    “What’s exciting for me is that this study shows that it is possible to predict when and where genes are expressed, which is a crucial first step towards understanding how regulatory networks drive development”

    B. Organism’s behaviour, its reactions to its environments, are “regulatory networks”?

    The above statement by Furlong, translated to 22nd century comprehension, amounts to:

    What’s exciting is that this study shows that it is possible to predict when and where organisms react to their environments, which is a crucial first step towards understanding how evolution proceeds.

    C. Please consider the following suggestions of the origin and nature of life and organisms, and of the origin and nature of cosmic and life evolution

    - Genes, Earth’s primal organisms, and all their take-off organisms – Life in general – are but one of the cosmic forms of mass, of constrained energy formats.

    - The on-going cosmic mass-to-energy reversion since the Big-Bang inflation is resisted by mass, this resistance being the archetype of selection for survival by all forms of mass, including life.

    - The mode of response of genes, Earth’s primal organisms, to the cultural feedback signals reaching them from their upper-stratum take-off organism is “replicate without change” or “replicate with change”. “Replicate with change” is selected in case of proven augmented energy constrainment by the new generation, this being “better survival”. This mode of Life’s normal evolution is the mode of energy-mass evolution universally.

    Suggesting for your consideration,

    Dov Henis
    (Comments From The 22nd Century)
    Updated Life’s Manifest May 2009
    http://www.the-scientist.com/community/posts/list/140/122.page#2321
    Implications Of E=Total[m(1 + D)]
    http://www.the-scientist.com/community/posts/list/180/122.page#3108
