# A New Challenge to Einstein?

General relativity, Einstein’s theory of gravity and spacetime, has been pretty successful over the years. It’s passed numerous tests in the Solar System, scored a Nobel-worthy victory with the binary pulsar, and gets the right answer even when extrapolated back to the first one second after the Big Bang. But no scientific theory is sacred. Even though GR is both aesthetically compelling and an unquestioned empirical success, it’s our job as scientists to keep probing it in different ways. Especially when it comes to astrophysics, where we need dark matter and dark energy to explain what we see, it makes sense to put Einstein to the most stringent tests we can devise.

So here is a new such test, courtesy of Rachel Bean of Cornell. She combines a suite of cosmological data, especially measurements of weak gravitational lensing from the Hubble Space Telescope, to see whether GR correctly describes the behavior of large-scale structure in the universe. And the surprising thing is — it doesn’t. At the 98% confidence level, Rachel finds that general relativity is inconsistent with the data. I’m not sure why we haven’t been reading about this in the science media or even on other blogs — it’s certainly a newsworthy result. Admittedly, the smart money is still that there is some tricky thing that hasn’t yet been noticed and Einstein will eventually emerge the victor, but this is serious work by a respected cosmologist. Either the result is wrong, and we should be working hard to find out why, or it’s right, and we’re on the cusp of a revolution.

Here is the abstract:

A weak lensing detection of a deviation from General Relativity on cosmic scales

Authors: Rachel Bean

Abstract: We consider evidence for deviations from General Relativity (GR) in the growth of large scale structure, using two parameters, γ and η, to quantify the modification. We consider the Integrated Sachs-Wolfe effect (ISW) in the WMAP Cosmic Microwave Background data, the cross-correlation between the ISW and galaxy distributions from 2MASS and SDSS surveys, and the weak lensing shear field from the Hubble Space Telescope’s COSMOS survey along with measurements of the cosmic expansion history. We find current data, driven by the COSMOS weak lensing measurements, disfavors GR on cosmic scales, preferring η < 1 at 1 < z < 2 at the 98% significance level.

Let’s see if we can’t unpack the basic idea. The real problem in testing GR in cosmology is that *any* particular kind of spacetime curvature can be a solution to Einstein’s theory — all you need are the right sources of matter and energy. So in order to do a real test, you need to have some confidence that you understand what is creating the gravitational field — in the Solar System it’s the Sun and planets, in the binary pulsar it’s two neutron stars, and in the early universe it’s radiation. For large-scale structure things are a bit less clear — there’s ordinary matter, and dark matter, and of course dark energy.

Nevertheless, even though there are some things we don’t know about dark matter and dark energy, there are some things we think we do know. One of those things is that they don’t create any “anisotropic stress” — basically, a force that pulls different sides of things in different directions. Given that extremely reasonable assumption, GR makes a powerful prediction: there is a certain amount of curvature associated with *space*, and a certain amount of curvature associated with *time*, and those two things should be equal. (The space-space and time-time potentials φ and ψ of Newtonian gauge, for you experts.) The curvature of space tells you how meter sticks are distorted relative to each other as they move from place to place, while the curvature of time tells you how clocks at different locations seem to run at different rates. The prediction that they are equal is testable: you can try to measure both forms of curvature and divide one by the other. The parameter η in the abstract is the ratio of the space curvature to the time curvature; if GR is right, the answer should be one.
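For the experts’ parenthetical above, the setup can be written schematically; sign and factor conventions vary from reference to reference, so take this as one common choice rather than the paper’s exact notation:

```latex
% Perturbed FRW metric in Newtonian (conformal-Newtonian) gauge,
% with \psi the time-time ("curvature of time") potential and
% \phi the space-space ("curvature of space") potential:
ds^2 = -(1 + 2\psi)\, dt^2 + a^2(t)\,(1 - 2\phi)\,\delta_{ij}\, dx^i dx^j

% With no anisotropic stress, Einstein's equations force the two
% potentials to agree, so the ratio being tested is
\eta \equiv \frac{\phi}{\psi} = 1 \quad \text{(GR prediction)}
```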

There is a straightforward way, in principle, to measure these two types of curvature. A slowly-moving object (like a planet moving around the Sun) is influenced by the curvature of time, but not by the curvature of space. (That sounds backwards, but keep in mind that “slowly-moving” is equivalent to “moves more through time than through space,” so the curvature of time is more important.) But light, which moves as fast as anything can, is pushed around equally by the two types of curvature. So all you have to do is, for example, compare the gravitational field felt by slowly-moving objects to that felt by a passing light ray. GR predicts that they should, in a well-defined sense, be the same.
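In the weak-field language above, this comparison looks roughly like the following (again schematic, with conventions that differ between textbooks):

```latex
% A slow particle responds only to the time-time potential \psi:
\ddot{x}^i \simeq -\partial_i \psi

% A light ray is deflected by the sum of the two potentials:
\hat{\alpha} \simeq \int \nabla_{\!\perp} (\phi + \psi)\, dl

% Comparing masses inferred from slow tracers (which feel \psi)
% with masses inferred from lensing (which feels \phi + \psi)
% therefore tests whether \phi = \psi, i.e. whether \eta = 1.
```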

We’ve done this in the Solar System, of course, and everything is fine. But it’s always possible that some deviation from Einstein shows up at much larger distance and weaker gravitational fields than we have access to in our local neighborhood. That’s basically what Rachel’s paper does, considering different measures of the statistical properties of large-scale structure and comparing them to the predictions of a phenomenological model of the gravitational field. A crucial role is played by gravitational lensing, since that’s where the deflection of light comes in.

And here is the answer: the likelihood, given the data, as a function of 1/η, the ratio of the time curvature to the space curvature. The GR prediction is at 1, but the data show a pronounced peak between 3 and 4, and strongly disfavor the GR prediction. If both the data and the analysis are okay, there would be less than a 2% chance of obtaining this result if GR were correct. Not as good as 0.01%, but still pretty good.
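To put those percentages in rough context, here is a minimal Python sketch (assuming, purely for illustration, a Gaussian likelihood) that converts a two-sided confidence level into the equivalent number of standard deviations:

```python
from statistics import NormalDist

def confidence_to_sigma(conf):
    """Equivalent Gaussian 'sigma' for a two-sided confidence level.

    For a two-sided interval, conf = 2*Phi(sigma) - 1, so
    sigma = Phi^{-1}((1 + conf) / 2).
    """
    return NormalDist().inv_cdf((1.0 + conf) / 2.0)

print(round(confidence_to_sigma(0.98), 2))    # 98% is about 2.33 sigma
print(round(confidence_to_sigma(0.9999), 2))  # 99.99% is about 3.89 sigma
```

So a 98% result is a bit over two sigma: interesting, but well short of the significance anyone would demand before declaring Einstein wrong.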

So what are we supposed to make of this? Don’t get me wrong: I’m not ready to bet against Einstein, at least not yet. Mostly my pro-Einstein prejudice comes from long experience trying to come up with alternative theories of gravity that are simultaneously logically sensible and observationally consistent; it’s just very hard to do. But more generally, good scientists naturally have a strong suspicion of any claimed observational result that purports to overthrow an extremely well-established theory. That’s just common sense, not hidebound establishmentarianism; most such anomalies eventually go away.

But that doesn’t mean that you *ignore* anomalies; you just treat them with caution. In this case, there could be an unrecognized systematic error in the data set, or a subtle error in the analysis. Given 1:1 odds, that’s certainly where the smart money would bet right now. It’s also possible that the fault lies with dark matter or dark energy, not with gravity — but it’s hard to see how that could work, to be honest. Happily, it’s an empirical question — more data and more analysis will either reinforce the result, or make it go away. After all, some anomalies turn out to be frighteningly real. This one is worth taking seriously, to say the least.