A New CMB Anomaly?

By Sean Carroll | July 17, 2008 5:08 pm

One of the important features of the universe around us is that, on sufficiently large scales, it looks pretty much the same in every direction — “isotropy,” in cosmology lingo. There is no preferred direction to space, in which the universe would look different than in the perpendicular directions. The most compelling evidence for large-scale isotropy comes from the Cosmic Microwave Background (CMB), the leftover radiation from the Big Bang. It’s not perfectly isotropic, of course — there are tiny fluctuations in temperature, which are pretty important; they arise from fluctuations in the density, which grow under the influence of gravity into the galaxies and clusters we see today. Here they are, as measured by the WMAP satellite.

Nevertheless, there is a subtle way for the universe to break isotropy and have a preferred direction: if the tiny observed perturbations somehow have a different character in one direction than in others. The problem is, there are a lot of ways this could happen, and there is a huge amount of data involved with a map of the entire CMB sky. A tiny effect could be lurking there, and be hard to see; or we could see a hint of it, and it would be hard to be sure it wasn’t just a statistical fluke.

In fact, at least three such instances of apparent large-scale anisotropies have been claimed. One is the “axis of evil” — if you look at only the temperature fluctuations on the very largest scales, they seem to be concentrated in a certain plane on the sky. Another is the giant cold spot (or “non-Gaussianity,” if you want to sound like an expert) — the Southern hemisphere seems to have a suspiciously coherent blob of slightly lower than average CMB temperature. And then there is the lopsided universe — the total size of the fluctuations on one half of the sky seems to be slightly larger than on the other half.

All of these purported anomalies in the data, while interesting, are very far from being definitive. Although most people seem to agree that they are features of the data from WMAP, it’s hard to tell whether they are all just statistical flukes, or subtle imperfections in the satellite itself, or contamination by foregrounds (like our own galaxy), or real features of the universe.

Now we seem to have another such anomaly, in which the temperature fluctuations in the CMB aren’t distributed perfectly isotropically across the sky. It comes by way of a new paper by Nicolaas Groeneboom and Hans Kristian Eriksen:

Bayesian analysis of sparse anisotropic universe models and application to the 5-yr WMAP data

Sexy title, eh? Here is the upshot: Groeneboom and Eriksen looked for what experts would call a “quadrupole pattern of statistical anisotropy.” Similar to the lopsided universe effect, where the fluctuations seem to be larger on one side of the sky than the other, this is an “elongated universe” effect — fluctuations are larger along one axis (in both directions) as compared to the perpendicular plane. Here is a representation of the kind of effect we are talking about — not easy to make out, but the fluctuations are supposed to be a bit stronger near the red dots than in the strip in between them.

It’s not a very large signal — “3.8 sigma,” in the jargon of the trade, where 3 sigma basically means “begin to take seriously,” but you might want to get as high as 5 sigma before you say “there definitely seems to be something there.” However, the WMAP data come in different frequencies (V-band and W-band), and the effect seems to be there in both bands. Furthermore, you can look for the effect separately at large angular scales and at small angular scales, and you find it in both cases (with somewhat lower statistical significance, as you might expect). So it’s far from being a gold-plated discovery, but it doesn’t seem to be a complete fluke, either.
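To make the jargon concrete: an "n sigma" result corresponds to the probability that a Gaussian fluctuation at least that large would occur by chance. A minimal stdlib-only sketch of the conversion (two-sided tail probability; this is a standard formula, not anything from the paper):

```python
from math import erfc, sqrt

def two_sided_p(sigma):
    """Two-sided tail probability of a Gaussian at the given sigma level."""
    return erfc(sigma / sqrt(2))

for s in (3.0, 3.8, 5.0):
    print(f"{s} sigma -> p = {two_sided_p(s):.1e}")
```

So 3 sigma is roughly a 1-in-370 chance fluctuation, 3.8 sigma roughly 1-in-7000, and 5 sigma roughly 1-in-1.7 million — which is why the field only says "discovery" at the last of these.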

Remember, looking for any specific effect is quite a project — there is a lot of data, and the analysis involves manipulating huge matrices, and you have to worry about foregrounds and instrumental effects. So why were these nice folks looking for a power asymmetry along a preferred axis in the sky? Well, you might recall my paper with Lotty Ackerman and Mark Wise, described in the “Anatomy of a Paper” series of blog posts (I, II, III). We were interested in whether the (hypothetical) period of inflation in the early universe might have been anisotropic — expanding just a bit faster in one direction than in the others — and if so, how it would show up in the CMB. What we found was that the natural expectation was a power asymmetry along the preferred axis, and we gave a bunch of formulas by which observers could actually look for the effect. That is what Nicolaas and Hans Kristian did, with every expectation that they would establish an upper limit on the size of our predicted effect, which we had labelled g*. But instead, they found it! The data are saying that

$latex g_* = 0.15 \pm 0.039.$
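To get a feel for what this parameter means: in the Ackerman–Carroll–Wise model the primordial power spectrum is modulated by the angle between a mode’s wavevector and the preferred axis, schematically P(k) = P0 [1 + g* (k̂·n̂)²]. A toy sketch of that modulation, using the central value quoted above (illustrative only, not the authors’ code):

```python
def modulated_power(P0, g_star, cos_theta):
    """Direction-dependent primordial power, P = P0 * (1 + g* * (k_hat . n_hat)^2);
    cos_theta is the cosine of the angle between the wavevector and the preferred axis."""
    return P0 * (1.0 + g_star * cos_theta ** 2)

g = 0.15  # central value reported by Groeneboom & Eriksen
print(modulated_power(1.0, g, 1.0))  # along the preferred axis
print(modulated_power(1.0, g, 0.0))  # in the perpendicular plane
```

In other words, a g* of 0.15 means modes pointing along the axis carry about 15% more power than modes in the perpendicular plane — the “elongated universe” effect described above.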

So naturally, Lotty and Mark and I are brushing up on our Swedish in preparation for our upcoming invitations to Stockholm. Okay, not quite. In fact, it’s useful to be very clear about this, given the lessons that were (one hopes) learned in John’s series of posts about Higgs hunting. Namely: small, provocative “signals” such as this happen all the time. It would be completely irresponsible just to take every one of them at face value as telling you something profound about the universe. And the more surprising the result — and this one would be pretty darned surprising — the more skeptical and cautious we have every right to be.

So what are we supposed to think? Certainly not that these guys are just jokers who don’t know how to analyze CMB data; the truth couldn’t be more different. But analyzing data like this is really hard, and other groups will doubtless jump in and do their own analyses, as they should. It’s certainly possible that there is a small systematic effect in WMAP — “correlated noise” — rather than in the universe. The authors have considered this, of course, and it doesn’t seem to fit the finding very comfortably, but it’s a possibility. The very good news is that the kind of correlated noise one would expect from WMAP (given the pattern it used to scan across the sky) is completely different from what we would worry about for the upcoming Planck mission, scheduled to launch next year.

Or, of course, we could be learning something deep about the universe. Maybe even that inflation was anisotropic, as Lotty and Mark and I contemplated. Or, perhaps more plausibly, there is some single real effect in the universe that is conspiring to give us all of the tantalizing hints contained in the various anomalies listed above. We don’t know yet. That’s what makes it fun.

  • http://lablemminglounge.blogspot.com/ Lab Lemming

    Will Planck use similar sorts of detectors to those WMAP had, or will the instrumentation be as different as possible?

  • King Cynic

    A semi-serious question: How many papers suggesting weird things to look for in the CMB have been written over the years, vs. how many weird things have been found? Does this “3.8 sigma” result include a trials penalty for this effect? (Will a trillion Cosmic Variance bloggers typing out a trillion papers eventually type up the Theory of Everything through pure chance?)

  • http://www.geocities.com/aletawcox/ Sam Cox

    Congratulations in advance, Sean! Planck is highly likely to verify this effect to at least 5 sigma. Even if it doesn’t, we will probably learn something just as, or more, significant from your predictions and this careful field work.

  • Sili

    It’s nice to hear that Planck will be able to give some independent confirmation (and on that note, it would be awesome with a layman’s guide to the detectors on Planck, Wilkinson and Cobe to shine a little light on how independent the measures will be), but is there anything else we can look at?

    The CMB is truly impressive, yes, but to an outsider it feels a bit … ‘thin’. It’s as if much of modern cosmology hangs on just this one map … That’s not the case of course (I do know of the evidence for dark matter and energy), but it would be nifty if any of these effects might show up outside of the CMB.

  • Sili

    Oh, and why wait for an excuse to learn Swedish? One can never speak too many languages!

  • Tom Renbarger

    The Planck low-frequency instrument (LFI) will use similar detector technology to WMAP — high electron mobility transistors (HEMTs). This is the portion of Planck that has spectral overlap with WMAP. The high-frequency instrument (HFI) will use bolometers at its focal plane, which is a pretty different technology. The telescope designs are fairly similar for technical reasons.

  • manyoso

    I have a very basic question whenever I see these map data of the CMB. I’m sure it is probably simple astronomy, but here goes:

    Why is it an ellipse?

    Precisely what are we looking at that would make it elliptical? Is this a map of the sky (and if so, still… why elliptical) or in some way representative of the entire universe? How can an ellipse represent a map of the entire universe? When looking at this map what is our point of reference? Is this the view from earth system in some manner?

    I guess I’m looking for a big ‘you are here’ spot on the ‘map’ :)

  • http://www.astro.ucla.edu/~wright/cosmolog.htm Ned Wright

    Luckily arXiv.org keeps old versions, and WMAP posts to the arXiv on submission, so you can still look at section 8.5 of Spergel et al. 2006 (astro-ph/0603449v1) and see that the WMAP team looked for an effect like this in the 3-year data. We got Δχ^2 = 3.4 for ell_max = 1 and Δχ^2 = 8 for ell_max = 2, which are quite consistent with the expected improvement for random data with 3 or 8 new parameters. The %$#@&^! referee made us take it out.

    None of these large angular scale effects will be improved by Planck. WMAP is cosmic variance limited, and Planck does not have a good scan pattern, which will limit its performance at low ell.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Ned, this is not a low-ell effect; it’s (supposed to be) at every ell. The fact that Planck has a different scan pattern is the whole reason it will be a good test; the correlated noise will be of a completely different form. (If I understand correctly.)

    manyoso, it’s just a projection of a sphere (in this case, the sky). An ellipse is just convenient.

  • Hans Kristian Eriksen

    Hi Ned,

    I think perhaps you may be a bit confused here. You seem to be referring to the asymmetry feature, for which there indeed was an analysis in Spergel et al., with the results you quote above. However, this new effect is completely different from that, and has nothing to do with the asymmetry. The new effect is “cylinder symmetric”, with an overall quadrupolar pattern. See Figure 2 of our paper to get an intuitive feel for what the signal looks like — essentially, the signal is correlated along the plane normal to the preferred direction, and unchanged along the preferred direction. It’s purely an a_lm correlation effect, not a power effect. (Any specific direction in the model has identical amounts of power on the two hemispheres.) Note also, as Sean already pointed out, that this effect is *not* a low-l effect, but is seen independently in both l=2-100 and l=100-400.

    As far as Planck vs. WMAP goes, this *is* one case where Planck’s scanning strategy will be very useful. As you know, Planck scans on essentially great circles through the ecliptic poles, and these will lie almost perpendicular to the signature found here, not parallel. WMAP’s scanning, on the other hand, is more similar to the signature in question. Further, since the effect is seen on all l’s, Planck will increase the S/N greatly on smaller scales. Most definitely, Planck will do a much better job at measuring (or dismissing) this effect than WMAP.

    So, at the moment, this looks quite interesting — but, as we point out in the paper, caution is warranted with respect to correlated noise. We need proper 5-year noise simulations in order to assess this properly.

    Finally, as far as the Spergel et al. analysis goes, I think it’s safe to assume that the reason the referee asked to remove it was simply that the execution of the analysis presented there was flawed on so many levels, and this was demonstrated quickly by two other papers (see Gordon et al. 2007 and Eriksen et al. 2008). Three specific examples: 1) they neglected to marginalize over monopoles and dipoles; 2) they performed the analysis at far too low resolution (Nside=8, lmax=23), resulting in serious underestimation of the total significance, since the asymmetry is seen *at least* up to l=40; and 3) the degradation process from Nside=512 to Nside=8 was improperly executed, in that the resulting maps were not properly bandwidth limited, and this compromised the likelihood evaluation. A proper analysis, in which these points were corrected, showed that the asymmetry indeed *is* statistically significant (although marginally), even within the very conservative Bayesian evidence framework. Again, see Eriksen et al. 2008 (astro-ph/0701089) for full details.

    Anyway, even if one happens not to like the asymmetry effect, due to so-called “a posteriori” arguments, one can never claim that the current anisotropy detection is an “a posteriori” effect. In this case, theoreticians made a specific prediction, and then that very same signature was indeed found in the data. Statistically, the situation is quite clean — and fortunately, more data will make it even clearer.

    The main outstanding issue right now is correlated noise. And only a proper analysis of 5-year simulations can resolve this question.

  • cope

    Do the supposed anisotropies express themselves in any other kind of data (visible, radio, etc.)?

  • cecil kirksey

    As far as the Swedish comment goes, shouldn’t you be in line behind Guth et al., since your model depends on inflation? Maybe this year!!! Has there ever been a theoretical cosmologist winning the prize? According to Komatsu et al., WMAP data “confirm the inflationary mechanism”. Hmmm. Does this mean that alternatives to inflation are now rejected by the physics community?

    As a retired engineer (radar systems engineer) I find the WMAP instrument very interesting. And the calibration and removal of background data and potential biases very impressive.

    However, one thing bothers me concerning the drawing of conclusions about physical models and their parameters, particularly assigning confidence bounds: The CMB data is fixed. You only have one data set. Yes you can collect more data to improve the SNR but there is still one data set.

    I believe you have expressed a bias in favor of the multiverse idea in the past. If this idea is sound, would you not expect the CMB data to be just one sample from this huge 10^500 or whatever sample space? How can anyone realistically talk about confidence levels if this idea is correct?

    Another comment. It seems that all of the baseline data analysis assumes Gaussian statistics. If this assumption is not true, then drawing any conclusions regarding confidence levels is really suspect, correct?

    Just one last comment. At what confidence level must the data be before we can place indisputable acceptance in a physical model of the early universe?

    Thanks for your time and I hope to see the announcement in the fall.

  • Matt

    Man, I only claim to understand about 30% of this and your previous “anatomy of a paper” post, but it’s still damn fun to feel like we’re part of the process! Thanks so much for your updates, Sean!

  • Hans Kristian Eriksen

    Hi Cecil!

    I have two comments to your questions. First, yes, it is a “problem” that we only have one CMB sky. It would definitely be fun to have more :-) However, this isn’t really a big problem for the statistical treatment. In particular, the Bayesian framework is very well suited to handle these kinds of problems. In this language, what you want is the posterior, P(theta|data), where theta is some set of parameters of interest and data are your observations, which is *one* data set. Then, by Bayes theorem you can write

    P(theta|data) = P(data|theta) * P(theta) / P(data)

    In this expression, P(data|theta) is the so-called likelihood, and something you can evaluate for any set of parameters. P(theta) is a prior (which should capture what you already know about theta), and P(data) is the so-called Bayesian evidence, and doesn’t depend on theta at all.

    The best estimate of theta is the value for which P(theta|data) peaks. And this is something you can compute even for one data set — as we did in the present paper. But of course, with more independent universes the uncertainties would be smaller, and that’s always nice. But there’s no fundamental difference between one, two, ten or a thousand — the statistical treatment and interpretation is well defined in either case.
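Hans Kristian’s recipe — posterior ∝ likelihood × prior, normalized by the evidence, then read off the peak — can be made concrete with a toy grid computation. The data points, noise level, and flat prior below are invented for illustration; real CMB analyses do exactly this, just in a vastly larger parameter space:

```python
import math

# Toy grid evaluation of P(theta | data) for a single parameter theta
# and a single (hypothetical) data set.
data = [0.9, 1.3, 1.1, 0.8, 1.2]          # invented observations
sigma = 0.2                                # assumed known noise level
grid = [i * 0.01 for i in range(201)]      # candidate theta values in [0, 2]

def log_likelihood(theta):
    # Gaussian likelihood log P(data | theta)
    return sum(-0.5 * ((x - theta) / sigma) ** 2 for x in data)

posterior = [math.exp(log_likelihood(t)) for t in grid]  # flat prior P(theta)
norm = sum(posterior)                                    # plays the role of the evidence P(data)
posterior = [p / norm for p in posterior]

best = grid[posterior.index(max(posterior))]  # posterior peak = best estimate
print(best)  # sits at the sample mean, 1.06
```

Note that nothing here required more than one data set — the posterior is perfectly well defined for a single realization, just as the comment says; more data would simply narrow it.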

    Then, as far as the assumption of Gaussianity goes, you’re absolutely right that the quoted significances could be somewhat off if the sky is non-Gaussian. However, this would be a *much* bigger surprise — then one would *really* have a big result in hand! So assuming Gaussianity is the conservative approach — it’s the null-hypothesis, so to speak, and all other assumptions would be much more controversial.

  • Lawrence B. Crowell

    The anisotropy doubtlessly has a Legendre type of expansion. These are usually Gaussian, where many of these deviations have some measure of kurtosis — such as the “big hole” discovered a year ago.

    From the perspective of quantum physics if these results are real the impact is interesting. It might suggest something about deviations from usual quantum theory, which with general relativity might not be completely unexpected. These might be indicators of a back reaction by spacetime to quantum fields weakly coupled to spacetime during inflation.

    Lawrence B. Crowell

  • http://tyrannogenius.blogspot.com Neil B.

    The existence of “anomalies” that could have been due to random fluctuations (“cosmic variance”, indeed) just goes to show how slippery the whole idea and practice of “randomness” is. Unfortunately, we can never know whether a particular happening could have been caused by random factors or not. We can make up, by discretionary fiat, some standard like a 95% chance of being chance (I correctly framed it, just making the wording cute) or 99.9% or whatever, but there’s no actual demarcation and no way to tell. Probability can’t be falsified. For example, I could try to “falsify” the claim that a coin was fair by asking how it could possibly come up 1,000 heads in a row, but that could (and eventually would) happen.

    All that leads to weird paradoxes of course. For example, IDers could be quite right that there’s only a 1:10^400 chance of atoms coming together in the right way to create life, but in an infinite universe (BTW, is it?) then in every 10^397 or so cubic light years, that would happen anyway. We’d be it, left to wonder how it could be. That’s why those arguments against “life being random” may not be any good. (But, not to be confused with claims of varying physical constants.) Who knows, or can know? Max Tegmark likes to play with stuff like that.

    There is however a fundamental issue about definition of physical laws: suppose there are many many universes, but they have “the same laws as ours.” Well, in some of them Co60 will decay on average after 22 days instead of about 5 years (monkeys on the typewriter: sauce for the goose …) So, are the laws of physics really “different” there, making a contradiction, or ambiguous, to the extent laws literally are given in terms of empirical results? What if the scientists there used theory to calculate that it “should be” a 5 year half-life, that would be empirically “false” yet they’d be right in principle. Just food for thought.

  • Jason Dick


    If what you said were true, then we couldn’t ever be confident about anything. Any experimental results that we ever obtain are always statistical in nature: there’s always a probability that we’re wrong, either due to statistical or systematic errors. The best that we can do, and what is done on any good analysis, is to place statistical limits upon any such results.

    We can then simply approach the system in question with a Bayesian approach and get some degree of confidence as to what’s going on. With your 1000 coin flips example, for instance, we would compute it as follows. Let’s imagine that we’ve just flipped a coin 1000 times, and come up with 1000 heads. The probability of this happening if the coin is a fair coin is around 1/10^300. And if it isn’t a fair coin? Well, we can’t say exactly, as there are many possible ways for the coin to be unfair, and each will come in with different probabilities. But, due to the large numbers at hand, anything but a completely unfair coin is highly unlikely to ever make it to 1000 heads in a row.

    So, then, we just have to ask the question: what is the prior probability we place on the coin being fair? Upon it only being capable of showing up heads? If, for example, we examine the coin and verify that it has both a heads side and a tails side, we gain confidence that it’s a fair coin and lose confidence that it’s unfair. If we measure its mass distribution and determine that it’s unlikely to prefer to land on one side or the other, then we again gain confidence that it’s a fair coin.

    All that said, because of the probabilities involved, it becomes exceedingly unlikely that we would ever come across a situation where all of the tests indicate that the coin should be fair, and then find 1000 flips in a row that result in heads. The chance of that happening is just so astronomically low as to not bother about.

  • cecil kirksey

    I think Neil was hinting (maybe more than hinting) at the issue I was trying to raise. It becomes difficult to evaluate in any meaningful manner the statement “95% confidence in such and such” if in fact there is a possibility of many realizations of our universe. Since there are no known priors for the many parameters that could define the universe or multiverse, it becomes difficult to interpret confidence. That is why I was specifically asking how the cosmological community accepts the idea of confidence levels when evaluating one theory against another or defining “new physics”.

    BTW, in radar detection theory Bayesian models are used frequently, but only when realistic priors are available; otherwise one ends up arguing about assumptions. Great theory if you have the data.

  • Jason Dick


    There are lots of ways to put priors on different models even in ignorance, though. Typically the best priors to use are those based upon Occam’s Razor: downweight theories that have more parameters. There are different ways of doing this, but the basic idea is just that we need greater statistical significance to gain confidence in a theory that has more parameters.

    In this case, though, I think the only interesting question is whether or not the correlated noise has something to do with it. Hans and collaborators have already demonstrated that correlated noise can replicate the effect, so it just remains to test the correlated noise with the WMAP analysis. It is also worrying that the apparent axis of this effect appears to coincide quite closely with the poles of the scanning strategy. The statistics are solid; the systematics need to be understood better.

  • jack brennen

    In Sean’s picture there, are the red dots near the NEP and SEP?

    (I couldn’t find this http://map.gsfc.nasa.gov/media/ContentMedia/990095b.jpg map in Galactic coords.)

    Is this what you meant Sean, by “correlated noise”, that the effect is correlated with the lowest noise points on the map? Or are you referring to a different noise correlation?

  • cecil kirksey

    My question still is: At what confidence level is the cosmological community willing to assume new physics or reject one theoretical model over another? My basic concern with cosmological physics is that there is only one universe, so no experiments can be duplicated; only measurements can be repeated by others. But in the case of the CMB there is only one data set, period. Trying to decide what is a statistical variation as opposed to a real effect may not be objective.

  • http://tyrannogenius.blogspot.com Neil B.

    (BTW any comments from any cognoscenti are appreciated.)

    Jason Dick at 9:12 pm:


    If what you said were true, then we couldn’t ever be confident about anything. Any experimental results that we ever obtain are always statistical in nature: there’s always a probability that we’re wrong, either due to statistical or systematic errors. The best that we can do, and what is done on any good analysis, is to place statistical limits upon any such results.

    Well Jason, you’ve formally contradicted yourself but confirmed exactly what I said before – you just don’t realize that I am working the “matter of principle” and you are referencing the matter of degree (talking past each other, not apparently a direct disagreement). First, I am right and you unwittingly acknowledge it: we can’t ever be sure; it’s a matter of degree. Sure, the chance of there being such a long run is tiny, but in degree, not principle, and that makes it formally impossible to falsify in Popperian terms. We can be “confident” (in the loose informal sense) but not certain, nor can we even define a category boundary of the “reasonably certain” logical type, because as I said, we make judgment calls about how unlikely something is along a continuum and pick arbitrary pigeonholes thereby.

    But even then, in an infinite universe/s there will be regions or sequences of grotesquely improbable events, and the problems I raised are germane. Or, is “probability” even possible to define in an infinite universe at all, given the incommensurable nature of relative proportions in infinite sets (i.e., the Hilbert Hotel problem etc.)? But I find it odd that I thus shouldn’t worry about the risk of not wearing a seat belt etc. because of the boundary condition that the universe is infinite, and thus contains infinite copies of me/similar wearing or not wearing seat belts, disallowing any finite frequentist mass comparison – yet if it had a volume of, say, 10^30,000 light years, this would all be conventionally meaningful instead. I bring this up partly since some critics use infinite statistical measure problems to fend off some of my arguments about the chance of laws and universe behavior having such and such conveniently anthropic or even predictable form etc.

    (REM also: those problems occur even in a very, very huge yet finite universe, as long as there’s plenty of space for extremely odd things, like unexpected discernible patterns of radioactive decay, to be likely to occur.)

  • http://blog.jfitzsimons.org Joe Fitzsimons

    Would COBE data be sufficiently detailed to show this effect? If so, surely it would make sense to run a similar analysis on that, no?

  • Hans Kristian Eriksen


    No, unfortunately, COBE has much too low sensitivity and angular resolution to be relevant for this analysis. The effect is just about starting to become visible when considering angular scales down to ~2 degrees, and to get strong results, one needs ~0.5 degrees. COBE, on the other hand, had 7 degrees resolution, and much too high noise. So there don’t seem to be many alternatives around besides waiting for Planck, really, although it’s possible that galaxy catalogs like SDSS or 2dF could be relevant.

  • http://blog.jfitzsimons.org Joe Fitzsimons

    Well, that’s certainly unfortunate. Thanks for answering my question.

  • cecil kirksey

    One last question. If you incorporated Sean’s model into the basic WMAP data analysis, and estimated the standard model parameters as well as the three you used for Sean’s model, what effect do you think it would have on the 2008 baseline parameter values and their error estimates (confidence levels)?

  • http://www.irio.co.uk Nicolaas Groeneboom


    If we incorporated this model into, say, CosmoMC, we would most likely observe very little change in the “standard” model parameters. This is because the only parameter affecting the angular power spectrum is the anisotropy amplitude g*, and this would only alter the overall amplitude of the angular power spectrum (sigma_8, if you want). The anisotropy direction itself would not affect the angular power spectrum, as that only contains isotropic contributions, and hence would not shift any of the remaining standard parameters (as all codes/theories are usually based on an isotropic theory). Moreover, in order to make the code consistent, we “re-scaled” the anisotropy amplitude g* such that it is no longer degenerate with the amplitude of the power spectrum.


  • http://celsetialmechanician.org Celestial mechanician

    CMB 101: what is the wavelength or frequency of the CMB photons? Are they line spectra, like from individual atoms, or continuous spectra with a median, mean and mode, like molecular spectra?

  • Christopher Hirata

    Celestial mechanician, the CMB photons have a continuous distribution of wavelengths (a blackbody, to be specific). In accordance with Wien’s law, the peak of the distribution is at lambda = 1 mm because the temperature of the CMB is 3 K, but with a broad tail in both directions (especially toward longer wavelengths). Indeed, one of the strengths of WMAP is that it can measure the CMB anisotropy at a range of wavelengths (3–13 mm), which helps to distinguish which signal is CMB and which isn’t. The Eriksen et al. analysis was performed at both 3 and 5 mm (“W” and “V” bands respectively, in microwave-ese).
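The Wien’s-law figure quoted here is easy to check; using the measured CMB temperature of 2.725 K:

```python
# Wien's displacement law: lambda_peak = b / T
b = 2.898e-3    # Wien's displacement constant, in m*K
T_cmb = 2.725   # measured CMB temperature, in kelvin
lam_peak = b / T_cmb
print(f"peak wavelength ~ {lam_peak * 1e3:.2f} mm")  # ~1.06 mm
```

So the blackbody peak lands at just over a millimeter, consistent with the "lambda = 1 mm" quoted above.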

  • Christopher Hirata

    Regarding the correlated noise: A common way to measure power spectra, if you’re unsure about the correlated noise in your data, is to do a cross-power spectrum between different maps (VxW, V1xV2, etc.) or between different years of data, in which the noise model is only needed to estimate the error bars and optimize the estimator, and one is not biased by an incorrect noise model. (The WMAP first-year analysis did this.) It seems like the same type of procedure would work here. If one looks at the general quadrupolar anisotropy in the primordial power spectrum, it is described by a traceless-symmetric tensor, or 5 numbers g_{2M} (M=-2..2). (I realize the Eriksen et al. analysis included only the cylindrically-symmetric mode and allowed its direction to vary, so they considered a 3D subspace of the full 5D space of possible quadrupole anisotropies; but nevertheless an analysis that measures all g_{2M}’s should see this anomaly if it’s real.) So if one looks at the covariance matrix of the a_{lm}’s, they now have off-diagonal as well as m-dependent entries proportional to the g_{2M}’s. (There are some cosmology-dependent coefficients in front of g_{2M}, but if the sky is really statistically isotropic then small errors in these coefficients won’t cause spurious detections, as long as we estimate g_{2M} simultaneously with the C_l’s.) WMAP easily has enough signal/noise for these tests, and if the anomaly survives a cross-power analysis then it’s not correlated noise.
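The noise-cancellation logic behind the cross-power trick can be seen in a stripped-down toy version: two maps of the same signal with independent noise have an unbiased cross-variance, while each auto-variance carries the noise power as a bias. (All numbers below are invented; a real analysis does this multipole by multipole with the a_lm’s, not with one-point variances.)

```python
import random

random.seed(1)

n = 200_000
sig, noise = 1.0, 0.7
signal = [random.gauss(0.0, sig) for _ in range(n)]
map1 = [s + random.gauss(0.0, noise) for s in signal]   # e.g. a V-band map
map2 = [s + random.gauss(0.0, noise) for s in signal]   # e.g. a W-band map

auto = sum(x * x for x in map1) / n                   # biased: ~ sig^2 + noise^2 = 1.49
cross = sum(x * y for x, y in zip(map1, map2)) / n    # unbiased: ~ sig^2 = 1.00
print(f"auto = {auto:.2f}, cross = {cross:.2f}")
```

Because the two noise realizations are independent, their product averages to zero and only the common signal survives in the cross term — which is why a signal that persists in VxW is hard to blame on instrument noise.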

    That said, of the possible systematics that could produce an asymmetry in the power spectrum, the first one on my list would be beam ellipticity because WMAP does not hit each pixel at a uniform distribution of angles of attack. (Same will be true, more so, for Planck.) The cross-power analysis won’t solve this problem, ultimately one needs to simulate it using the known beam maps and see what happens.

    Regarding the search in large scale structure: Anthony Pullen (here at Caltech) is working on it, so stay tuned. I’m sure there will also be a lot more poring over WMAP and soon Planck, and probably other LSS data sets shortly after that. I for one find the situation exciting. A few years ago I went to conferences where people presented “explanations” of the low-multipole anomalies that made no predictions that I could hope to see verified at many sigmas in my lifetime. Well, with this particular anomaly I hold out hope that in 5-10 years it will either have gone away or be seen at many sigma in both CMB and LSS …

  • Hans Kristian Eriksen

    Hi Chris!

    A few comments on your posts:

    1) The problem with a cross-correlation analysis (between, say, V and W) for this particular analysis is that it’s very difficult to handle the signal covariance matrix on a cut sky. The only reason it’s sparse in our analysis is that we’re using the Gibbs sampling algorithm, which essentially “fills in” the cut region. I don’t see many alternatives to this, really, if one wants to go to high l’s. However, the Gibbs sampler is an exact likelihood approach, and as such intrinsically an auto-correlation method; it’s not straightforward to get rid of these auto-correlations while still having a correct likelihood. Of course, the problem becomes smaller the more independent bands you have, but it’s always going to be there to some extent. In principle it may be possible to construct some “pseudo-Cl” approach for this particular model, but right now I don’t see how. For the moment, I think the best approach is simply to analyse realistic WMAP5 noise simulations and see if something similar pops up.

    2) Personally, I don’t think asymmetric beams are relevant for this result. The model signature has a substantial (as in several degrees, I’d say) correlation length along the plane normal to the preferred axis, and even though the WMAP beams are somewhat asymmetric, they’re not *that* asymmetric… 😉 Correlated noise is definitely my biggest concern here.


    3) Please note that the paper is “Groeneboom and Eriksen”, not “Eriksen et al.”… :-)

    Finally, I’m really looking forward to seeing what comes out of the LSS analyses! But do you think the current experiments are deep/wide enough to really get a good handle on this effect? Or do we need to wait for the next generation of surveys?
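
    The “fill in” role of Gibbs sampling mentioned in point 1) can be caricatured with a toy two-variable example (my sketch, not the actual CMB sampler): alternately drawing each unknown from its conditional distribution given the rest recovers the joint distribution, which is how masked pixels can be sampled consistently with the observed sky.

```python
import numpy as np

# Toy Gibbs sampler for a standard bivariate Gaussian with correlation
# rho: the two variables loosely stand in for (signal inside the cut,
# signal outside the cut).  Each step draws one variable conditioned on
# the current value of the other.
rng = np.random.default_rng(1)
rho = 0.8                    # correlation between the two variables
x, y = 0.0, 0.0
samples = []
for _ in range(20000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw y | x
    samples.append((x, y))

xs, ys = np.array(samples[1000:]).T               # drop burn-in
print(f"empirical correlation: {np.corrcoef(xs, ys)[0, 1]:.2f}")  # ~0.8
```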


  • Christopher Hirata

    Hans Kristian,

    > Please note that the paper is “Groeneboom and Eriksen”, not “Eriksen et al.”…

    Oops, my mix-up! (literally)

    > But do you think the current experiments are deep/wide enough to really get a good handle on this effect?

    At the Fisher matrix level, the SDSS photometric samples should be able to detect g* if it’s really 0.15. At the back-of-the-envelope level, since there are Fourier modes spanning a wide range of directions, you need ~1/g*^2 ~ 44 linear modes to measure it. The factors of order unity are unfortunately not so kind: the Fisher matrix has a factor of 2/45 in it (because the variations in the power spectrum as a function of angle are small), so in practice you need more like 10^4 modes. BUT: no promises until the systematics tests are all in :)
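
    The naive part of that mode count can be reproduced in two lines (a sketch of the back-of-the-envelope estimate only, leaving out the geometric Fisher-matrix prefactor discussed above):

```python
# To detect a fractional power modulation of amplitude g* at S/N ~ 1,
# you need roughly 1/g*^2 independent Fourier modes sampling a wide
# range of directions.
g_star = 0.15
n_modes = 1.0 / g_star**2
print(f"~{n_modes:.0f} linear modes")  # ~44
```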


  • Jon Hanford

    I heartily agree with Sean that a ‘3.8 sigma’ signal is certainly nothing to crow about, and a more robust and unambiguous detection is needed before we look at this theory in more detail. BTW Sean, great article in Scientific American. I hope it made many readers give some serious thought to this ‘spontaneous inflation’ theory amongst the sea of other cosmological theories now in vogue. I think you’re on the right track, anyway.

  • Pingback: Those tiny differences

  • Pingback: A Special Place in the Universe | Cosmic Variance

  • Pingback: A Lop-sided Universe? « In the Dark

  • Pingback: A New Challenge to Einstein? | Cosmic Variance | Discover Magazine



Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. He can be reached at carroll [at] cosmicvariance.com.

