WMAP results — cosmology makes sense!

By Sean Carroll | March 16, 2006 3:42 pm

I’ll follow Mark’s suggestion and fill in a bit about the new WMAP results. The WMAP satellite has been measuring temperature anisotropies and polarization signals from the cosmic microwave background, and the team has finally finished analyzing the data collected in its second and third years of running. (For a brief explanation of what the microwave background is, see the cosmology primer.) I just got back from a nice discussion led by Hiranya Peiris, who is a member of the WMAP team, and I can quickly summarize the major points as I see them.

WMAP spectrum

  • Here is the power spectrum: amount of anisotropy as a function of angular scale (really multipole moment l), with large scales on the left and smaller scales on the right. The major difference between this and the first-year release is that several points that used to not really fit the theoretical curve are now, with more data and better analysis, in excellent agreement with the predictions of the conventional LambdaCDM model. That’s a universe that is spatially flat and made of baryons, cold dark matter, and dark energy.
  • In particular, the octupole moment (l=3) is now in much better agreement than it used to be. The quadrupole moment (l=2), which is the largest scale on which you can make an observation (since a dipole anisotropy is inextricably mixed up with the Doppler effect from our motion through space), is still anomalously low.
  • The best-fit universe has approximately 4% baryons, 22% dark matter, and 74% dark energy, once you combine WMAP with data from other sources. The matter density is a tiny bit low, although including other data from weak lensing surveys brings it up closer to 30% total. All in all, nice consistency with what we already thought.
  • Perhaps the most intriguing result is that the scalar spectral index n is 0.95 +- 0.02. This tells you the amplitude of fluctuations as a function of scale; if n=1, the amplitude is the same on all scales. Slightly less than one means that there is slightly less power on smaller scales. The reason why this is intriguing is that, according to inflation, it’s quite likely that n is not exactly 1. Although we don’t have any strong competitors to inflation as a theory of initial conditions, the successful predictions of inflation have to date been somewhat “vanilla” — a flat universe, a flat perturbation spectrum. This expected deviation from perfect scale-free behavior is exactly what you would expect if inflation were true. The statistical significance isn’t what it could be quite yet, but it’s an encouraging sign.
  • A bonus, as explained to me by Risa: lower power on small scales (as implied by n<1) helps explain some of the problems with galaxies on small scales. If the primordial power is less, you expect fewer satellites and lower concentrations, which is what we actually observe.
  • You need some dark energy to fit the data, unless you think that the Hubble constant is 30 km/sec/Mpc (it’s really 72 +- 4) and the matter density parameter is 1.3 (it’s really 0.3). Yet more proof that dark energy is really there.
  • The dark energy equation-of-state parameter w is a tiny bit greater than -1 with WMAP alone, but almost exactly -1 when other data are included. Still, the error bars are something like 0.1 at one sigma, so there is room for improvement there.
  • One interesting result from the 1st-year data is that reionization — in which hydrogen becomes ionized when the first stars in the universe light up — was early, and the corresponding optical depth was large. It looks like this effect has lessened in the new data, but I’m not really an expert.
  • A lot of work went into understanding the polarization signals, which are dominated by stuff in our galaxy. WMAP detects polarization from the CMB itself, but so far it’s the kind you would expect to see being induced by the perturbations in density. There is another kind of polarization (“B-mode” rather than “E-mode”) which would be induced by gravitational waves produced by inflation. This signal is not yet seen, but it’s not really a surprise; the B-mode polarization is expected to be very small, and a lot of effort is going into designing clever new experiments that may someday detect it. In the meantime, WMAP puts some limits on how big the B-modes can possibly be, which do provide some constraints on inflationary models.
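
Two of the quantities above are easy to play with numerically. This is a hedged sketch, not anything from the WMAP pipeline: the pivot scale `k0` and the sample wavenumbers are illustrative choices, and the multipole-to-angle rule is only the usual rough correspondence.

```python
# Rough correspondence between multipole l and angular scale, theta ~ 180/l degrees:
# the quadrupole (l=2) probes ~90-degree scales; the first acoustic peak (l~220) ~1 degree.
for l in (2, 3, 220):
    print(f"l = {l:3d}  ->  ~{180.0 / l:.1f} degrees")

# Primordial power spectrum P(k) proportional to (k/k0)^(n-1).
# n = 1 is exactly scale-invariant; n = 0.95 gives slightly less power on small scales.
def power_ratio(k, k0=0.002, n=0.95):
    """Power at wavenumber k relative to a scale-invariant (n = 1) spectrum.
    k0 is an illustrative pivot (in Mpc^-1), not necessarily the WMAP convention."""
    return (k / k0) ** (n - 1.0)

# A scale 1000x smaller than the pivot ends up with roughly 30% less primordial power:
print(power_ratio(2.0))
```

The point of the second half is just that a 5% tilt in the exponent compounds into a sizable power deficit across the three decades of scale between the pivot and small scales.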

Overall — our picture of the universe is hanging together. In 1998, when supernova studies first found evidence for the dark energy and the LambdaCDM model became the concordance cosmology, Science magazine declared it the “Breakthrough of the Year.” In 2003, when the first-year WMAP results verified that this model was on the right track, it was declared the breakthrough of the year again! Just because we hadn’t made a mistake the first time. I doubt that the third-year results will get this honor yet another time. But it’s nice to know that the overall paradigm is a comfortable fit to the universe we observe.

The reason why verifying a successful model is such a big deal is that the model itself — LambdaCDM with inflationary perturbations — is such an incredible extrapolation from everyday experience into the far reaches of space and time. When we’re talking about inflation, we’re dealing with the first 10^-35 seconds in the history of the universe. When we speak about dark matter and dark energy, we’re dealing with substances that are completely outside the very successful Standard Model of particle physics. These are dramatic ideas that need to be tested over and over again, and we’re going to keep looking for chinks in their armor until we’re satisfied beyond any reasonable doubt that we’re on the right track.

The next steps will involve both observations and better theories. Is n really less than 1? Is there any variation of n as a function of scale? Are there non-Gaussian features in the CMB? Is the dark energy varying? Are there tensor perturbations from gravitational waves produced during inflation? What caused inflation, and what are the dark matter and dark energy?

Stay tuned!

More discussion by Steinn Sigurðsson (and here), Phil Plait, Jacques Distler, CosmoCoffee. In the New York Times, Dennis Overbye invokes the name of my previous blog. More pithy quotes at Nature online and Sky & Telescope.

  • Pingback: Not Even Wrong » Blog Archive » Three-year WMAP Data Now Out

  • ghazal

    Well as people had guessed, it seems the new result for tau is much lower than what was proposed before!

  • http://eskesthai.blogspot.com/2006/03/if-its-not-soccer-ball-what-is-it.html Plato

    Okay, like I did in Mark’s other post. Maybe a trackback would be appropriate, I dunno?

    Dreamer, who sits at desk, looking out to window on universe, while a teacher gives critical evidence amazed.:) Sorry.

    As I was daydreaming….

    As a layman, such visualization, given the evidence of this map, is there not a way of seeing, that brings more perspective to all that data, or should we just stop and accept the picture as a 2d model of a 5d space? :)

    I am thinking of the “polarization points,” to the beginning times (red), and seeing in this way, tunnels going all over the place, in a “boundary” condition (edge of the universe). I might have used the term wrong? To see, the overall dynamics of the universe “itself” doing a complete rotation?

  • http://atdotde.blogspot.com Robert

    There seems to be something wrong with your description of the large scale structure: l=1 is the dipole, 2 the quadrupole and 3 the octupole. 4 does not have a name that I know of, and I just read the statement in one of the new papers that not much has changed in l=2 and 3. So is it really l=4 that moved significantly?

  • Aaron Bergman

    Were there any comments on the reason for the delay?

  • Adam

    ‘It was really hard to get the polarisation data out’ is the gist of it.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Robert, sorry, I had just shifted by one. Fixed now.

  • http://blogs.discovermagazine.com/cosmicvariance/risa/ Risa

    As Adam said, the basic comment on the reason for the delay is that understanding the polarization results is just hard. As Hiranya pointed out today, they were looking for signal at 50 times the sensitivity that the instrument was designed for. I, for one, am glad they took their time instead of risking spontaneous emission of hundreds of papers exploring the unusual models predicted by an incorrect analysis of the data.

  • Kea

    I feel immense relief. A few more things can be relegated to the dust bin, saving me from many nightmares in the future.

  • Dumb Biologist

    One thing I’ve been kind of interested in is the anomalous octupole and quadrupole results, which various people appear to claim are due (pessimistically) to some foreground contamination of the data, or maybe even (optimistically) to the universe being finite with some interesting topology. I’ve read (to the extent that I can comprehend the papers) criticisms of the analyses claiming some alignment with the ecliptic, implying, I think, that such a posteriori analysis finds trouble precisely where they’re looking for it, and is hence potentially spurious.

    So…I guess the octupole data looks better, and the quadrupole is still anomalous. However, other features of the data argue against a finite universe (if I read things at all correctly).

    Any thoughts on what’s going on?

  • http://www.crookedtimber.org Kieran

    Were there any comments on the reason for the delay?

    They had to clear it with some 22 year old at the White House.

  • Dumb Biologist

    I should have read Dr. Carroll’s post more carefully. I think the statement about “non-Gaussian” fluctuations is meant to address finiteness, among perhaps other things…

  • Haelfix

    A few Ekpyrotic models and some of the original textbook inflation models are ruled out with greater confidence.

    Eternal inflation seems to still fit the bill perfectly in naturalness.

    The lack of B modes at the resolution is troubling for a few models as well.

    Either way I love this experiment; it’s beautiful and deep, and I’m glad to be alive when it happened.

  • http://www.pieterkok.com/index.html PK

    Why is there a relatively large uncertainty around l = 200?

  • http://countiblis.blogspot.com Count Iblis

    Has anyone here already worked out the limits on the DM-baryon cross section implied by the new WMAP data?

  • Scott O

    Very exciting results. But I can’t help but feel a little bit of unease when experimenters reanalyze their data and suddenly find better agreement with their standard model’s predictions. Does anyone else besides me worry that somehow the analyzers are subconsciously biasing the way they do the analysis in order to “improve” the results? You know—tweak a cut here, throw out a dubious data point there? It’s so easy to do. Maybe future experiments should be doing a blind analysis of some sort.

  • Hiranya

    Sean, thanks a lot for this great post.

    #2: Yes and no. The new tau result is actually very close to the best fit model from the likelihood analysis of the first year data (0.1), but for somewhat different reasons. It is indeed smaller than the correlation function analysis from year 1. We have made great improvements in the way we analyze the polarization data, where, as Risa notes, we are digging deep to extract a tiny signal that the satellite was not actually designed to detect.

    #13: I am not aware of why Ekpyrotic models would be ruled out, per se. And lack of B modes at the level we are able to detect at the moment is not troubling for inflation models. Our sensitivity to B modes is very limited.

    #16: I am not sure why you say we are “suddenly” finding better agreement with the standard model. If you look at our first year papers, we had very good agreement with the standard LCDM model, which has been confirmed by the new analysis. I am not sure why you say we are biasing the analysis to improve the results – if you look at the paper, we analyse more than a dozen models of varying degrees of baroqueness, and none of them improves the fit enough to justify adding extra degrees of freedom.

  • Scott O

    Hiranya, if you read again what I wrote, I did NOT say that WMAP is biasing their analyses. I do not know if there is any bias or not. However, although the LambdaCDM model was clearly a good fit in 2003, the fit now seems to be improved in some respects. If I understand things, the l=3 mode has moved closer to agreement. According to Sean’s post, several other points have moved closer to agreement with the LCDM model. Do you have a chi^2 per degree of freedom to report for the overall LCDM fit, which would be one way to quantify how good the agreement is?

    There are numerous reasons why the fit may have improved. Perhaps increased data (better signal/noise) has decreased the experimental uncertainties. But any time an experiment is trying to do precision tests of a model, and the analysts know what result they expect to get, there is a potential for bias to be introduced into an analysis. ALL precision experiments face this problem (and the WMAP team is to be congratulated for turning cosmology into a precision experiment!)

    Let me also be clear that when bias does occur in an analysis, it is almost never deliberate. There are very many unconscious ways in which bias can happen—whenever the analyst has to make choices about how to do the analysis (e.g. what data sets to include, what foreground model to subtract, etc.), then potentially one might wind up being influenced by the impact on the final results.

    The high energy physics field has embraced blind analyses in a major way over the last several years due to these issues. Examples of recent analyses which were done blindly or are being done blindly include the BaBar CP violation results, the SNO solar neutrino measurements, and the upcoming MINOS and MiniBooNE results. In each case, the analysis is done as much to protect the analysts from their conscious or unconscious biases as anything else.

    Are there particular steps that WMAP has taken to guard against the introduction of biases? Are there things that could be done in future experiments to prevent analyst bias from impacting the result? I think it would be naive to assume that bias just can’t happen in this kind of work. This kind of precision testing of a favoured model is EXACTLY where blind analyses are often needed.

    Lest I sound too critical, let me finally congratulate WMAP on what is an impressive and high quality set of work! It’s really beautiful, even if I do feel like asking some difficult questions about blind analyses.

  • D R Lunsford

    Surprise! The Neocons have done it again!


  • Hiranya

    Scott #18: It is true we do not carry out particle physics type blind analyses, though if you have suggestions of how to apply such techniques to CMB analyses I would be very interested to hear them, since we should adapt any techniques we can to make future analyses better!

    That said, we take many steps to make sure our results are robust. In terms of models, we do not merely test the predictions of one model, but analyse the data in terms of many models, with many data combinations. There is nothing to inherently bias us towards LCDM in such an approach. For the predictions of the LCDM model itself, we compare the predictions of our best fit LCDM model against numerous astrophysical data sets at many scales and redshifts and check for consistency.

    l=3 moving closer to LCDM has no connection to the LCDM model fit itself – we have reanalysed the low l TT data with an optimal estimator to derive the Cls rather than our previous suboptimal method. You won’t find anyone in the field criticizing this improvement in analysis technique as a bias. You can find the chi^2/l for the best fit LCDM model in Fig 17 of the Hinshaw et al. paper. l=3 has very little weight in the LCDM fit because of the large cosmic variance there, so it is difficult to claim the change has “improved” the LCDM fit.

    We point out in exhaustive detail the limitations of our analysis (especially in the case of polarization analysis) in terms of foregrounds and other systematics uncertainties. Furthermore, all of our data and statistical analyses are (or will be soon) publicly available online so that anyone can download it and test, improve, and extend our analyses. A large number of researchers did this with our first year data, and hopefully even more will use the new data!

  • Tom Renbarger

    Tau, n_s, and sigma-8 were the hot discussion topics in our impromptu lunchtime journal club. Regarding tau, I found it interesting that even as the error bars tightened on it, it actually became a bit less inconsistent with zero than in the 1-year release. It was also offered that the new result significantly diminishes the required output from Pop III stars to explain reionization.

  • http://eskesthai.blogspot.com/2006/03/if-its-not-soccer-ball-what-is-it.html Plato

    A Franco-American team of cosmologists [1] led by J.-P. Luminet, of the Laboratoire Univers et Théories (LUTH) at the Paris Observatory, has proposed an explanation for a surprising detail observed in the Cosmic Microwave Background (CMB) recently mapped by the NASA satellite WMAP. According to the team, who published their study in the 9 October 2003 issue of Nature, an intriguing discrepancy in the temperature fluctuations in the afterglow of the big bang can be explained by a very specific global shape of space (a “topology”). The universe could be wrapped around, a little bit like a “soccer ball”, the volume of which would represent only 80% of the observable universe!


  • Hiranya

    #14: That grey band of uncertainty is “cosmic variance”: the fact that we only have one sky to measure. The black error bars on the points are our actual instrumental noise errors. The red points are the model curve binned the same way as the data.
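
As a quick aside on the grey band Hiranya describes: for a full-sky measurement, the fractional cosmic variance on each C_l is sqrt(2/(2l+1)). That is the standard full-sky expression, not a number taken from the WMAP papers, and the actual figure also bins in l and plots l(l+1)C_l, which this little sketch ignores.

```python
import math

def cosmic_variance_frac(l):
    """Fractional cosmic-variance uncertainty on C_l for a full-sky measurement.
    Each multipole l has only 2l+1 independent m-modes on one sky, so
    Delta C_l / C_l = sqrt(2 / (2l + 1)) no matter how good the instrument is."""
    return math.sqrt(2.0 / (2 * l + 1))

# The band is huge at low l (one reason the low quadrupole is hard to interpret)
# and shrinks steadily toward higher l:
for l in (2, 10, 100, 200):
    print(f"l = {l:3d}: Delta C_l / C_l ~ {cosmic_variance_frac(l):.3f}")
```

At l=2 the irreducible uncertainty is over 60% of the signal, which is why no amount of instrument sensitivity settles whether the low quadrupole is an anomaly.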

  • http://eskesthai.blogspot.com/2006/03/if-its-not-soccer-ball-what-is-it.html Plato

    Layman wondering

    Sound Waves in the CMB

    With thought of the vacuum, it’s hard to consider sound, but if you look at the picture in a different way, it just seems to make sense? If some condition (harmonic oscillation evident in place of nothing) was realized in the state of the vacuum, could such analogies be further reaching than first assumed in that recognition of WMAP? :)

    With the discovery of sound waves in the CMB, we have entered a new era of precision cosmology in which we can begin to talk with certainty about the origin of structure and the content of matter and energy in the universe.-Wayne Hu

  • Haelfix

    Well, let’s see if I get this right; any expert feel free to chime in. Cyclic models are tunable so they still fit (the original version I have from lecture notes is ruled out already by WMAP1, but I see you can tune it to fit current lambda CDM bounds; I suspect you might argue the tuning becomes a little more unnatural).

    To distinguish Ekpyrotic models from inflation you really need to look at the gravitational wave background. Cyclic models tend to have departures from a scale-invariant gravitational spectrum. In principle some gravitational experiments in the next ten years should be able to detect inflationary modes (which could rule out the cyclic universe).

    Also *any* departure from gaussian density perturbations would be fatal I think (even very small departures), at least for the vanilla models without adding extra contrived stuff.

  • Pingback: Zooglea » universo()

  • arnold


    my understanding of ekpyrotic models is that at present they are not viable alternatives to inflation….not for experimental reasons but even before that: they contain a singularity in the equations, which is still not resolved, I believe. And this precludes the possibility of computing the spectral index.

    So, the message is “don’t bother with that, unless somebody proves that the equations can work”. For the cyclic case I don’t know.

    (Of course if some unknown physicists had proposed the very same model, nobody would have paid attention to it….)

  • arnold


    if I look at the abstract of the WMAP paper on implications for cosmology I find

    It means that it is a 3 sigma detection of n

  • arnold

    (my last message seems to be incomplete, sorry if I put it again)


    if I look at the abstract of the WMAP paper on implications for cosmology I find

    It means that it is a 3 sigma detection of n_S.

    Is that a good interpretation (or should I worry about systematics)?
    Is there any prior that can make the detection weaker if relaxed?


  • Hiranya

    Haelfix #25: As far as I understand, the predictions for the scalar spectral index for the Ekpyrotic model are controversial. If the prediction is not robust, it’s not meaningful to say it’s ruled in or out. If primordial tensors are detected it will indeed rule out these models. And our current constraints on primordial non-Gaussianity are too weak to say anything about inflation *or* the Ekpyrotic model – they are both consistent.

    Arnold #28: On face value yes, but this error bar is somewhat sensitive to (small) systematic uncertainties, for example in marginalizing over the SZ effect, and the way the beam errors are propagated in the likelihood function. However, the HZ model is indeed disfavoured with respect to the data.

  • Hiranya

    #30: clarification – sorry, HZ model is the Harrison Zeldovich model, the exactly scale invariant spectrum.

  • http://electrogravity.blogspot.com/ Science

    This is hyped up to get media attention: the CBR from 300,000 years after the BB says nothing of the first few seconds, unless you believe their vague claims that the polarisation tells something about the way the early inflation occurred. That might be true, but it is very indirect.

    I do agree with Sean on CV that n = 0.95 may be an important result from this analysis. I’d say it’s the only useful result. But the interpretation of the universe as 4% baryons, 22% dark matter and 74% dark energy is a nice fit to the existing LambdaCDM epicycle theory from 1998. The new results on this are not too different from previous empirical data, but this ‘nice consistency’ is a euphemism for ‘useless’.

    WMAP has produced more accurate spectral data of the fluctuations, but that doesn’t prove the ad hoc cosmological interpretation which was force-fitted to the data in 1998. Of course the new data fits the same ad hoc model. Unless there was a significant error in the earlier data, it would do. Ptolemy’s universe, once fiddled with, continued to model things, with only occasional ‘tweaks’, for centuries. This doesn’t mean you should rejoice.

    Dark matter, dark energy, and the tiny cosmological constant describing the dark energy, remain massive epicycles in current cosmology. The Standard Model has not been extended to include dark matter and energy. It is not hard science; it’s a very indirect interpretation of the data. I’ve got a correct prediction, made without a cosmological constant, published in ’96, years before the ad hoc Lambda CDM model. Lunsford’s unification of EM and GR also dismisses the CC.

  • http://arunsmusings.blogspot.com Arun

    Is there a new and improved satellite in the works?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Arun, yes; the Planck satellite.


  • Scott O

    Hiranya, I’m afraid I don’t have too many detailed ideas about how to do a blind analysis for CMB, since I’m quite hazy on the details of how the analyses are done. The experimenters themselves are the best people to figure this out. I’ve been asking CMB types for years, including WMAP team members, about this, but no one has taken up the challenge, which I find a little disappointing. But I will offer some ideas below.

    I’m afraid that fitting multiple models doesn’t do much to reduce the potential for bias, if the people doing the fit know which model is the favoured one. If you know that LCDM is the favoured model, then you might potentially be biasing analysis choices to improve its fit, even if you are also fitting other models you are less interested in.

    The best I can offer for doing a blind CMB analysis is this:

    1. Ensure that the business of generating the temperature and polarization maps is completely separate from the business of doing cosmological fits. In principle the maps should be completely finished and frozen before anyone even thinks about fitting the data to a cosmological model. Once even a single fit is done, then you cannot go back and change anything in the map generation. This kind of strict segregation ensures that experimenters can’t go back and do something like twiddle with foreground subtraction in order to make an unruly data point come into closer agreement.
    2. A common technique we use in HEP is to include hidden “offsets” in our fits. For example, you could get a colleague who is not involved in the analysis to code up a secret offset that gets added to n in the cosmological fits. Then when you run your fit, the analyst isn’t looking directly at n, but rather the code is outputting n+x, where x is some unknown offset that the analyst knows nothing about. Once the fit is finalized and you’ve written the entire paper except for the conclusion section, you reveal the secret value of x and subtract it from the fitted value of ‘n’ to get the true value. I really recommend doing this—it’s trivial to do, and would completely eliminate any worries that the analysis was subconsciously being tweaked to favour some particular value of n, such as 0.95 or 1.0. In fact, I can’t see any reason why you WOULDN’T include a secret offset in the fit, since it’s so easy to do.

    Some parameters are easier to hide than others by using a secret offset, of course. An experienced CMB hand can probably read off Omega=1 from the first acoustic peak just by looking at the power spectrum without even doing a fit. But there are probably enough other parameters that could be hidden from the analysts in this way to make it worth doing.

    An instructive example of why HEP has gone to using blind analyses can be seen at:

    This page shows the historical trends for measurements of many particle properties. Look, for example, at the middle plot in the second row. See how the measured values jump discontinuously and by amounts far larger than the quoted uncertainties. What you’re seeing there is probably a case where successive experiments each were biased towards getting the same result as the previous experiment, until someone comes along and does an experiment with such a different value that a seismic shift happens, and everyone starts biasing to a different value! 😉
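
The hidden-offset blinding Scott describes (item 2 in his list) is simple enough to sketch. Everything here is hypothetical: the class name, the offset range, and the seed handling are illustrative choices, not any real pipeline.

```python
import random

class BlindFitter:
    """Report a fitted parameter only as value + secret offset until unblinding."""

    def __init__(self, seed):
        # The seed (and hence the offset) is known only to an uninvolved colleague.
        self._offset = random.Random(seed).uniform(-0.1, 0.1)

    def blinded(self, fitted_n):
        # The analyst only ever sees n + x, never n itself, while tuning cuts.
        return fitted_n + self._offset

    def unblind(self, blinded_n):
        # Called exactly once, after all analysis choices are frozen.
        return blinded_n - self._offset

# The analyst works entirely with the blinded value...
fitter = BlindFitter(seed=42)
reported = fitter.blinded(0.951)
# ...and the true value is recovered only when the paper is essentially written:
true_n = fitter.unblind(reported)
print(round(true_n, 3))
```

The design point is that the blinding is a pure round trip: it cannot corrupt the measurement, but it removes any way for the analyst to steer toward a preferred value of n while the analysis is still malleable.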

  • Elliot


    Googling the Planck Satellite shows a projected launch date of 2007. Is this still about right? Also how long after launch will data collection and analysis occur? Ballpark is fine.



  • Dumb Biologist

    So is it safe to say the l=2 anomaly is curious but relatively unimportant, i.e., it’s a deviation from a model so weakened at that angular scale by uncertainties due to cosmic variance it’s just not very interesting?

  • Elliot

    Biologist, I was wondering the same. You could conceivably draw the curve quite differently at that end of the scale to fit the data.

  • Dumb Biologist

    Well, I’ve absolutely no personal axe to grind on the matter, whatever the answer, and the only reason I pester about it myself relates to what appears to be among the least likely implications of those so-called anomalies, namely that the universe might have some detectable “non-trivial topology”.

  • Pingback: It’s Equal but It’s Different » Blog Archive » Science friday!

  • Hiranya

    Scott #35: Thanks for the detailed comments! These ideas are very interesting. Some are actually very easy to implement, like the suggestion of offsets. Others, like keeping the parameter analysis and the data analysis completely separate, are already partially true, but in practice there is some overlap. I can tell you that parameter analysis people would *love* for the maps and cls and errors to be frozen before doing parameter runs! However people continue to make improvements right up to the wire. We don’t have the gigantic collaborations and (wo)man power of experimental particle physics. We are a small group of people. However as I said, everything is public, from the timestream to the final likelihood analysis, and others can (and will) try different analysis approaches on our data.

  • http://eskesthai.blogspot.com/2006/03/if-its-not-soccer-ball-what-is-it.html Plato

    You need a three-dimensional drumhead to relate sound to the WMAP.

    In the case of pushing perspective, I like referring to the Chladni plate to help one see further beyond the measures indicated. It changes perspective about how we might see the universe using WMAP.

    Will it help? I don’t really know.:)

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    It’s not safe to say that the l=2 anomaly is unimportant. We just don’t know — maybe it’s just an accident, maybe it’s an indication of something super-significant. Until we have some notion of what that thing might be, and a way to independently verify it, “we don’t know” is the best we can do for the moment.

  • Dumb Biologist

    Whew! I was starting to think I’d asked an offensive question or something! Thanks for the response, Dr. Carroll.

  • Savya

    Thanks very much, Sean!

    Hiranya – sorry to harp on #14 – as far as I know, cosmic variance ~ 1/sqrt(l(l+1)) – if this is true, then the width of the grey area should be less at l=200 than it is at l=100, but it actually seems to be thicker! Is this an artifact of the way the plots are made, or am I missing something? Again, sorry: I am a newbie!


  • http://arunsmusings.blogspot.com Arun

    What kind of limits (if any) on baryon- dark matter interaction are required for big bang nucleosynthesis to come out right?

  • http://countiblis.blogspot.com Count Iblis

    Plato, see here:

    Constraining Strong Baryon-Dark Matter Interactions with Primordial Nucleosynthesis and Cosmic Rays

    Self-interacting dark matter (SIDM) was introduced by Spergel & Steinhardt to address possible discrepancies between collisionless dark matter simulations and observations on scales of less than 1 Mpc. We examine the case in which dark matter particles not only have strong self-interactions but also have strong interactions with baryons. The presence of such interactions will have direct implications for nuclear and particle astrophysics. Among these are a change in the predicted abundances from big bang nucleosynthesis (BBN) and the flux of gamma-rays produced by the decay of neutral pions which originate in collisions between dark matter and Galactic cosmic rays (CR). From these effects we constrain the strength of the baryon–dark matter interactions through the ratio of the baryon–dark matter interaction cross section to dark matter mass, s. We find that BBN places a weak upper limit to this ratio…

  • http://countiblis.blogspot.com Count Iblis

    Sorry I meant Arun, not Plato :)

  • BK

    Can I say something about general relativity, spacetime and geometry here? (I left this same comment under “general relativity as a tool” as I attempt to make my way around so please bear with me, sorta new.)
    Has anyone noticed that general relativity does not jibe with vortex dynamics, even though the sun and planets follow the laws of vortex dynamics? We all know that the planets are orbiting the sun in a counterclockwise fashion with the sun as a focus. Consider any two of the planets as Mass A and Mass B (just two to simplify this, but any number of masses will do). Vortex dynamics says that the two masses orbit around a focus because they are caught in each other’s flow fields, and the focus is the RESULT of them being in each other’s flow fields. If you were to take away the two rotating masses then the focus between them would also disappear; in fact, the focus would not exist in the first place without the two orbiting masses that create it.
    General relativity says the sun bends spacetime and gravity is the result, but according to the actual engineering law covering the motion of the sun and the planets, the PLANETS (masses A and B) create gravity because they are caught in each other’s flow fields, and the focus between them, the sun, is the RESULT of this mutual attraction between the planets. And since vortex dynamics is LAW and general relativity is THEORY, this should be taken as a serious flaw in how we view the solar system.
    James Vanyo’s book ROTATING FLUIDS IN ENGINEERING AND SCIENCE has a great chapter on vortex dynamics.
    BK (reached at a cool little lady’s email joanbayles@hotmail.com)
    I could go on with how vortex dynamics correlates with superposition and entanglement but I’ll wait to see if anyone’s interested, lest I overstay my welcome.

  • David Spergel

    In response to Scott’s comments about the power spectrum and foreground removal (#35).

    We do freeze the power spectrum before we do the model fits. We basically spent two years modifying the pipeline so that we could treat the noise properly and pass various null tests and self-consistency tests. We didn’t run any serious cosmological models until about 3 months ago.

    The foreground model for the temperature was indeed fixed before the power spectrum was computed. The new foreground model has only two free parameters; the big change was switching from using the Haslam 408 MHz map to model the foreground to using an internal combination of WMAP data (22 GHz–30 GHz). When we were testing the foreground models, we used the difference between the 40 and 60 GHz power spectra as a test of the foreground model. Since this difference contains no CMB signal, this fitting scheme is unbiased.
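    The logic of that null test is that the CMB has the same thermodynamic temperature at every frequency, so differencing band power spectra cancels the CMB and leaves only the frequency-dependent foregrounds (plus noise). A minimal sketch of the idea, with purely illustrative toy spectra in place of the real WMAP band powers:

    ```python
    import numpy as np

    ells = np.arange(2, 501)

    # Hypothetical inputs: a frequency-independent CMB spectrum plus
    # frequency-dependent foregrounds (Galactic synchrotron and free-free
    # emission fall with frequency, so the 40 GHz band carries more
    # foreground power than the 60 GHz band). All values are toy numbers.
    cmb   = 1000.0 / (ells * (ells + 1))
    fg_40 =   50.0 / (ells * (ells + 1))
    fg_60 =   20.0 / (ells * (ells + 1))

    spec_40 = cmb + fg_40
    spec_60 = cmb + fg_60

    # The difference contains no CMB signal, only the foreground mismatch,
    # so comparing it to a foreground model cannot bias the cosmology fit.
    null = spec_40 - spec_60
    assert np.allclose(null, fg_40 - fg_60)
    ```

    In the real analysis the differenced spectra also contain instrument noise, but the key point survives: any residual in the null spectrum flags the foreground model, not the cosmological signal.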

    The shift in l=3 and l=5 is due to switching from the MASTER algorithm to Maximum Likelihood. George Efstathiou wrote a nice paper discussing this issue, and we were convinced to use the ML analysis on the low l’s.

    The improvement in chisq’ed was due to several effects:

    – better beams: these were fit to Jupiter and had no free parameters to “tweak”

    – using smaller pixels in the map making

    – an improved foreground model

    I should note that we also have done blind tests on model fitting.

    When we started this project, I never expected the data to fit the model
    (I still dislike the cosmological constant), but we have to present what we find.

  • Hiranya

    #45: Cosmic variance goes as sqrt(2/(2l+1)) * C_l^{theory}. This means that the CV error at l=200 is bigger than at l=100 because the C_200/C_100 ratio wins over the factor of ~2 increase in the number of measurable modes.
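    Hiranya’s scaling is easy to check numerically. In the sketch below the C_l values are purely illustrative placeholders, not WMAP numbers; the only assumption is that C_200 exceeds C_100 by more than the ~sqrt(2) gain in measurable modes, which is what makes the absolute error larger at l=200:

    ```python
    import math

    def cosmic_variance_error(ell, c_ell):
        # Absolute cosmic-variance error on a measured multipole:
        # Delta C_l = sqrt(2 / (2l + 1)) * C_l^{theory}
        return math.sqrt(2.0 / (2 * ell + 1)) * c_ell

    # Illustrative band powers (arbitrary units): near the first acoustic
    # peak C_200 is taken to be a few times C_100. The larger C_l then
    # outweighs the reduction in fractional error from having more modes.
    c_100, c_200 = 1.0, 3.0

    print(cosmic_variance_error(100, c_100) < cosmic_variance_error(200, c_200))  # True
    ```

    The fractional error sqrt(2/(2l+1)) always shrinks with l; it is only the absolute error, proportional to C_l itself, that can grow.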

  • Pingback: Charm &c. » Blog Archive » WMAP Three Year Results

  • http://arunsmusings.blogspot.com Arun

    So does the Standard Model – electroweak, QCD – apply to only 4% of the stuff in the universe?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    That’s right — only the 4% of the universe that is “ordinary matter” is described by the Standard Model.

  • Savya

    Thanks, Hiranya.

  • Scott O

    Thanks for the comments, David (#50). I’m looking forward to seeing Mark Halpern’s astro seminar tomorrow at UBC, in order to learn more.

  • Pingback: Slacker Astronomy Show Notes » WMAP’s Anistrophy Trophy (Show #47)

  • Spaceman

    From what I can tell, based on critical reading skills and a trust of the WMAP team, the superbly accurate 3-year results are the product of an exhaustive and painstakingly detailed search for systematic errors and foreground contamination. A number of new techniques were employed to see if the data are of high enough quality to be used for a cosmological analysis. So the combination of longer integration time and a more thorough analysis assures us that the new results give us a solid picture of cosmic evolution.

    I certainly don’t think cosmology is solved, as there are still mysteries and cosmophenomena that need to be explained, but at least we now have a rough outline of cosmic evolution. I have a feeling that the standard model of cosmology is basically correct, even though it may take decades before we fill in all of the details. Think about it like this: we knew the size and shape of the earth before we knew what it was made out of and had it all mapped; similarly, we now almost surely know the size, expansion rate, and shape (i.e., flatness) of the universe even though we do not yet know what the dark energy and the dark matter are.

    Humanity has little to be proud of these days on Earth, as neo-liberalism allows billionaire tourists to fly into space while billions remain without the basics. Nevertheless, we should be proud of the fact that we’ve come as far as we have in recent years in terms of being able to read the “universe story” in the sky.

  • Spaceman

    Like Dumb biologist, I am also interested in the age-old question: is the universe finite or infinite? In my opinion, this is one of the most important questions ever asked. I know this question can only be answered definitively if the universe is smaller than the horizon. Unfortunately, I have a feeling that those in favor of the small universe idea will never accept any data which concludes that non-trivial topology, if it exists, must be on a super-horizon scale.

    I have several questions related to the finite or infinite issue which I am hoping a cosmologist could help answer.

    1). The low CMB quadrupole is in sharp contradiction with the infinite universe prediction for the quadrupole. Wouldn’t any infinite universe model which tries to accommodate this observation be considered an unnatural stretch?

    2). Luminet et al (2004) and Aurich et al (2005) and others have written highly critical papers regarding the topology conclusion reached by Spergel et al (2004). A lot of this criticism is two-pronged: they basically say that (i) the 1st-year sky-maps have too much noise in them for Spergel et al to have reached the conclusion they did, and (ii) the methodology itself is in some way flawed. Who is correct? Do the WMAP 3-year sky-maps have a high enough signal-to-noise ratio for one to look for a topological signature in them, or will it take another satellite (i.e. the Planck Surveyor) to resolve this issue?

    3). Do Spergel et al have plans to write a paper to counter the recent criticisms that have been leveled against their “circles in the sky” analysis?

  • Pingback: Everything I know about the universe I did not learn from newspaper headlines | Cosmic Variance

  • Pingback: The Future of Theoretical Cosmology | Cosmic Variance

  • Pingback: From Quantum to Cosmos-II | Cosmic Variance



About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Email: carroll [at] cosmicvariance.com.

