Dark Energy Has Long Been Dark-Energy-Like

By Sean Carroll | November 16, 2006 2:20 am

Thursday (“today,” for most of you) at 1:00 p.m. Eastern, there will be a NASA Media Teleconference to discuss some new observations relevant to the behavior of dark energy at high redshifts (z > 1). Participants will be actual astronomers Adam Riess and Lou Strolger, as well as theorist poseurs Mario Livio and myself. If the press release is to be believed, the whole thing will be available in live audio stream, and some pictures and descriptions will be made public once the telecon starts.

I’m not supposed to give away what’s going on, and might not have a chance to do an immediate post, but at some point I’ll update this post to explain it. If you read the press release, it says the point is “to announce the discovery that dark energy has been an ever-present constituent of space for most of the universe’s history.” Which means that the dark energy was acting dark-energy-like (a negative equation of state, or very slow evolution of the energy density) even back when the universe was matter-dominated.

Update: The short version is that Adam Riess and collaborators have used Hubble Space Telescope observations to discover 21 new supernovae, 13 of which are spectroscopically confirmed as Type Ia (the standardizable-candle kind) with redshifts z > 1. Using these, they place new constraints on the evolution of the dark energy density, in particular on the behavior of dark energy during the epoch when the universe was matter-dominated. The result is that the dark energy component seems to have been negative-pressure even back then; more specifically, w(z > 1) = -0.8 (+0.6, -1.0), and w(z > 1) < 0 at 98% confidence.

[Image: supernovae]

Longer version: Dark energy, which is apparently about 70% of the energy of the universe (with about 25% dark matter and 5% ordinary matter), is characterized by two features — it’s distributed smoothly throughout space, and maintains nearly-constant density as the universe expands. This latter quality, persistence of the energy density, is sometimes translated as “negative pressure,” since the law of energy conservation relates the rate of change of the energy density to (ρ + p), where ρ is the energy density and p is the pressure. Thus, if p = -ρ, the density is strictly constant; that’s vacuum energy, or the cosmological constant. But it could evolve just a little bit, and we wouldn’t have noticed yet. So we invent an “equation-of-state parameter” w = p/ρ. Then w = -1 implies that the dark energy density is constant; w > -1 implies that the density is decreasing, while w < -1 means that it’s increasing.
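To make that scaling concrete, here is a minimal sketch (mine, not from the paper; it assumes only a constant w, and the function name and numbers are illustrative). For constant w, the conservation equation dρ/dt = -3H(ρ + p) with p = wρ integrates to ρ(a) ∝ a^(-3(1+w)):

```python
# Minimal sketch (not from the paper): how an energy density with constant
# equation of state w scales with the cosmic scale factor a (a = 1 today).
# Energy conservation, d(rho)/dt = -3 H (rho + p) with p = w * rho,
# integrates to rho(a) = rho_0 * a**(-3 * (1 + w)).

def density(a, w, rho0=1.0):
    """Energy density at scale factor a for a constant equation of state w."""
    return rho0 * a ** (-3.0 * (1.0 + w))

# Looking back to half the present size (a = 0.5, i.e. z = 1):
for w in (-1.0, -0.8, 0.0):  # vacuum energy, roughly the measured DE, matter
    print(f"w = {w:+.1f}: rho(a=0.5)/rho(today) = {density(0.5, w):.2f}")
# w = -1 stays exactly constant; w = -0.8 grows only ~1.5x looking back;
# matter (w = 0) was 8x denser -- hence "nearly constant" dark energy.
```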

In the recent universe, supernova observations convince us that w = -1 ± 0.1, so the density is close to constant. But there are puzzles in the dark-energy game: why is the vacuum energy so small, and why are the densities of matter and dark energy comparable, even though matter evolves noticeably while dark energy is close to constant? So it’s certainly conceivable that the behavior of the dark energy was different in the past — in particular, that the density of what we now know as dark energy used to behave similarly to that of matter, fading away as the universe expanded, and only recently switched over to an appreciably negative value of w.
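For a back-of-the-envelope feel for that coincidence, here is a quick sketch (my arithmetic, using the rough 70/30 dark-energy/matter split quoted above and assuming w = -1 exactly):

```python
# Back-of-the-envelope sketch of the "coincidence" (my own arithmetic,
# assuming rough present-day fractions and exactly constant dark energy):
# if rho_DE is constant and rho_matter scales as (1+z)^3, when were they equal?
OMEGA_M, OMEGA_DE = 0.3, 0.7  # rough present-day density fractions

z_eq = (OMEGA_DE / OMEGA_M) ** (1.0 / 3.0) - 1.0
print(f"matter-DE equality at z ~ {z_eq:.2f}")       # ~0.33, i.e. quite recently

# And in the regime these new SNe probe (z > 1), matter already dominated:
z = 1.0
ratio = (OMEGA_M * (1.0 + z) ** 3) / OMEGA_DE
print(f"at z = 1, rho_matter/rho_DE ~ {ratio:.1f}")  # ~3.4
```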

These new observations speak against that possibility. They include measurements of supernovae at high redshifts, back when the density of matter was higher than that of dark energy. They then constrain the value of w as it was back then, at redshifts greater than one (when the universe was less than half its current size). And the answer is … the dark energy was still dark-energy-like! That is, it had a negative pressure, and its energy density wasn’t evolving very much. It was in the process of catching up to the matter density, not “tracking” it in some sneaky way.

Of course, to get such a result requires some assumptions. Riess et al. consider three different “priors” — assumed behaviors for the dark energy. The “weak” prior makes no assumptions at all about what the dark energy was doing at redshifts greater than 1.8, and draws correspondingly weak conclusions. The “strong” prior uses data from the microwave background, along with the assumption (which is really not that strong) that the dark energy wasn’t actually dominating at those very high redshifts. That’s the prior under which the above results were obtained. The “strongest” prior imagines that we can extrapolate the behavior of the equation-of-state parameter linearly back in time — that’s a very strong prior indeed, and probably not realistic.

So everything is consistent with a perfectly constant vacuum energy. No big surprise, right? But everything about dark energy is a surprise, and we need to constantly be questioning all of our assumptions. The coincidence scandal is a real puzzle, and the idea that dark energy used to behave differently and has changed its nature recently is a perfectly reasonable one. We don’t yet know what the dark energy is or why it has the density it does, but every new piece of information nudges us a bit further down the road to really understanding it.

Update: The Riess et al. paper is now available as astro-ph/0611572. The link to the data is broken, but I think it means to go here.

CATEGORIZED UNDER: Science
  • http://wishsubmission.blogspot.com Manas Shaikh

    Will be waiting for the update. :)

  • Piotr Florek

    Thanks Sean, that explains a lot ;)

    I’ll try to be online when the event starts (at 7 p.m. here) and listen to you guys.

  • Joseph Smidt

    Thanks for the heads up. I will try to tune in to this announcement. I am continually more and more glad I am going into cosmology as a grad student next year. The good reports just keep coming. :)

  • dark energy

    Aaaah, this is just a public relations stunt by NASA and by Sean.

  • http://backreaction.blogspot.com/ B

    even back when the universe was matter-dominated?

  • http://brahms.phy.vanderbilt.edu/~rknop Rob Knop

Public relations stunts in science are important… they’re what keep funding agencies paying attention, they make funding agencies happy, and they attract the attention of administrators far more than actual scientific papers do. All of which is important.

    Anyway, it’s possible there’s some cool new result here :)

I, alas, will be driving my wife to an appointment during the press release, but I’m fully capable of reading about it later (and reading any astro-ph papers that are out on it — are any yet?) Will the audio stream be archived, or will it be one of those annoying things that you can “only” get live?

    -Rob

  • Trip Russell

I am looking forward to watching the press conference today.

  • BG

    The astro-ph paper can’t appear until after the NASA announcement, but as far as I know it’s otherwise ready to go. And it seems NASA does generally archive and make available old audio streams.

  • http://eskesthai.blogspot.com/2004/12/curvature-parameters.html Plato

So having effectively ruled out the results of the Friedmann equations(?), we are back with Einstein’s cosmological constant, and “anti-gravity”?

    Maybe you can explain what “anti-gravity” means as well in our understanding of the universe?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    I admit, it’s just a PR stunt by NASA and, more importantly, by me. They didn’t want to make a big deal out of it — they were all “If ordinary people on the street know what we’re doing, that could ruin everything!” But, sensing a valuable opportunity to get my face on the radio, I insisted that they stage a press carnival, or I would release those grainy films I took of the fake Moon landing.

  • swiftfeet

    Question:

    Is Dark Matter in places where ordinary particles are not? And how could you know that? And say you don’t know the answer; then how can you estimate the amount of DM present in the universe?
    Some notation: DM (dark matter), DE (dark energy), RS (regular stuff).
    If DM and RS happen to be seen together, then they know of each other’s presence. So there is some degree of freedom which allows for some level of interaction between these two forms of matter. Is that true?
    Then if DM and RS tend not to do anything spectacular like expanding, or lowering their temperatures due to expansion, etc., then we can believe that they don’t even know about the expansion. Which means that they do not communicate at any level with DE. But if DE is a fundamental property of spacetime, then at the singularity of the big bang, DE and DM and RS (or their then forms) must have been aware of each other’s existence, since it was all more or less the same. So what sense did DM and RS lose? What symmetry broke irreversibly?
    Is it a spacetime function or a tensor component that is monotonically decreasing or increasing? Is DE locally unchanging at all, and thus unaware of the expansion it is causing from our point of observation? Are the DM and RS clusters just small and insignificant perturbations in the local DE universe? Or maybe it is these insignificant perturbations that drive the change in DE?
    But if these perturbations indeed drive a change in DE, then that is just another form of interaction between it and the DM and RS. So… these then are coupled… and you expect them to have been driving each other’s dynamics… and if that should be the case, then DE has been changing and you would expect to be able to detect that…
    So what’s the catch here? Was DE positive, 0, negative?

  • Chaz

    Joseph, where are you starting school and where are you coming from?

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    …or I would release those grainy films I took of the fake Moon landing.

    With your cellphone, right?

    -Rob

  • Science for the People

    If science were just for scientists, it would be a rather selfish game.
    Huzzah to Sean for his publicity stunts and blogging!

  • Alex

    I’m not sure how interesting these latest observations of supernovae are. As far as I can tell, no one would wholeheartedly believe in dark energy from supernova observations alone. There are loads of major issues in determining the expansion rate of the universe (and therefore seeing the dark energy influence) from supernovae. First (and most importantly), it is assumed that they all have the same intrinsic brightness, which in fact they don’t, but you can perform a fudge called the stretch factor to make this so. However, we don’t fully understand why their intrinsic brightnesses should be the same, so these observations are based on empirical relations. This means we have no idea whether these relations evolve with redshift, and given that this data is from supernovae at very high redshift, can we trust it?

    Another interesting issue, which becomes important when determining the equation of state w with precision from high redshift supernovae, is the need to correct the redshift of each supernova for its peculiar velocity. The galaxy the supernova is in will also be moving, due to gravitational effects from other galaxies, in addition to the expansion of the universe; the redshift measured is a sum of these two things. This effect is really only important for low redshift supernovae, whose expansion velocities are smaller (a rough sketch of the scaling appears just after this comment). However, the high redshift supernovae rely on accurate calibration from the ones at low redshift. One could imagine that a bunch of low redshift supernovae over a large area of sky could be moving in the same direction, towards a larger, more distant cluster, in a ‘bulk flow’. This would create a correlated error in the supernova observations which could bias w either too high or too low, and you could imagine it also has an effect on measuring w(z). As far as I am aware this effect is not accounted for.

    I’m also worried about the fact that only 13 of the 21 have been spectroscopically confirmed as Type Ia (the ones to use). Does this also mean they don’t have spectroscopic (i.e., accurate) redshifts for these either?

    Overall I think we have to wait for results from other probes such as weak lensing or the baryon acoustic oscillations before we really have an idea whether or not dark energy is evolving with time. As for the nature of dark energy itself I hope it’s modified gravity, not just for the sake of Sean’s research, but I think it’s psychologically nicer than weird fields and vacuum energy.

    Long post, probably made no sense; never mind.
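    A rough illustration of the peculiar-velocity scaling Alex raises above (the typical velocity is an assumed number of mine, not anything from the paper):

    ```python
    # Rough scaling sketch: in the low-z Hubble-law limit, a peculiar velocity
    # v_pec induces a fractional distance error of roughly v_pec / (c * z),
    # so the effect fades quickly with redshift.
    C_KM_S = 299792.458   # speed of light, km/s
    V_PEC = 300.0         # assumed typical galaxy peculiar velocity, km/s

    for z in (0.02, 0.05, 0.1, 0.5, 1.0):
        frac_err = V_PEC / (C_KM_S * z)
        print(f"z = {z:4.2f}: ~{100.0 * frac_err:.1f}% distance error")
    # ~5% at z = 0.02 but ~0.1% at z = 1 -- which is why the worry is
    # correlated bulk flows in the low-z calibrators, not the high-z SNe.
    ```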

  • BG

    Alex,

    I agree with you that SNe alone don’t tell the whole story. But two major points of the research are:

    1. Riess now has spectra of several high redshift SNe and has shown that they look the same as low redshift SNe; in other words there’s no obvious sign of evolution.

    2. With the high redshift SNe you can attempt to divide w(z) into “bins” and get constraints in each bin; with this new data you find that w < 0 for z > 1 to pretty good confidence.

    Peculiar velocities are included in the error budget for all SNe, but you’re right that bulk motions could mess up the calibration. Bulk motions would show up as an anisotropy in the inferred distance-redshift relation for those SNe, however, and to my knowledge this hasn’t been seen at any significant level.

    I don’t know about the spectroscopic confirmation stuff off the top of my head. I don’t think unconfirmed Ia were included in the “Gold” sample used for the cosmological analysis.

  • BG

    Oops, in point two above it’s supposed to be “w < 0 for z > 1 to pretty good confidence”. Forgot about HTML not liking < and >.

  • Chaz

    Alex,

    Evidence of cosmic acceleration doesn’t kick in until z>0.1, so a bias in the low redshift SNe used for calibration would have little effect on measuring w(z). Such a bias could affect the measurement of the expansion rate H_0, and people do think about this.

    I’m not an expert on type Ia SNe, but from attending various colloquia, my understanding is that there is no evidence for any major evolution. If there is some slight evolution going on, the next generation of supernova surveys WILL have to worry about it. The error bars Sean mentions are relatively huge, so there’s no need to worry right now.

  • Chaz

    I disagree with what I just said about low z SNe – they could indeed bias w(z).

  • dark energy

    Aaaaah….it’s all hopeless. What is “energy” if not simply a bookkeeping device we all invented? Some quantity associated with time translation invariance which is conserved.

    Ooooooohhh…we’ll never find the answer. It’s all hopeless.

    :*****(

    I knew it was a PR stunt. NASA’s just in it for the money and the fame and to meet girls (and boys too).

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Alex — rather than proper motion, gravitational lensing is more likely to be a systematic to worry about for really-high-z supernovae.

    For proper motion to be a significant systematic at those sorts of redshifts, the galaxies would have to be moving relativistically, or at the very least implausibly fast.

    There *are* some ideas as to why the intrinsic brightness should always be the same. The core reason is that the Chandrasekhar mass is the same for all supernovae, but there are a lot of supernova theorists who’ve gone farther with that.

    We do have reasons to believe that the stretch/magnitude relationship is the same at high-z and low-z. Last time I worked on this in detail (the Knop 2003 paper), things at z~0.7 and z~0 looked very consistent. It’s more plausible that the demographics of stretches will change than that the stretch/magnitude relationship would change. (A schematic example of the stretch correction follows this comment.)
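    A schematic sketch of the stretch correction Rob describes (the slope and magnitudes here are made up for illustration, not the Knop 2003 fit): broad, slowly-declining light curves (s > 1) are intrinsically brighter, and a linear correction brings everything onto one effective candle.

    ```python
    # Illustrative stretch correction (made-up coefficients, not a real fit):
    # stretch-corrected effective magnitude m_corr = m + alpha * (s - 1).
    ALPHA = 1.5  # assumed slope of the stretch-luminosity relation

    def corrected_mag(m_obs, stretch):
        """Bring SNe with different light-curve widths onto one candle."""
        return m_obs + ALPHA * (stretch - 1.0)

    # Two hypothetical SNe at the same true distance: the broad/bright one
    # and the narrow/faint one land on the same corrected magnitude.
    print(corrected_mag(24.30, 1.1))  # 24.45
    print(corrected_mag(24.60, 0.9))  # 24.45
    ```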

  • Moshe

    I have to report the emergence of a certain temptation to send in anonymous hostile comments, just to enjoy Sean’s response. The phrase “valuable opportunity to get my face on the radio” is pretty memorable…

    (No worries, I’ll resist the temptation)

  • http://eskesthai.blogspot.com/2006/11/what-is-dark-matterenergy.html Plato

    A Three Ring Circus

    Anti-gravity…hmmmm…..speeding up? Why?

    Then the universe is “fluctuating or oscillating,” between the “curvature parameters?”

    Speculating about what cosmologists are doing, I thought, hey, from the layman: might as well throw the above in as to what one might think the universe is doing from this analysis?

    I didn’t hear any drum roll or, “ta da,” before the top hat came off. :)

  • Arun M

    It was nice to read some updates on dark energy and what the cosmologists are finding out about it. Thank you, Sean. So is the idea that DE is just the cosmological constant now stronger? Also, I was wondering what happened to the old idea of Bruno Zumino that tried to explain a vanishing/almost vanishing cosmological constant from SUSY? (Since the ground state energy must be 0 when SUSY is a strict symmetry at some scale.)

  • Joseph Smidt

    Chaz,
    I am coming from BYU. I don’t know where I am going yet. I apply in a month. I am just crossing my fingers I get in somewhere with a good thesis advisor. My time spent at Los Alamos taught me there is nothing more important than a good advisor and research group. I am applying to schools with strong theoretical Cosmology programs.

  • Ali Soleimani

    Sean, you quoted

    w(z > 1) = -0.8 (+0.6, -1.0) and
    w(z > 1) < 0 at 98% confidence.

    Are those 2-sigma errors, then? Otherwise I don’t see how the two statements could be consistent.

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Ali–

    The error bars are probably seriously non-Gaussian. In fact, I’d be surprised if they weren’t, given past experience with this.

    As such, it’s entirely possible that the +0.6 on the w(z > 1) = -0.8 is a 1-sigma (68%) thing, and that that is consistent with w(z > 1) < 0 at 98% confidence.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Ali, I was just quoting from the paper (which I don’t believe is yet publicly available, although it’s been accepted by ApJ). I’m guessing that either those are 2-sigma errors, or they’re not Gaussian.

    Arun, the results provide some more support for a cosmological constant, but there’s still room to play. SUSY is no obvious help, since it must be broken in the real world, leading to a vacuum energy at least 60 orders of magnitude greater than what we observe (absent miraculous cancellations).

  • http://arunsmusings.blogspot.com Arun

    In my 1998 edition of James Binney and Michael Merrifield’s “Galactic Astronomy” there is this:

    “A wide range of techniques have been applied to measuring the distance to the Virgo Cluster. In their extensive review of the subject, Jacoby et al. (1992) showed that the three methods with the smallest uncertainties (surface-brightness fluctuations, planetary-nebula luminosity function, and the Tully-Fisher relation) all provided consistent distance estimates of ~ 16 ± 1 Mpc. The one seriously conflicting measurement comes from the analysis of type Ia supernovae….the Virgo Cluster would have to lie at a distance of 23 ± 2 Mpc…As we have seen in § 7.3.3, some doubt has now been cast on the role of type Ia supernovae as standard candles”.

    Probably months after this particular edition was printed, Perlmutter and others announced that the universe was expanding at an accelerating rate, using Type Ia supernovae as standard candles!

    Clearly, the textbook was way behind the research! There must be quite a story in how Type Ia supernovae were successfully calibrated to be standard candles, and I hope one of the experts here will some time go into the details.

    For now, I’d settle for an answer to the question – what was the cause of the discrepancy in distance to the Virgo Cluster and how was it resolved?

    Thanks in advance!

  • Pingback: » Links for 17-11-2006 » Velcro City Tourist Board » Blog Archive

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Arun —

    Of course, textbooks are written a year or so before they’re published, so it’s not as close as all that.

    However, it’s also entirely possible that SNe Ia could have had a “wrong” distance to Virgo while still being standard candles good enough to measure the accelerating Universe. I’d have to go back and think a lot to figure out what the real story with SN distances to Virgo was in 1998.

    Here’s the key, though: using SNe to measure the distance to Virgo requires knowing the *absolute* luminosity of a supernova. The discovery of the accelerating Universe did *not* require this; it only required that they be a standard candle. As long as they were always the same, we could measure Omega_M and Omega_L without actually knowing the true luminosity of an SN Ia! What we did (effectively) was compare the slope of the low-redshift and high-redshift supernovae.

    If you want to use supernovae to measure H_0, you do need to know the absolute luminosity of a supernova, but we were able to measure the acceleration even without really knowing the current expansion rate. Indeed, in the fits that at least the SCP did (which is where I was), we had a parameter “script-M” which contained the joint effect of the supernova absolute magnitude and H0. We didn’t try to separate them out. Mathematically, it turns out that a “brighter absolute supernova” would cancel a “lower H0” perfectly, and vice versa. (A toy numerical sketch follows this comment.)

    As such… we’re very sure that most SNe Ia make pretty good standard candles (good to 20% or so, good to 10% or so if you calibrate out a light curve decline rate), even if we don’t know the SN absolute luminosity that well. (Which nowadays we probably do, because even if we don’t have a good absolute measurement of it, we have a few good measurements of H0.)

    -Rob
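    A toy numerical sketch of Rob’s script-M degeneracy (an assumed low-order distance formula and made-up numbers of mine, not the SCP fitting code): only the combination of M and H0 enters the predicted magnitudes, so shifting both together changes nothing observable.

    ```python
    # Toy sketch of the script-M degeneracy: m = M + 5*log10(d_L / 10 pc),
    # with d_L = (c / H0) * D(z), so M and H0 only appear through the
    # combination script-M = M - 5*log10(H0) + const.
    import math

    C_KM_S = 299792.458

    def dimensionless_dl(z, q0=-0.55):
        """Toy low-order luminosity distance: D(z) = z * (1 + (1 - q0) * z / 2)."""
        return z * (1.0 + 0.5 * (1.0 - q0) * z)

    def apparent_mag(z, M, H0):
        """Predicted apparent magnitude, with d_L in Mpc (hence the +25)."""
        d_l_mpc = (C_KM_S / H0) * dimensionless_dl(z)
        return M + 5.0 * math.log10(d_l_mpc) + 25.0

    # A brighter M paired with a LOWER H0 leaves every predicted m unchanged:
    delta = 5.0 * math.log10(1.1)
    print(apparent_mag(0.5, M=-19.3, H0=70.0))
    print(apparent_mag(0.5, M=-19.3 - delta, H0=70.0 / 1.1))  # identical output
    ```

    This is why the acceleration fit never needed the absolute luminosity or the current expansion rate separately.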

  • BG

    The likelihood is indeed non-Gaussian for the high redshift bin, which can lead to some funny looking statements if you’re not used to such things. There are several figures of likelihood histograms in the paper, and for non-Gaussian stuff I think it’s really best to look at the distribution to get an idea of what’s going on.

    For the quoted “-0.8 (+0.6, -1.0)” numbers, those are “one-sigma” intervals defined by where the likelihood falls to about 0.6 of its peak value (which is where one sigma is for a Gaussian). You get fairly similar numbers if you define things by looking at the FWHM or 68% contours or various other things you might think of. That w < 0 at 98% comes from integrating the full likelihood to get the CDF. It’s so close to the “one-sigma” number because the likelihood falls off very sharply as w increases.

    There’s a table in the paper that also reports the 95% intervals, so there are a lot of statistics to contemplate. This mostly only matters for that high redshift bin, though; the others are much closer to Gaussian. (A toy sketch of both statistics follows this comment.)
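    A toy sketch of the two statistics BG describes, on an assumed skewed likelihood (NOT the paper’s actual likelihood for w at z > 1): the “one-sigma” band from where the likelihood drops to exp(-1/2) ≈ 0.61 of its peak, and the integrated probability that w < 0.

    ```python
    # Toy sketch: interval-from-peak vs. integrated tail probability for a
    # deliberately skewed (non-Gaussian) likelihood over w.
    import numpy as np

    w = np.linspace(-3.0, 1.0, 4001)
    # Broad tail toward negative w, sharp cutoff on the high-w side:
    sigma = np.where(w < -0.8, 1.0, 0.3)
    like = np.exp(-0.5 * ((w + 0.8) / sigma) ** 2)

    # "One-sigma" band: where the likelihood exceeds exp(-1/2) of its peak.
    band = w[like >= np.exp(-0.5) * like.max()]
    # Integrated P(w < 0); the grid is uniform, so sums stand in for integrals.
    p_neg = like[w < 0.0].sum() / like.sum()

    print(f"peak at w = {w[like.argmax()]:.2f}, "
          f"interval [{band.min():.2f}, {band.max():.2f}]")
    print(f"P(w < 0) = {p_neg:.3f}")
    ```

    The asymmetric band and the near-unity P(w < 0) sit together comfortably precisely because the likelihood dies so fast as w increases.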

  • George Ellis

    I know it’s churlish to bring it up, but still here goes: there is an alternative explanation of the data which involves no dark energy at all. It is simply that we are near the centre of a major inhomogeneity in the universe, and what the supernova data are measuring is the amount of spatial inhomogeneity of the universe. One can thereby fit the supernova data exactly with no cosmological constant or dark energy at all (that’s a theorem). Now this proposal is very unpopular for philosophical reasons – we would be near the centre of the universe, or at least of a large inhomogeneity in the universe in this case; but it is surprisingly difficult to disprove it by any astronomical observations. It remains a possible explanation of the data.

    The reason for mentioning this is that the existence of a cosmological constant of this small magnitude has been characterised by many as one of the greatest crises facing present day theoretical physics, and has led to extravagances such as anthropic explanations in the context of multiverse proposals that cannot be observationally tested in any ordinary sense of the concept of “observational test”. Hence one should at least look at alternatives that avoid this problem, even if that involves being a little bit more open minded about the geometry of the universe than is conventional.

  • http://astromalte.blogspot.com/ Malte

    Manual trackback from us at Populär astronomi (translated from the Swedish): “The dark energy that astronomers believe lies behind the universe’s acceleration seems to have stayed much the same since the universe was young. New observations of distant supernovae with the Hubble telescope…” […]

  • http://lablemminglounge.blogspot.com/ Lab Lemming

    I have a fairly ignorant question from a non-astronomer:

    I read (probably on Wikipedia) that Type Ia supernovae are caused by exploding white dwarfs that approach the Chandrasekhar limit by gaining mass.

    I recall from planetary geology that white dwarfs form when sun-sized stars burn out, a process that takes about 9 Ga.

    I have read in numerous places that the universe is only about 13.5 Ga in age.

    If 9-10 Ga galaxies contain Type Ia supernovae, then one of the above “facts” must be wrong, since they predict that the oldest white dwarfs should not be older than about 4.5 Ga. So where have I screwed up?

  • http://arunsmusings.blogspot.com Arun

    Rob,

    My understanding of the chain of reasoning is as follows:

    Cepheids were used to calibrate nearby Type Ia supernovae’s absolute magnitudes, and then this calibration puts the Virgo Cluster too far away compared to all other measures.

    This means at least one of the following:
    1. The Cepheid distance scale has a problem.
    2. The supernovae calibrated using Cepheids were unusual.
    3. The nearby supernovae and those in the Virgo Cluster have different absolute magnitudes, which would cast doubt on using them as a standard candle.

    Presumably something like the following is true – and here I’m really guessing – that by the time scale of the light curve and/or the spectrum one can bin Type Ia supernovae into classes, and the supernovae in each class have essentially a unique absolute magnitude.

  • Thomas Dent

    Calling a press conference for something that has been determined with 98% confidence is quite the publicity stunt. That’s barely 2.5 sigma in Gaussian-speak. If particle physicists called the media in every time something was 2.5 sigma out … well, that was the problem with the Higgs pseudo-signal back in 2000, I seem to recall.

    What’s the deal with the spectroscopic determinations, or lack of them?

    PS Was that *the* George Ellis?

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Lab Lemming — according to our best understanding of stellar evolution, white dwarfs are left behind by stars 8 solar masses and lighter. A star just under 8 solar masses lives (if memory serves) less than 100 million years.

    As such, it’s possible to make a white dwarf very quickly after you form a bunch of stars — at least on cosmological time scales.

    I have to go back and remember where I read this, but I think there have also been some studies that suggest that galaxies 1 Gyr after a starburst (starburst galaxies being those forming lots of stars right now) show chemical signatures of enhanced SNe Ia. If that’s right, that would suggest that indeed Type Ia supernovae are *more* common from the white dwarfs left behind by the rarer, more massive stars — and thus would potentially have a short average time to go from gas cloud to Chandrasekhar-mass star.

    Arun — again, I don’t have knowledge of how many or what supernovae were found in the Virgo cluster at my fingertips, so I’d have to dig a bit to figure that out. However, your “bins” thing is almost right. Really, the light curve decay rate is a parameter that varies smoothly with supernova peak luminosity. Even without that correction, though, most Ia supernovae are consistent to 20%. There are a handful of outliers. This isn’t a showstopper, though. As long as you have a lot of them, it’s easy to identify the outliers. And, indeed, that’s the case with the supernovae used for cosmology. If you take the “low redshift” (z < 0.1 or some such) sets that come from the Hamuy and Riess papers of 1998 and before, the dispersion of those supernovae around a Hubble expansion is small. This is the empirical evidence that supernovae are consistent. Add to that the fact that the high-redshift and low-redshift supernovae have consistent spectra, and we’re pretty sure we know what we’re doing.

    Thomas Dent — spectroscopic determinations of supernovae at really high redshift are *hard*. It takes a lot of Hubble Space Telescope time, and even then the signals are often marginal. That’s probably the main reason some of them are lacking. I haven’t read the paper yet, but it may be that there wasn’t time to attempt confirmations for all of the supernovae. It may also be that they attempted it, but the signal was too crappy to see anything convincing.

    -Rob

  • http://www.pierre-bon.com/Intro.htm DonPanic

    Does dark energy mean that Vacuum Abhors A Nature?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    George, you’re right that we could do away with dark energy by imagining that we lived at the center of a spherical inhomogeneity. (At least as far as supernovae and other kinematical tests go; I’m less sure about whether you could simultaneously fit structure formation.) But:

    (1) That would actually be more surprising than a cosmological constant. Anthropic-type explanations would seem even more tempting in such circumstances.

    (2) There’s no reason why such a configuration would give us something extremely close to w = -1, as we seem to be observing. It would be allowed, but so would any other value.

    (3) The “biggest crisis” is really the fact that the vacuum energy is small, and zero would still count. An inhomogeneity wouldn’t solve that problem.

  • http://eskesthai.blogspot.com/2006/11/three-ring-circus-dark-energy.html Plato

    Sean:This latter quality, persistence of the energy density, is sometimes translated as “negative pressure,”

    We needed an explanation for the “why of it,” and I was just wondering about the crossover point at the LHC? More on the name.

  • George Ellis

    Sean says,

    “you’re right that we could do away with dark energy by imagining that we lived at the center of a spherical inhomogeneity. (At least as far as supernovae and other kinematical tests go; I’m less sure about whether you could simultaneously fit structure formation.)”

    Right. Such other tests still need careful consideration.

    ” But: (1) That would actually be more surprising than a cosmological constant. Anthropic-type explanations would seem even more tempting in such circumstances.”

    What is surprising or not is a matter of opinion and philosophical stance. There is no physical experiment to say this is more surprising than that; and if there were, it would still not *prove* anything about the way the universe actually is – sometimes reality is indeed very surprising. So this is an example of how much of modern cosmology, despite the appearances, is philosophically driven rather than data-driven. There is nothing wrong with that, but it should be acknowledged.

    “(2) There’s no reason why such a configuration would give us something extremely close to w = -1, as we seem to be observing. It would be allowed, but so would any other value.”

    And there is no reason why there should be a cosmological constant or quintessence or whatever with the observed values.

    “(3) The “biggest crisis” is really the fact that the vacuum energy is small, and zero would still count. An inhomogeneity wouldn’t solve that problem. ”

    Yes, but there used to be the assumption that something (supersymmetry?) would cause cancellations leading to an exact zero, while a value of 10^{-80} or so would require huge fine tuning. The zero would in some sense be more natural. Of course this is again a philosophical argument.

    What is actually happening in the way things are done at present is that the assumption of spatial homogeneity is put in by hand, and then used to derive an equation of state for “dark energy” that then follows from the astronomical data. A geometrical assumption is used to determine the physics that would lead to that desired geometrical result. So the question is: what independent test could there be of that supposed physics? Will it explain anything else other than the one item it was invented to explain?

    Now you could claim of course that inflation would prevent any such inhomogeneities occurring (indeed I am surprised this was not on your list!). But inflation is a flexible enough subject that it can probably be varied enough to include such inhomogeneities. You can probably run the equations backwards to get a potential that will give the required result.

  • absolutely

    I like Ned Wright’s comment on his Cosmology Tutorial website:

    News of the Universe
    NASA fails to produce new data on dark energy
    16 Nov 06 – NASA held a press telecon today about dark energy, but neither the press release nor the images accompanying it contained any useful information. There was no paper about the data on the preprint server either.

  • http://lablemminglounge.blogspot.com/ Lab Lemming

    So an 8 s.m. star ends up as a 1.4 s.m. white dwarf? I guess I was incorrectly assuming mass conservation throughout the star’s lifetime. Thanks for clearing that up.

  • Joseph Smidt

    Lab Lemming- Mass/energy is conserved. A star becoming a white dwarf will throw off its outer layers to form a planetary nebula, leaving behind a core (mass loss). The core then gradually cools down and radiates away energy, supported against gravitational collapse only by electron degeneracy pressure. After it is all said and done it will end up as an object of at most 1.4 s.m., so conservation of mass/energy is not violated. Here is a cool picture of a planetary nebula: http://antwrp.gsfc.nasa.gov/apod/ap061112.html

  • http://www.astro.ucla.edu/~wright/cosmolog.htm Ned Wright

    This is what I put on “The News of the Universe” part of my cosmology tutorial:

    ————–

    NASA fails to produce new data on dark energy

    16 Nov 06 – NASA held a press telecon today about dark energy, but neither the press release nor the images accompanying it contained any useful information. There was no paper about the data on the preprint server either.

    ————–

    It might be a good idea to stop participating in press events where the PIO has been so totally successful in Preventing Information Outflow.

    In any case, your w = -0.8 (+0.6, -1.0) for z > 1 is the only quantitative result available. It also cannot possibly be correct without caveats. Certainly the data must be consistent with w = 0 for z > 10. Or w could be 0 for z between 1.40 and 1.41.

    I expect the analysis assumed a flat Universe, which is either faith-based following the prophet Guth, or a circular argument based on the consistency of all data with a flat lambda-CDM model which assumes w = -1. Then a correct statement of the significance of the results is that the set of all data has grown by a few percent and it is still all consistent with flat lambda-CDM.

  • Abe

    I apologize ahead of time for my ignorance – I’m probably the equivalent of my mechanic buddy’s arch-nemesis: the “guy who thinks he knows more than he does”.

    Anyhow, I am just throwing this out there for thoughts. I read the CNN article, and it spurred me to post here because I had recently had a traffic jam “brain storm”.

    So here’s the resulting question/comment:
    Has it ever been proposed that the big bang was closer to a massive “crystallization event” than an explosion? I ask because I recall watching, as a child, a supersaturated sugar solution “instantly” crystallize from a seed or a major disturbance to the container.

    I wondered then: what if dark matter is really just the core solution of everything? And at some point billions of years ago, a huge, super hot solution of dark matter “soup” just had a huge “crystallization event,” with matter as we know it falling out of solution and propelled away as it no longer mixed well with its parent material?

    I wonder if there’d be “dark matter paths” towards the interior of the universe to replace the matter “dropping out” of that primordial solution?

    ugh. I know… crazy talk. :)

    I’m sorry. I don’t pretend to know anything concrete in this field, but figure even as a common joe – it may be a “brainstorming” idea worth at least a mention.

    (P.S. While I’m out here feeling self-conscious, I’ll add a follow-up thought: perhaps the “disturbance event” which triggered the mass crystallization originated in an adjacent universe/dimension?)

    FWIW

    Abe Miller

  • Pingback: Coast to Coast | Cosmic Variance
