Evolving dark energy?

By Sean Carroll | January 11, 2006 8:16 pm

Don’t be surprised if you keep reading astronomy stories in the news this week — the annual meeting of the American Astronomical Society is underway in Washington DC, and it’s common for groups to announce exciting results at this meeting. Today there was a provocative new claim from Bradley Schaefer at Louisiana State University — the dark energy is evolving in time! (Read about it also from Phil Plait and George Musser.)

Short version of my own take: interesting, but too preliminary to get really excited. Schaefer has used gamma-ray bursts (GRBs) as standard candles to measure the distance vs. redshift relation deep into the universe’s history — up to redshifts greater than 6, as opposed to ordinary supernova studies, which are lucky to get much past redshift 1. To pull this off, you want “standard candles” — objects that are really bright (so you can see them far away), and have a known intrinsic luminosity (so you can infer their distance from how bright they appear). True standard candles are hard to find, so we settle for “standardizable” candles — objects that might vary in brightness, but in a way that can be correlated with some other observable property, and therefore accounted for. The classic example is Cepheid variables, which have a relationship between their oscillation period and their intrinsic brightness.
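To make the distance vs. redshift idea concrete, here is a minimal numerical sketch (not Schaefer's analysis; the cosmological parameters are assumed round numbers) of how a luminosity distance and distance modulus are computed in a flat ΛCDM model, comparing the supernova frontier near redshift 1 with the deepest GRBs near redshift 6:

```python
# Minimal sketch (illustrative parameters, not Schaefer's analysis): luminosity
# distance and distance modulus in a flat LambdaCDM universe with constant
# dark energy density, evaluated near the supernova frontier (z ~ 1) and near
# the deepest GRBs (z ~ 6).
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458       # speed of light [km/s]
H0 = 70.0                 # Hubble constant [km/s/Mpc] (assumed round number)
OMEGA_M = 0.3             # matter density parameter (assumed)
OMEGA_L = 1.0 - OMEGA_M   # flatness fixes the dark energy density parameter

def hubble_ratio(z):
    """E(z) = H(z)/H0 for flat LambdaCDM with w = -1 dark energy."""
    return np.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

def luminosity_distance_mpc(z):
    """d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z') in a flat universe."""
    integral, _ = quad(lambda zp: 1.0 / hubble_ratio(zp), 0.0, z)
    return (1.0 + z) * (C_KM_S / H0) * integral

def distance_modulus(z):
    """mu = m - M = 5 log10(d_L / 10 pc)."""
    return 5.0 * np.log10(luminosity_distance_mpc(z) * 1.0e6 / 10.0)

for z in (1.0, 6.3):
    print(f"z = {z}: d_L ~ {luminosity_distance_mpc(z):.0f} Mpc, "
          f"mu ~ {distance_modulus(z):.2f}")
```

A standardizable candle with known intrinsic luminosity lets you read the distance modulus μ = m − M off the observed apparent magnitude m and compare it with curves like this for different cosmologies.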

Certain supernovae, known as Type Ia’s, have quite a nice correlation between their peak brightness and the time it takes for them to diminish in brightness. That makes them great standardizable candles, since they’re also really bright. GRBs are much brighter, but aren’t nearly so easy to standardize — Schaefer used a model in which five different properties were correlated with peak brightness (details). The result? The best fit is a model in which the dark energy density (energy per cubic centimeter) is gradually growing with time, rather than being strictly constant.
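As a toy illustration of what "standardizing" a candle means (the relation and coefficients below are invented for illustration; they are not the actual supernova or GRB calibrations), the raw peak magnitude is corrected using an empirically fitted correlation with a second observable:

```python
# Toy standardization of a candle: correct the raw peak magnitude using an
# empirically calibrated correlation with a second observable (here a
# hypothetical "decline rate"). Coefficients are invented for illustration.
ALPHA = 0.7           # hypothetical slope of the brightness/decline-rate relation
REFERENCE_RATE = 1.1  # hypothetical reference decline rate

def standardized_peak_mag(m_peak_obs, decline_rate):
    """Corrected peak magnitude; by construction of the calibration it has
    much smaller scatter than the raw observed magnitude."""
    return m_peak_obs - ALPHA * (decline_rate - REFERENCE_RATE)

# Two objects with different light-curve shapes end up on a common footing:
print(standardized_peak_mag(24.3, 1.4))
print(standardized_peak_mag(23.9, 0.8))
```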

[Figure: GRB Hubble Diagram]

If it’s true, this is an amazingly important result. There are four possibilities for why the universe is accelerating: a true cosmological constant (vacuum energy), dynamical (time-dependent) dark energy, a modification of gravity, or something fundamental being missed by all of us cosmologists. The first possibility is the most straightforward and most popular. If it’s not right, the set of theoretical ideas that physicists pursue to help explain the acceleration of the universe will be completely different than if it is right. So we need to know the answer!

What’s more, the best-fit behavior for the dark energy density seems to have it increasing with time, as in phantom energy. In terms of the equation-of-state parameter, w is less than -1 (or close to -1, but with a positive derivative w’). That’s quite bizarre and unexpected.
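For readers who want the one-line reason why w < -1 corresponds to a growing density: for a constant equation of state, the continuity equation fixes how the dark energy density depends on the scale factor a (a sketch in standard notation, not taken from the paper):

```latex
% Continuity equation \dot{\rho} = -3H(\rho + p), with p = w\rho and w constant:
\rho_{\rm DE}(a) \;=\; \rho_{\rm DE,0}\, a^{-3(1+w)}
% w = -1: constant density (a cosmological constant);
% w < -1: the exponent is positive, so the density grows as the universe expands.
```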

[Figure: GRB w plot]

As I said, at this point I’m a bit skeptical, but willing to wait and see. Most importantly, the statistical significance of the finding is only 2.5σ (97% confidence), whereas the informal standard in much of physics for discovering something is 3σ (99% confidence). As a side worry, at these very high redshifts the effect of gravitational lensing becomes crucial. If the light from a GRB passes nearby a mass concentration like a galaxy or cluster, it can easily be amplified in brightness. I am not really an expert on how important this effect is, nor do I know whether it’s been taken into account, but it’s good to keep in mind how little we know about GRB’s and the universe at high redshift more generally.
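For readers who want to translate "sigmas" into confidence levels (the exact percentage depends on whether a one-sided or two-sided probability is quoted, so round numbers like those above are approximate), a quick Gaussian conversion looks like this:

```python
# Convert a Gaussian significance ("number of sigma") to a confidence level.
# Whether the one-sided or two-sided number is relevant depends on the question
# being asked, which is why quoted percentages vary.
from scipy.stats import norm

for n_sigma in (2.5, 3.0, 5.0):
    two_sided = 1.0 - 2.0 * norm.sf(n_sigma)   # P(|x| < n_sigma)
    one_sided = 1.0 - norm.sf(n_sigma)         # P(x < n_sigma)
    print(f"{n_sigma} sigma: two-sided {two_sided:.4%}, one-sided {one_sided:.4%}")
```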

So my betting money stays on the cosmological constant. But the odds have shifted, just a touch.

Update: Bradley Schaefer, author of the study, was nice enough to leave a detailed comment about what he had actually done and what the implications are. I’m reproducing it here for the benefit of people who don’t necessarily dip into the comments:

Sean has pointed me to this blog and requested me to send along any comments that I might have. His summary at the top is reasonable.

I’d break my results into two parts. The first part is that I’m putting forward a demonstration of a new method to measure Dark Energy by means of using GRBs as standard candles out to high redshift. My work is all rather standard, with most everything I’ve done just following what has been in the literature.

The GRB Hubble Diagram has been in print since 2003, with myself and Josh Bloom independently presenting early versions in public talks as far back as 2001. Over the past year, several groups have used the GRB Hubble Diagram to start putting constraints on cosmology. This prior work has always used only one GRB luminosity indicator (various different indicators for the various papers) and no more than 17 GRBs (neglecting GRBs with only limits).

What is new is that I am using much more data and I’m directly addressing the question of whether the Dark Energy changes. In all, I am using 52 GRBs, and each GRB has 3-4 luminosity indicators on average. So I’ve got a lot more data. And this allows for a demonstration of the GRB Hubble Diagram as a new method.

The advantages of this new method are that it goes to high redshift (it looks at the expansion history of the Universe from redshift 1.7 to 6.3) and that it is impervious to extinction. Also, I argue that there should be no evolution effects, as the GRB luminosity indicators are based on energetics and light travel time (which should not evolve). Another advantage is that we have the data now, with the size of the database to be doubled within two years by HETE and Swift.

One disadvantage of the GRB Hubble Diagram is that the GRBs are lower in quality than supernovae. Currently my median one-sigma error bar is 2.6 times worse when comparing a single GRB with a single supernova. But just as with supernovae, I expect that the accuracy of GRB luminosities can be rapidly improved. [After all, in 1996 I was organizing debates among the graduate students as to whether Type Ia SNe were standard candles or not.] Another substantial problem that is hard to quantify is that our knowledge of the physical processes in GRBs is not perfect (and certainly much worse than what we know for SNe). It is rational and prudent for everyone to worry that there are hidden problems (although I now know of none). A simple historical example is how Cepheids were found to have two types with different calibrations.

So the first part of my talk was simply presenting a new method for getting the expansion history of the Universe from redshifts up to 6.3. For this, I am pretty confident that the method will work. Inevitably there will be improvements, new data, corrections, and all the usual changes (just as for the supernovae).

The second part of my talk was to point out the first results, which I could not avoid giving. It so happens that the first results point against the Cosmological Constant. I agree with Sean that this second part should not be pushed, for various reasons. Foremost is that the result is only 2.5-sigma.

Both parts of my results are being cast onto a background where various large groups are now competing for a new dedicated satellite.

CATEGORIZED UNDER: Science
  • LambchopofGod

    I’m betting on a phantom universe. Everything has turned out so weirdly since 1998 that it would be perverse for the Universe to start behaving reasonably. Come on, Mother Nature, go the whole hog!

  • Moshe

    That’s really interesting. Are there other (independent) ways of measuring w that are expected soon?

    Incidentally, the use of GRB as standard candles was discussed here in the comment section of http://blogs.discovermagazine.com/cosmicvariance/2005/09/12/cosmic-violence/, I remember that since it was so much fun…

  • http://blogs.discovermagazine.com/cosmicvariance/clifford/ Clifford

    Yeah I remember that ‘cos I made some stupid ill-thought-out remarks in the early part of the thread. Was hoping nobody would remember those…. sigh.

    -cvj

  • http://blogs.discovermagazine.com/cosmicvariance/mark/ Mark

    Moshe, our best hope right now is that detailed measurements of the way in which primordial fluctuations lead to later perturbations in the matter density will cast additional light on the question and perhaps even allow us to distinguish between dark energy models and modifications of gravity.

  • Moshe

    Hey, you are in charge of the eraser…(and you are exaggerating…), the thread developed into quite an informative discussion once the real experts joined in. What are the chances of getting such up-to-date specific information, from experts you don’t know personally, w/o the help of a blog?

  • http://blogs.discovermagazine.com/cosmicvariance/clifford/ Clifford

    Actually, I try to limit my use of the power of deletion. If I make a stupid remark as part of a discussion, I leave it there as a record of what actually took place in the discussion…just like non-host contributors have their remarks frozen for all to see. It seems only fair to leave it there: As long as it is not offensive to anyone….. (or just soooooooo stupid…..)

    Ok Moshe: We’d better get back to the physics. Recall what happened on that thread of JoAnne’s that time. :-)

    Cheers,

    -cvj

  • Moshe

    Thanks Mark (and good idea Clifford…).

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    If I make a stupid remark on the blog, I generally wait a couple of weeks and edit it so that it appears under one of my co-blogger’s names.

    Daniel Holz informs me that Schaefer did at least consider the effects of lensing and claims they are under control. On the other hand, George Musser quotes anonymous sources (just like politics!) as saying “It’s flat wrong” and “Don’t waste your time.” As usual, more data will tell.

  • Elliot

    Layman’s interlude….

    OK, so let’s assume for a moment that the data holds up. Then from the above commentary we are left with:

    1. dynamical time dependent dark energy

    Or

    2. some type of change in gravity itself

    Or

    3. ????? Something else

    2 Questions

    Is one of these 3 remaining alternatives the lead candidate and why?

    If it turns out that dark energy is time varying, would it be possible to experimentally determine when it “began”? I use that term in a very loose, naive way, deliberately: if it varies in time, then at some point it may have been close to or equal to 0. I guess what I am asking is whether it could be traced back to inflation, or whether it would have “begun” at some later time?

    Thanks,

    Elliot

  • http://valatan.blogspot.com bittergradstudent

    Elliot–

    One would assume that, in a time-variant dark energy model, the only time it truly ‘began’ was the big bang. Essentially, the universe would start with this soup of particles whose density is dominated by radiation, evolve into the current cold dark matter dominated era, with the particles frozen into protons, electrons, and neutrinos and the radiation component negligible, and then evolve into a dark energy dominated era.

    The weirdness of this whole thing is that the evolution from radiation domination to matter domination comes from the fact that if you take a box full of photons and then expand the walls, the energy density of the photons falls as the fourth power of the box size, while the density of cold dark matter falls only as the third power (see the sketch after this comment). If this prediction is correct, then the density of dark energy would actually increase.

    One question: Has anyone tried to model all of this by representing the universe as a 4-D “bubble” in a 5-D spacetime with matter? Cause then one could just think of this as the effects of the pressure from the 5-D matter pushing in on the “bubble.”
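A small numerical sketch of the scaling argument in the comment above (the present-day density parameters are assumed round numbers, not fitted values): radiation dilutes as a^-4, matter as a^-3, and dark energy as a^(-3(1+w)), which is constant for w = -1 and growing for w < -1.

```python
# Which component dominates at which scale factor a (a = 1 today)?
# Assumed round numbers for today's density parameters, for illustration only.
OMEGA_R, OMEGA_M, OMEGA_DE = 8e-5, 0.3, 0.7   # radiation, matter, dark energy
W = -1.0                                      # dark energy equation of state

def densities(a):
    """Component densities at scale factor a, in units of today's critical density."""
    rho_r = OMEGA_R * a**-4                    # photons: diluted and redshifted
    rho_m = OMEGA_M * a**-3                    # matter: diluted by volume only
    rho_de = OMEGA_DE * a**(-3.0 * (1.0 + W))  # constant for w = -1
    return {"radiation": rho_r, "matter": rho_m, "dark energy": rho_de}

for a in (1e-4, 1e-3, 0.1, 1.0):
    rho = densities(a)
    print(f"a = {a:g}: dominated by {max(rho, key=rho.get)}")
```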

  • http://blogs.discovermagazine.com/cosmicvariance/mark/ Mark

    Yes Elliot, with the caveat that it is quite consistent with observations to have an oscillating dark energy model (i.e. one in which dark energy was important at many epochs), so long as the oscillations behave the right way.

    About the 5D question. People certainly have tried to explore whether cosmic acceleration can be due to extra dimensional dynamics. I can’t say there’s a counter-example, but nothing compelling has resulted yet to the best of my knowledge.

  • http://arunsmusings.blogspot.com Arun

    I’ve a different question about dark matter – does galactic dark matter share the rotation of the galaxy? (If it does, why doesn’t it also hit the Cooperstock and T. problem?)

  • Luke

    USA Today, of all places, says Adam Riess is the source of the “It’s wrong, just wrong” quote that Musser cited.

  • Elliot

    Thanks Mark/BGS,

    Then we are to suppose that, if this is true, something even weirder than we thought is likely going on. D. E. was already pretty weird, but the cosmological constant idea seemed to at least sort of make sense.

    So what does this do to the “anthropic” approach to “predicting” the value of the cosmological constant?

    (we hear the sound of giggling even from a naive layperson in the background)

    Elliot

  • Dissident
  • http://eskesthai.blogspot.com/2005/10/microstate-blackhole-production.html Plato
  • http://feynman137.tripod.com/ Science

    Plato, thanks for that link. What people don’t want to see is quantum field theory concepts applied to cosmology. Assume quantum gravity is right. Gauge bosons, gravitons, are exchanged between masses to produce gravitational force.

    This alone is very predictive, because we know the speed of the gauge bosons (light speed, from tests of general relativity), so they’re coming from time-past. There seems to be some kind of fact-blindness which says that calculating gravity this way is just speculation or a pet theory, when QFT implies it. Suppose people just don’t want to see the facts: http://feynman137.tripod.com

  • http://pantheon.yale.edu/~eal48 Eugene

    Interesting. GRBs as luminosity candles have been bandied around for years…in fact I spent a year (before I jumped ship to work with Sean) working with Dan Reichert, Carlos Graziani and Don Lamb trying to do one of these things Schaefer used: Variability vs Luminosity. At that time there were some claims that the more variable the GRB bursts are, the more luminous they are. But those claims were made using really poor statistics. We wanted to do it right with good statistics, and it turned out to be a red herring. I think Dan spent more time working on it after that, but I stopped following the issue.

    So I guess I am a bit surprised to see it rear its head again; it could be that they used more recent and better quality data (specifically HETE and SWIFT data, which were not available to us). I wonder what Don has to say about this…

  • http://valatan.blogspot.com bittergradstudent

    Elliot–

    Don’t worry about this, most of this stuff was completely new to me when I went into grad school.

    Using it to predict the CC is probably an overreach, although people say they do this. What it does is give an acceptable range for what it could be.

    The idea is: if the cosmological constant is too large, then it is impossible for galaxies to form. The CC causes stuff to fly apart, and if it is larger than a certain value, then gravitationally bound states don’t really exist, for the most part. If it is impossible for galaxies to form, then we wouldn’t be here to see them. Therefore, the CC must lie between zero and a number of order 10^-120 if we are to exist. A bunch of other versions of this reasoning exist, where they look to see whether or not atoms can form, amino acids can begin to coalesce, etc.
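A rough, schematic version of that structure-formation estimate (a Weinberg-style bound stated in order-of-magnitude form, not a careful derivation):

```latex
% For gravitationally bound structures to form, the vacuum energy must not come
% to dominate before matter perturbations at redshift z_gal go nonlinear:
\rho_\Lambda \;\lesssim\; \rho_{\rm m}(z_{\rm gal}) \;=\; \rho_{\rm m,0}\,(1+z_{\rm gal})^{3}
% With z_gal of a few, this allows a vacuum energy at most a few hundred times
% today's matter density, which in Planck units is comparable to the tiny
% 10^{-120}-ish figure quoted above, rather than the order-one value naively
% expected from quantum field theory.
```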

  • JustAnotherGradStudent

    I remember a few years ago some Australian group (Webb et al.) found evidence for time variation in the fine structure constant by looking at distant quasars (astro-ph/0012419). I haven’t heard any recent news on this front, but was wondering if perhaps the variation observed by Schaefer could be at all related to this previous observation (or the converse, rather). I know the Aussies checked for a lot of systematic errors, but I’m pretty sure they would have missed a varying lambda!

  • Brad Schaefer

    Sean has pointed me to this blog and requested me to send along any comments that I might have. His summary at the top is reasonable.
    I’d break my results into two parts. The first part is that I’m putting forward a demonstration of a new method to measure Dark Energy by means of using GRBs as standard candles out to high redshift. My work is all rather standard, with most everything I’ve done just following what has been in the literature.
    For this, what is new is that I am using

  • http://goatsreadingbooks.blogspot.com Tim D

    Hi Eugene,

    “I wonder what Don has to say about this…”

    I think he’s fairly skeptical. He’s quoted in the NYT article if you want to check out what he says. :)

    As a GRB guy, I’ll have to say that I’m skeptical also. I’ve done a bit of work on the “standardizing” relations that Schaefer is using, and I have some big reservations about his method. He’s essentially trying a “kitchen sink” method of luminosity estimators: he combines six different estimators of varying reliability and well-testedness. He also includes many GRBs multiple times using different estimators. It’s really hard to say anything about what systematic errors might be plaguing this sort of analysis. And only a 2.5 sigma result on top of that? Hmmmm.

    One question I have for the real cosmologists on this blog is the usefulness of parametrizing the dark energy with w-prime (first order Taylor expansion) given the wide range of redshifts of the GRB sample (z = 0.1 – 6.3) and the relatively recent importance of DE. Thoughts on whether this is legit or not?

    I think that the GRB Hubble diagram will someday be a contendah in the cosmology game, but only with a larger sample of bursts and a single physically motivated and better-understood luminosity estimator. With time we should be able to do this right.

  • Brad Schaefer

    ****Oops, this message is broken up by my accidentally hitting a return after a tab. The network link here at the AAS meeting is slow and balky. The message will now be continued****

    The GRB Hubble Diagram has been in print since 2003, with myself and Josh Bloom independently presenting early versions in public talks as far back as 2001. Over the past year, several groups have used the GRB Hubble Diagram to start putting constraints on cosmology. This prior work has always used only one GRB luminosity indicator (various different indicators for the various papers) and no more than 17 GRBs (neglecting GRBs with only limits).

    What is new is that I am using much more data and I’m directly addressing the question of whether the Dark Energy changes. In all, I am using 52 GRBs, and each GRB has 3-4 luminosity indicators on average. So I’ve got a lot more data. And this allows for a demonstration of the GRB Hubble Diagram as a new method.

    The advantages of this new method are that it goes to high redshift (it looks at the expansion history of the Universe from redshift 1.7 to 6.3) and that it is impervious to extinction. Also, I argue that there should be no evolution effects, as the GRB luminosity indicators are based on energetics and light travel time (which should not evolve). Another advantage is that we have the data now, with the size of the database to be doubled within two years by HETE and Swift.

    One disadvantage of the GRB Hubble Diagram is that the GRBs are lower in quality than supernovae. Currently my median one-sigma error bar is 2.6 times worse when comparing a single GRB with a single supernova. But just as with supernovae, I expect that the accuracy of GRB luminosities can be rapidly improved. [After all, in 1996 I was organizing debates among the graduate students as to whether Type Ia SNe were standard candles or not.] Another substantial problem that is hard to quantify is that our knowledge of the physical processes in GRBs is not perfect (and certainly much worse than what we know for SNe). It is rational and prudent for everyone to worry that there are hidden problems (although I now know of none). A simple historical example is how Cepheids were found to have two types with different calibrations.

    So the first part of my talk was simply presenting a new method for getting the expansion history of the Universe from redshifts up to 6.3. For this, I am pretty confident that the method will work. Inevitably there will be improvements, new data, corrections, and all the usual changes (just as for the supernovae).

    The second part of my talk was to point out the first results, which I could not avoid giving. It so happens that the first results point against the Cosmological Constant. I agree with Sean that this second part should not be pushed, for various reasons. Foremost is that the result is only 2.5-sigma.

    Both parts of my results are being cast onto a background where various large groups are now competing for a new dedicated satellite.

  • Elliot

    How powerful is this medium? I think the fact that the author of the study is here commenting directly in this thread is a prime example of how effective it can be.

    Elliot

  • http://pantheon.yale.edu/~eal48 Eugene

    Hey Tim,

    Good to hear from you!

    Thanks for the reply. I’ll go hunt down Don’s comment.

    On your question about w’ being the parameter for varying DE: it’s the standard thing that people do. But that does not mean it’s the only thing, nor the right thing. In fact, once you parameterize this way (say by Taylor expanding it), you are secretly restricting the possible class of w(z) in the total parameter space. For example, if you parameterize it using w = w_0 + w’z, fit your observations to it, find that w’ has to be very small, and then conclude that rapidly evolving w(z) is ruled out, you are making a mistake. This is because you have ruled out rapidly evolving w(z) by the choice of models.

    It’s legit, but one has to be careful about what it means.
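As a concrete toy version of this point (everything below is invented for illustration, with a deliberately cartoonish w(z)): project a rapidly oscillating equation of state onto the linear form w = w_0 + w'z, and the best-fit w' comes out small even though w(z) is anything but slowly varying.

```python
# Toy version of the parameterization caveat: a rapidly oscillating w(z) is
# projected onto the restricted linear form w(z) = w0 + w'*z. The best-fit w'
# is small even though w(z) is not slowly varying at all.
import numpy as np

def w_true(z):
    """A hypothetical, deliberately cartoonish oscillating equation of state."""
    return -1.0 + 0.5 * np.sin(10.0 * z)

z = np.linspace(0.0, 1.0, 200)             # a modest redshift range
wprime, w0 = np.polyfit(z, w_true(z), 1)   # least-squares line w0 + w'*z

print(f"best-fit linear model: w0 = {w0:.2f}, w' = {wprime:.2f}")
print(f"true w(z) swings between {w_true(z).min():.2f} and {w_true(z).max():.2f}")
```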

  • Paul Valletta

    Let’s see if I got this right?.. The GRBs are closer to the big bang than our Galaxy…the local expansion field at the GRB must be where all the action is; this is where most of the Universe must be expanding greatest?.. Otherwise, if the local field around our Galaxy were expanding at the same rate, we would physically observe our closest neighbours, i.e. Andromeda, at a vastly greater redshift than what is observed.

    So thus, the increasing expansion must be calibrated in some way to the appearance/increase of excess GRBs?

    The Luminosity Function must itself be tuned to the expansion rate, and therefore the light emanating from a part of the Universe that is expanding at a greater rate would have its light “stretched” locally, at the farthest location from the Milky Way, and could give the impression that its luminosity is greater than it actually is?

  • http://eskesthai.blogspot.com/2005/10/microstate-blackhole-production.html Plato

    Science?

    I’m recovering from CSL-1 infor..ma..tion…….:)

    Ya Paul, I’m having a hard time accepting how this information can be read, considering the lensing that goes on. If information is to travel from that distance, how do we know that the fastest route is not being considered, while the influences along the way can hold this information?

    Would this not probably be one of those stupid statements that we make sometimes? :) Do galaxy formations, as part of the larger expansion process, create curvature parameter readings that are askew?

    I dunno…..

  • Shantanu

    Sean, Brad, others,
    Do these and previous results from the GRB Hubble diagram conclusively rule out claims of photon-axion oscillation (which have been proposed to explain the SN results)?

  • Steve

    My question for Brad (hope he’s still monitoring) relates to his statistical procedures, not the physics.

    The GRB diagram presented above presents a “best fit” curve for magnitude vs. redshift. As both Dan and others attending the original meeting have already commented, the variability of the magnitude data is quite high, especially the next to largest redshift datum at z~5.xx.

    I wonder if Brad fitted his data using linear ordinary least squares, linearizing the model in the variables (using log transformations, for example), or used a form of non-linear regression. Looking at the “details” of “calibrating the luminosity relations” provided in the link, I’d guess he used a linear fitting technique linearized by transformation in the redshift variable.

    If so, I then wonder if Brad took advantage of the variability of the magnitude data in fitting his model. A well-studied fitting technique called variance-weighted least squares is designed to use the variance of the variable on the left-hand side of the model to inversely weight the importance of each data point in the fit according to its error of measurement, data points with high error contributing less to the fit and data points with low error contributing more (the objective is sketched after this comment).

    Given the nature of the GRB data distribution, the sparse and highly variable data at high redshift will unduly influence the entire fit of a linear ordinary least squares regression. Those high-z points will have what statisticians call “high leverage” in the model. Weighted least squares fitting would address some of that problem, could definitely change the difference between the “cosmological constant” curve and the observed curve, and could also alter the calculated probability that the two curves differ by chance alone if only ordinary least squares fitting was used for the presented fits.

    It would also appear that the fitted log(L) – log(V) relationship shown in “calibrating the luminosity relations” violates the assumption of homogeneity of error variance that ordinary least squares fitting requires, so perhaps I’m wrong in assuming that this was the fitting technique used for the magnitude – z relationship.

    Though I’m writing from ignorance of details that might have already been presented and of standard practices in handling such data in physics, I’d be interested in hearing comments or corrections from Dan, Sean, and the rest of the list.
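For reference, the inverse-variance weighting described above amounts to minimizing a chi-square of the standard form (generic notation, not tied to any particular dataset):

```latex
% Variance-weighted ("inverse-variance") least squares: choose parameters \theta
% to minimize
\chi^2(\theta) \;=\; \sum_i \frac{\bigl[\,y_i - f(x_i;\theta)\,\bigr]^2}{\sigma_i^2}
% so that data points with large measurement errors \sigma_i contribute little
% to the fit, and points with small errors dominate it.
```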

  • Brad Schaefer

    Here are answers to the questions posed earlier:

    I don’t know what the GRB Hubble Diagram has to say about photon-axion oscillations.

    The effects of gravitational lensing are to magnify and demagnify the GRBs, making them appear brighter or dimmer than would be deduced from the GRB luminosity indicators. This will cause some ‘random’ noise in the vertical direction of the Hubble Diagram. This noise gets larger as we go to higher redshift, because the line of sight will pass near more and more galaxies. The same problem exists for supernovae, but for GRBs the effects will be larger due to the higher redshifts. Premadi & Martel (astro-ph/0503446) show that the effects don’t rise linearly: the 1-sigma scatter (in the distance modulus) in the Hubble Diagram is ~0.10 mag at z=1 and ~0.34 mag at z=5. This is generally smaller than my typical error, so the effect won’t dominate. Lensing conserves flux, and with the *average* magnification being unity there should simplistically be no effect on the observed shape of the Hubble Diagram. But with the distribution being slightly skewed, any offsets will depend on the numbers of GRBs (of which I have 52 [9 in the z>3 ‘bin’]) as described by Holz & Linder (2005, ApJ, 631, 678). The real story is more complex, as it depends on whether more GRBs just outside the limiting distance are magnified into the sample than GRBs just inside the limiting distance are demagnified out of the sample. The limits for inclusion in the sample of GRBs with redshifts are fuzzily known, so this calculation is not easy. Holz tells me that he would expect the effects to be small (perhaps 0.05 mag in the distance modulus at z~6) when comparing high and low redshifts; and such effects are not significant given my larger error bars. A real calculation is needed to be sure of all this.

    Steve asked whether I used ordinary least squares techniques for fitting the five calibration relations. As he points out, the error bars vary substantially from burst to burst, so some account must be made of this. I do so by minimizing the chi-square of the fit, as this has the natural variance in the denominator. Another point is that the observed scatter about the best-fit calibration relations is substantially larger than expected from the measurement errors on the indicators alone. This implies that there is some additional scatter, caused for example by gravitational lensing. This is no surprise, as for example we don’t know the best way to define the ‘variability’. I model this by adding in quadrature a constant plus the Premadi & Martel lensing scatter to the scatter from the measurement errors. I vary the constant until the best-fit reduced chi-square is unity. Another point is that I do the fit in log-log space. This is because (a) the expected relations are power laws from theory, (b) the error sources are multiplicative, and (c) the observed error distributions appear Gaussian in log-space. From the best-fit calibration curves, I get values of Log(L) for each measured indicator for each burst. Each will have an associated error bar, generally dominated by the systematic errors. Then I combine the various Log(L) values for each burst in a weighted average, to get a combined Log(L) which will then yield a distance modulus.
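A schematic sketch of the procedure described in this comment (placeholder data and a single fake calibration relation, not Schaefer's actual code or numbers): fit a power law in log-log space by chi-square minimization, add an extra scatter term in quadrature until the reduced chi-square is unity, then combine the per-burst Log(L) estimates with an inverse-variance weighted average.

```python
# Schematic sketch of the described fitting procedure, with placeholder data.
import numpy as np

rng = np.random.default_rng(0)

# Fake calibration sample: log(indicator) vs. log(L) with per-point measurement
# errors and some intrinsic scatter beyond those errors.
log_x = rng.uniform(-1.0, 1.0, 40)
sigma_meas = rng.uniform(0.05, 0.25, 40)
log_L = 52.0 + 1.5 * log_x + rng.normal(0.0, 0.3, 40)

def fit_with_extra_scatter(log_x, log_L, sigma_meas):
    """Weighted linear fit in log-log space; a constant extra scatter is added
    in quadrature (found by bisection) so that the reduced chi-square is unity."""
    def evaluate(sigma_int):
        sigma_tot = np.sqrt(sigma_meas**2 + sigma_int**2)
        slope, intercept = np.polyfit(log_x, log_L, 1, w=1.0 / sigma_tot)
        resid = log_L - (intercept + slope * log_x)
        red_chi2 = np.sum((resid / sigma_tot)**2) / (len(log_L) - 2)
        return red_chi2, slope, intercept

    lo, hi = 0.0, 2.0                  # bracket for the extra scatter (dex)
    for _ in range(60):                # reduced chi2 falls as sigma_int grows
        mid = 0.5 * (lo + hi)
        red_chi2, slope, intercept = evaluate(mid)
        lo, hi = (mid, hi) if red_chi2 > 1.0 else (lo, mid)
    return slope, intercept, mid

slope, intercept, sigma_int = fit_with_extra_scatter(log_x, log_L, sigma_meas)
print(f"calibration: log L = {intercept:.2f} + {slope:.2f} log x "
      f"(extra scatter {sigma_int:.2f} dex)")

# Combine the log(L) values implied by several indicators for one burst.
logL_est = np.array([52.3, 52.6, 52.4])   # hypothetical per-indicator estimates
logL_err = np.array([0.35, 0.50, 0.40])   # their one-sigma errors
weights = 1.0 / logL_err**2
logL_combined = np.sum(weights * logL_est) / np.sum(weights)
logL_combined_err = 1.0 / np.sqrt(np.sum(weights))
print(f"combined log L = {logL_combined:.2f} +/- {logL_combined_err:.2f}")
```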

  • http://eskesthai.blogspot.com/2005/10/microstate-blackhole-production.html Plato

    As a layman I wonder: if you implicate Lagrange points, how would this change the way you see?

    It was phrased earlier as information “skewed,” but held in context of that implication, might we have raised the bar?

    You had to look at dark energy in context, a little differently, as it pervades the universe in that expansionary process?

  • Steve

    Brad, thanks for the additional details of your fitting procedures. Could you point us to a prepublication preprint of your work or all the slides of your presentation, or post the details of the Cosmological Constant (CC) “fit” and those of your fit to the GRB data? I’d like to see the equations and the constants (with standard errors) that were derived, as well as the formal test between observation and CC prediction.

    On another issue: if theory predicts that scatter in the GRB magnitude estimates will increase with increasing redshift, it will require an enormous amount of high-redshift GRB data to reach the very low probability values that the physics community appears to demand for accepting the evidence for a significant difference between observation and CC predictions. Will such additional high-redshift GRB data become available in the foreseeable future? Do you believe that the substantial background gamma count really represents presently unresolved GRBs, implying that there is an enormous number of high-redshift GRBs just waiting for detection and measurement with improved techniques and continued observation? Is, in fact, the background gamma count higher than that predicted for black-body radiation derived from the various big bang models and cosmological background radiation observations?

    Questions, questions!

  • Steve

    Sorry for my request for slides of the presentation, above. I see that those questions have already been addressed by Brad at http://www.phys.lsu.edu/GRBHD/details/ .

  • Pingback: Cycle Quark » Can Gamma Ray Bursts Be Used to Measure Dark Energy()


  • Shantanu

    Sean, how strong is the statistical evidence for dark energy from ONLY Type Ia SNe (with the latest data)? In astro-ph/0207347 (which probably includes data up to 2001 or so) they claim the evidence is only 2 sigma. Recently astro-ph/0511628 claimed that the latest SN data rule out all cosmologies. So the question is: how strong is the statistical evidence for dark energy from the latest SN data ONLY?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Shantanu, newer data are certainly better: see astro-ph/0309368 or astro-ph/0402512. And they certainly don’t rule out all cosmologies; they’re perfectly consistent with ordinary LambdaCDM.

    But supernovae by themselves are not an extremely statistically significant indicator of the existence of dark energy; I don’t know how many sigma, but it’s just a few. That’s because you can come close to fitting them if you assume the universe is nearly empty of matter and highly spatially curved. And we know those things just aren’t true: we’ve measured the matter density from dynamics, and the CMB tells us that there is no appreciable spatial curvature. In a flat universe, the supernovae require dark energy at more than ten sigma.
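A quick numerical illustration of Sean's point (this extends the earlier flat-universe sketch to curved models; the parameters are assumed round numbers): the distance moduli of a flat (Omega_m, Omega_Lambda) = (0.3, 0.7) universe and of a nearly empty, open, Lambda-free universe differ by only roughly a tenth of a magnitude out to z ~ 1.5, which is why supernovae alone are far less decisive than supernovae combined with flatness from the CMB and an independently measured matter density.

```python
# Illustrative comparison of distance moduli in flat LambdaCDM vs. a nearly
# empty open universe with no dark energy (assumed round parameters).
import numpy as np
from scipy.integrate import quad

C_KM_S, H0 = 299792.458, 70.0   # speed of light [km/s], assumed H0 [km/s/Mpc]

def distance_modulus(z, omega_m, omega_l):
    omega_k = 1.0 - omega_m - omega_l                # curvature parameter
    E = lambda zp: np.sqrt(omega_m * (1 + zp)**3 + omega_k * (1 + zp)**2 + omega_l)
    chi, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)    # comoving distance / (c/H0)
    if omega_k > 0:                                  # open universe: sinh correction
        chi = np.sinh(np.sqrt(omega_k) * chi) / np.sqrt(omega_k)
    d_l_mpc = (1 + z) * (C_KM_S / H0) * chi
    return 5.0 * np.log10(d_l_mpc * 1.0e6 / 10.0)

for z in (0.5, 1.0, 1.5):
    diff = distance_modulus(z, 0.3, 0.7) - distance_modulus(z, 0.05, 0.0)
    print(f"z = {z}: mu(flat LCDM) - mu(nearly empty, open) = {diff:+.2f} mag")
```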

  • Paul Valletta

    A new take on Matter’s: http://arxiv.org/abs/astro-ph/0601517

    The authors delve into things, and cite Sean’s paper.

  • Pingback: The future of the universe | Cosmic Variance()

  • cox

    If different colors reflect different light speeds, then why does Mars appear red when viewed from Earth and also appear red when viewed from Mars? What is actually the speed of light, or for that matter the speed of sight, considering that light breaks up into the spectrum when passed through a prism? Is the speed of sight a significant variable previously ignored? Is temperature a variable to the speed of light? Are our measurements of star distances highly exaggerated because of the unknown extreme cold in outer space?

  • Elliot

    cox,

    your questions contain some myths that need to be addressed before they can be answered.

    1) The speed of light in vacuum is constant (the same) for all colors; the frequency and wavelength change, but not the speed. (In a medium such as glass, different colors do travel at slightly different speeds, which is why a prism spreads light into a spectrum.)

    2) There is no physical parameter called “speed of sight”. It is just the speed of light plus the processing of the light by the eye and nervous system. Not a meaningful parameter to consider.

    3) Temperature does not have an effect on the speed of light in vacuum. So there is a high degree of confidence that distances to stars are not exaggerated by this.

    4) Mars is red because of the material it is composed of. Therefore it would look the same from Earth and from Mars.

    Hope that helps.

  • Pingback: light dimmer()

  • Pingback: Cosmology at Professor Cormac O’Raifeartaigh’s blog « Gauge theory mechanisms()

  • Pingback: The future of the universe | Cosmic Variance | Discover Magazine()
