Richard Feynman on Boltzmann Brains

By Sean Carroll | December 29, 2008 10:34 am

The Boltzmann Brain paradox is an argument against the idea that the universe around us, with its incredibly low-entropy early conditions and consequent arrow of time, is simply a statistical fluctuation within some eternal system that spends most of its time in thermal equilibrium. You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment. (With potentially profound consequences.)

The first invocation of an argument along these lines, as far as I know, came from Sir Arthur Eddington in 1931. But it’s a fairly straightforward argument, once you grant the assumptions (although there remain critics). So I’m sure that any number of people have thought along similar lines, without making a big deal about it.

One of those people, I just noticed, was Richard Feynman. At the end of his chapter on entropy in the Feynman Lectures on Physics, he ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws.

So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?

Feynman, following the same logic as Boltzmann, contemplates the possibility that we’re all just a statistical fluctuation.

One possible explanation is the following. Look again at our box of mixed white and black molecules. Now it is possible, if we wait long enough, by sheer, grossly improbable, but possible, accident, that the distribution of molecules gets to be mostly white on one side and mostly black on the other. After that, as time goes on and accidents continue, they get more mixed up again.

Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again. This kind of theory is not unsymmetrical, because we can ask what the separated gas looks like either a little in the future or a little in the past. In either case, we see a grey smear at the interface, because the molecules are mixing again. No matter which way we run time, the gas mixes. So this theory would say the irreversibility is just one of the accidents of life.

But, of course, it doesn’t really suffice as an explanation for the real universe in which we live, for the same reasons that Eddington gave — the Boltzmann Brain argument.

We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it.

After pointing out that we do, in fact, see order (low entropy) in new places all the time, he goes on to emphasize the cosmological origin of the Second Law and the arrow of time:

We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future.

And he closes by noting that our understanding of the early universe will have to improve before we can answer these questions.

This one-wayness is interrelated with the fact that the ratchet [a model irreversible system discussed earlier in the chapter] is part of the universe. It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe. It cannot be completely understood until the mystery of the beginnings of the history of the universe are reduced still further from speculation to scientific understanding.

We’re still working on that.

CATEGORIZED UNDER: Science, Time
  • Mike

    Very nice!

    I’d like to say that I think the Boltzmann brain argument is stronger than requiring that our universe is not a simple statistical fluctuation toward low entropy. IF we assume the dark energy is vacuum energy and IF we assume the universe is closed — or IF we assume the universe is open or flat but the statistics of its events can be deduced by studying a large-but-finite comoving volume — then I think we have a Boltzmann brain problem UNLESS our universe can decay, that is unless there is a multiverse. (Page showed a rapid decay rate solves the problem without resort to spacetime measure, meaning the “multiverse” could only be two states: our dS and an AdS. But that seems less probable than there being transitions to other dS vacua, with the proper spacetime measure being such that normal observers like us dominate over the Boltzmann brains.)

    Perhaps I’ve missed something, though. One thing I haven’t thought about in detail is the effect of upward jumps to classical slow-roll inflation — though I’m under the impression these can’t solve the problem. (Jumps to eternal inflation certainly can — assuming appropriate spacetime measure — but then we have a multiverse.)

    One thing that disturbs me about this line of reasoning is it seems that I can deduce very significant properties of the “universe” without really leaving my office (though I must know the dark energy is vacuum energy).

  • aleph

    Consider the development of the periodic table via stellar nucleosynthesis. This apparent decrease in entropy can only be accounted for by the much larger losses of energy due to thermonuclear fusion, all the way up to iron, followed by supernova explosion-powered synthesis of the heavy elements, past iron.

    If we then look at the use of these elements by living systems, we see far more complexity – carbon, sulfur, nitrogen, iron, etc. – all play fundamental roles in life, which persists by using energy (from solar radiation, mostly) to reduce local entropy, even though universal entropy continues to increase.

    This is a very different situation from a universe in which there are no local entropy decreases. Imagine if the global entropy situation was mirrored in all local entropy situations – the dying embers of an explosion, for example. Stars and living systems have the remarkable ability to decrease local entropy (after dG = dH – TdS) – for more fun with that, see:

    http://www.2ndlaw.com/gibbs.html

    What I’m wondering is this: does the universe have surroundings, and can it exchange heat or mass with those surroundings?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Mike– I think that it’s very plausible that the Boltzmann Brain argument has important consequences for eternally-accelerating cosmologies, but I think it’s important to distinguish that case from the original one considered by Boltzmann. In de Sitter, everything is infinite and you must make some assumptions about measures. The original Boltzmann case is finite — finite numbers of degrees of freedom, or finite dimensions in Hilbert space — and then the measure is perfectly unambiguous, so there is no room for argument. It’s important that everyone agree on this case if we’re to make any progress (and not everyone does).

    But basically I agree that the combination of positive vacuum energy + BB argument provides very good reason to believe in some sort of multiverse.

  • ScentOfViolets

    I’ve never really bought into the argument, for the simple reason that there is no quantitative content. That is, within broad limits, the argument seems to apply no matter how small the ordered region is. Make it the size of the solar system, what could reasonably be observed in the seventeenth century, and the argument applies. Make it galactic-sized, say, what could be reasonably inferred in the 19th century, and the argument applies.

    Iow, the argument always applies . . . until it doesn’t. If there is some observational technique that reveals that the entire observable universe is just a tiny chip of order in a much larger sea of chaos, say, 10^2000 times larger, what happens then?

  • George Musser

    It’s always helpful to see an idea expressed in a variety of ways. In fact, Feynman’s version makes me realize there’s one aspect of the Boltzmann Brain argument I don’t understand. We expect a thermal fluctuation to be the smallest possible fluctuation consistent with our observations — thus the brain that thinks it sees an ordered universe is more likely than an entire ordered universe. Yet a brain that thinks it sees an ordered universe in a sense has the same complexity as that universe: the subjective mind states are a model of a physical universe. When I ask myself whether I am a Boltzmann Brain, I am struck by the regularity that I perceive, and for such a perception of regularity to arise seems just as probable as the regularity itself.

    In other words, is it really true that the brain is overwhelmingly more probable than the entire universe? Sean, can you lead me out of the thicket I find myself in?

    George

  • CarlN

    It is much simpler to arrange a low-entropy beginning than a messy high-entropy beginning. We should not be surprised to find a low-entropy beginning.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    George, it’s just that you’re not calculating entropy correctly. “The same complexity” is not something that is well-defined, while “the change in entropy” is. The change in entropy required to form a brain with an image of the universe is much smaller (and therefore much more likely) than the change in entropy required to actually make the corresponding macrostate of the universe.

    Think of Feynman’s example of a box of two different kinds of gas, which randomly fluctuates into a state where one gas is all on one side and the other is all on the other. The change in complexity (which we might roughly interpret as the change in the number of bits required to specify the state) is completely independent of the size of the box and the number of molecules of gas inside, but the required change in entropy is certainly larger for larger boxes.
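
    To make that scaling concrete, here is a minimal numerical sketch, assuming an ideal 50/50 mixture of N molecules: un-mixing them costs an entropy drop of roughly N k_B ln 2, so the fluctuation probability scales like 2^(-N), while the description of the ordered macrostate stays a few bits no matter how large N is.

    ```python
    import math

    # Entropy cost of un-mixing an ideal 50/50 mixture of N molecules into
    # "all of one gas on the left, all of the other on the right":
    #   Delta S ~ N * k_B * ln 2
    # and the probability of such a fluctuation scales like 2**(-N).
    # The description of the ordered macrostate stays a few bits regardless of N.

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    for N in (100, 10**6, 10**23):
        delta_S = N * k_B * math.log(2)   # entropy drop required, in J/K
        log10_p = -N * math.log10(2)      # log10 of the fluctuation probability
        print(f"N = {N:.1e}: Delta S ~ {delta_S:.2e} J/K, P ~ 10^({log10_p:.3g})")
    ```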

  • http://www.meatmonkeywarfare.com Andrew B. Chason

    I want to understand it all, I really do; but I think this made my eyes bleed just trying to form images and concepts from these chaotic strings of words spewed forth into my head.

Are we a mistake, is that what it is getting at? A cosmic accident?

I thought I was smarter than all this, but… Let me know when the ‘For Dummies’ version of these hypotheticals comes out. I would love to be able to follow along.

  • ScentOfViolets

    But the argument that the change in entropy is much smaller for a smaller ensemble is making an assumption about independence of events that is not warranted. Given a single improbable event, other improbable events become much more likely if in fact they are not independent. As an analogy, consider an empty box partitioned into two halves surrounded by a gas, and which has a valve that may or may not open on a probabilistic basis. Now, we can make the probability of opening as low as we like, say on the order of all the gas molecules being on one side of the box. But when it does happen, gas will indeed enter one side of the box and will be, briefly, in a low-probability case. Now, in the traditional experiment, the probabilities are indeed independent, and it makes some sense to invoke a Boltzmann Brain type of argument. In the latter case, with the extra valve, it very obviously does not.

    Is there any way to distinguish the two cases? For the life of me, I don’t see how this can be done with the current state of the art. But the point is that, as larger and larger systems are considered, it becomes impossible to prove or disprove the existence of the latter setup.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Andrew, it’s not that we are a mistake — although that might be true. The problem is that the early universe looks very unnaturally ordered to us, and we’d like to understand why. One idea is that it was just a random fluctuation from a disordered collection of particles. But Feynman (and others) are pointing out that such an idea doesn’t really work once you examine it closely.

    So: to understand why the early universe was so ordered, we have to work harder. Nobody knows the final answer to that, although some of us have ideas.

  • http://www.pipelin.com/~lenornst/index.html Leonard Ornstein

    Sean, with respect to ‘order to disorder’, entropy and time’s arrow:

    Increase in entropy is characterized as typically being associated with increase in ‘disorder’ and a smoothing out and ‘reducing’ of temperature differences within an isolated system which is allowed to proceed to a state of equilibrium. Such increases usually occur spontaneously. ‘Paradoxically’, entropy increase often can be associated with large increases in local ordering, like the phase change of crystallization on cooling. This makes the ‘order to disorder’ characterization especially unsatisfying; not the most desirable way to describe increase in entropy.

    Born interpreted the square of amplitudes of the waves described by Schroedinger’s equation as probabilities. Shannon described information in terms of the logarithm of probabilities – and information has been ‘equated’ with entropy.

    In statistical mechanics, entropy is expressed as being proportional to the logarithm of the number of possible ‘arrangements’ of the ensemble of ‘particles’ constituting a system. Increase in entropy is associated with the statistical tendency for a closed system (of an ensemble of more than a ‘few particles’) to spontaneously transition from less probable to more probable states. This formulation seems to eliminate the ‘paradoxes’.

    In models of a “Cycling” steady-state universe or of oscillating universes, entropy (non-paradoxically) increases both during collapses ‘towards singularities’ as well as during expansion phases, and no zero entropy ‘origin’ is required.

    Doesn’t this suggest that agreeing to start with a primitive axiom having to do with ‘causality’, removing the apparent ‘reversibility’ of classical and quantum physics, might unambiguously set the direction of time’s arrow?

    (I’ve put this question to you previously – without any response.)

  • ScentOfViolets

Actually, Sean, they aren’t pointing out any such thing, and, insofar as I know, I don’t think anyone proposes that the (very) early universe was just a random collection of disordered particles.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Leonard, I don’t understand what you are saying, so I can’t sensibly comment. Using “disorder” as a gloss on “entropy” is by no means perfect, but it’s a convenient shorthand.

    Scent, all I can suggest is that you read the post carefully again.

  • Chris W.

What about the role of gravity—more precisely, gravitational collapse—in all this? The classical thermodynamic arguments took no account of it. The canonical case of “heterogeneous gas in a box” always strikes me as suspiciously unrepresentative of the actual universe. How does a system that has evolved, by collapsing, to a higher-entropy, lower-energy state (in the extreme case, a black hole) fluctuate away from it?

For that matter, classical statistical mechanics and condensed matter—considered as a low-energy, stable endpoint of a non-equilibrium process (which is fundamentally quantum mechanical)—seem to co-exist uneasily.

    More generally, the actual universe is not in equilibrium. One can respond, “right, it’s the result of a fluctuation, and is returning to equilibrium”. Actually, that’s an odd remark; saying that it is a fluctuation presupposes a state of statistical equilibrium within which the fluctuation occurs. The appearance and relaxation of the fluctuation isn’t a return to equilibrium, it’s merely associated with its inherently statistical character.

    Do you see what I’m struggling with here?

  • ScentOfViolets

    I have. I think you’re reading way too much into what is really a rather vapid – and anthropic – argument. And I repeat – I don’t know of anyone who seriously thinks that the very early universe was just ‘some collection of random particles’. And not because of some Boltzmann Brain type of argument, I might add. Saying this smacks of rhetoric more than a real argument.

  • http://theczardictates.blogspot.com CarlZ

    Either (a) I’m missing something or (b) a lot of very smart scientists are very confused about basic probability.

    The flaw in the Boltzmann brain paradox, as I understand it, is that the argument boils down to this: “The Random Fluctuations theory implies that there’s just a one in a gazillion chance that, out of all possible random fluctuations, one like ours would result because smaller fluctuations are much more likely. But here we are, and that’s incredibly unlikely — so the theory must be wrong, refuted by experiment.”

    But that doesn’t follow at all. The Boltzmann Brains paradox is an argument about *probabilities* of outcomes, and you can’t deduce anything from just one trial.

    I would understand the Boltzmann brain argument if we could run many experiments (or equivalently, observe many universes) and count the outcomes: if we got lots of Boltzmann Brain universes and very few like ours, we would lean towards believing our universe is indeed a lucky random fluctuation. If not, if instead we get many more like ours than predicted, we discard the random fluctuation theory *because it failed the experiment*. BUT… and it’s a really big BUT… we don’t have a lot of experiments. We have just one, i.e. the universe we observe. And in an experiment with probabilistically distributed outcomes, that doesn’t tell us anything. (By contrast, if we were talking about a test with a deterministic outcome, one failure to observe a Boltzmann universe would indeed be all the counterexample we need).

    Maybe cosmologists are so used to working with a sample size of one that they need to review some basic probability before they are let loose on the larger sample of the multiverse?

    Or am I misunderstanding something?

  • ScentOfViolets

    If you are, Carl, then so am I. It also looks like there is some confusion regarding conditional probabilities.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    CarlZ– yes, you are misunderstanding, I’m afraid. The argument (which is right up there in the post, honestly) is *not* that we are here, which is unlikely. It’s that, given that we are here (and given any feature you think we know about the universe), the statistical-fluctuation hypothesis makes a very strong prediction: namely, that every time we look somewhere in the universe we haven’t looked yet, we should see thermal equilibrium. And we don’t, so that hypothesis is falsified.

    Chris W.– these are perfectly good things to worry about. We don’t understand how to calculate entropy in the presence of gravity, because we don’t understand the space of microstates. But the good news is that arguments from statistical mechanics don’t depend sensitively on the specific dynamics of the theory under consideration. Just on basic principles like unitarity and time-independence of the Hamiltonian. Those might not ultimately hold in the real world, but they are all you need to make these arguments.

  • Chris W.

    Thanks, Sean. That also reminds one of what is at stake in the question of information loss in black holes.

  • Aloysius

    Sean…

    “It’s that, given that we are here (and given any feature you think we know about the universe), the statistical-fluctuation hypothesis makes a very strong prediction: namely, that every time we look somewhere in the universe we haven’t looked yet, we should see thermal equilibrium. And we don’t, so that hypothesis is falsified.”

    I’m not sure that really falsifies the hypothesis. The statistical hypothesis predicts that generically we should expect to see equilibrium, but that equilibrium won’t hold absolutely everywhere. Failing to see equilibrium everywhere doesn’t falsify the hypothesis. It’s certainly an issue that needs more consideration, but it doesn’t kill the statistical hypothesis stone dead. Maybe whatever mechanism it is that governs thermal fluctuations guarantees that regions of disequilibrium clump up?

    I was wondering, relatedly, how the odds stack up when you compare the probability of a Boltzmann Brain arising via random fluctuations–complete with all the time-dependent dynamical interconnections making up a whole lifetime’s worth of human mental processing–with that of not a whole specified universe like our own, but just some roughly-suitable mostly-undifferentiated Big Bang blob? That is, how much fluctuation do you really need to kick things off before other physical processes step in and start governing the time-evolution of your system in a non-thermal way? Could it be that suitably-large fluctuations generically lead to Big Bang states and that the laws of nature will then generically evolve these into interesting universes?

  • JimV

The first time that I saw Sean’s argument on Boltzmann’s Brains, it went something like this (as I recall): by the statistical fluctuation hypothesis, we should be Boltzmann’s Brains – but we’re not, QED. A lot of the counter-arguments given above occurred to me then. I follow Feynman’s argument much better, but it still seems to me there is a conditional probability issue, as SoV says. How do we know it is easier (more likely) to create a Boltzmann’s Brain or a single Solar System by random fluctuation than it is to create the conditions under which solar systems and brains can evolve?

  • http://Capitalistimperialistpig.blogspot.com capitalistimperialistpig

I don’t see the logic that says we have experimental evidence against the idea of the Boltzmann Brain – if my brain is a BB existing as a statistical fluctuation for a moment, then all my memories of experiments are just even more ephemeral traces in that BB. The real argument against a BB is philosophical – if we believe it, science is impossible. The BB is essentially solipsist, and is sterile for the same reason.

    It is curious, though, that the cosmic billiard balls started out so neatly racked. Is it possible to understand this in any deep sense? Only if you know something about the rack – or the racker.

    Now if we wanted to study this experimentally, we might build a very detailed model of the universe, start the billiard balls off neatly racked, and watch it evolve. Perhaps any intelligent beings – or even any BBs – that evolved might have a better insight.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    JimV– we know it is more likely just by the standard arguments of statistical mechanics. A state of the form “brain + thermal equilibrium” is much higher entropy than a state of the form “brain + planet + solar system etc.” Which means that there are many more microstates of the former kind than the latter kind. Which means there are many more trajectories in phase space that pass through states of the former kind than pass through states of the latter kind. Which means, if conventional statistical mechanics is to be believed, that states of the former kind are much more likely to arise via random fluctuations.
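
    As a back-of-the-envelope illustration of what “many more microstates” means, here is a toy sketch; the two entropy deficits below are invented placeholders rather than measured values, and the only input that matters is that the deficit for a brain-plus-equilibrium fluctuation is far smaller than for a whole ordered universe, since relative fluctuation probabilities go roughly as exp(-ΔS/k_B).

    ```python
    import math

    # Toy comparison of fluctuation likelihoods, in units where k_B = 1 so
    # entropies are dimensionless. The deficits below are illustrative
    # placeholders, NOT measured values; the point is only that
    # P ~ exp(-Delta S), so the ratio is exp(deficit_universe - deficit_brain).

    deficit_brain = 1e45      # hypothetical entropy cost of a brain-sized fluctuation
    deficit_universe = 1e90   # hypothetical entropy cost of a whole ordered universe

    log10_ratio = (deficit_universe - deficit_brain) / math.log(10)
    print(f"P(brain only) / P(whole universe) ~ 10^({log10_ratio:.3g})")
    # Any choice with deficit_universe >> deficit_brain gives the same
    # qualitative answer: the smaller fluctuation wins overwhelmingly.
    ```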

So one of the assumptions of the model — bounded phase space, eternal evolution, microscopic reversibility, time-independent Hamiltonian — must be false. It’s interesting to try to figure out which one.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

CIP, there is something to that. One could always argue that there is some probability to fluctuate into the state of a brain that has a complicated set of (completely false) memories consistent with being embedded in a large low-entropy universe. Of course there are many more likely things to fluctuate into, even if we restrict our attention to brains, but it is possible. Such a brain, of course, would have no reliable knowledge of the universe, so that scenario is cognitively unstable — even if it’s true, there would be no way of knowing it. And certainly nobody is going to behave as if it is true.

  • changcho

The “Lectures on Physics” are supposed to be for undergraduates, but I always use it as a great reference. Also, it really is a treasure trove, as that gem of Feynman’s arguments about the arrow of time indicates. All of these paragraphs are in the “Ratchet & Pawl” chapter, in Vol. 1 (Ratchet & Pawl? Weird title, but like almost all of Feynman’s stuff well worth reading).

  • Ja Muller

    Sean said

“So one of the assumptions of the model — bounded phase space, eternal evolution, microscopic reversibility, time-independent Hamiltonian — must be false. It’s interesting to try to figure out which one.”

Would simply falsifying one of those conditions be enough? I mean, let’s say that we add a small irreversible term to the microscopic Hamiltonian. (It seems like it has to be small, since we haven’t found it yet.) The problem is that a Boltzmann Brain universe is not merely an approximation of ours that isn’t quite exact; it is so spectacularly wrong that it seems odd that the discrepancy could arise from a term that is so small.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Right, there’s no reason to think that abandoning one of those assumptions would be sufficient, but it seems necessary. The trick is to abandon as little as possible while working toward a sensible theory that predicts the kind of universe we actually see; no mean feat.

  • Kaleberg

changcho: Ratchet & Pawl refers to two parts of a mechanism that enforce motion in a single direction. The ratchet has the directional teeth and the pawl catches each tooth as it passes so that it cannot go backwards.

    capitalistimperialistpig: I think XKCD addressed your approach, but without reaching a conclusion in http://xkcd.com/505/

    Also, being a computer guy I liked Scott Aaronson’s quantum computation oriented view of the problem in http://scottaaronson.com/blog/?p=368. He argues that the arrow of time has to do with space being reusable while time is not. This sounds circular, but he points out that if time were reusable we would have a much different universe, so perhaps the arrow of time is simply a conditional property of any multidimensional structure with one or more dimensions lacking reusability. That seems to push the problem up the tree for a bit, but it might offer us a hint.

  • Count Iblis

    In Tegmark’s mathematical multiverse the problem is much more severe, because as long as some simple mathematical model generates observers as Boltzmann brains, you are forced to address the question why we are not Boltzmann brains in the universe defined by that particular model.

    One could try to solve this problem by considering observers as universes in their own right.

On the set of all mathematical models one needs to specify a measure. There exists a natural measure on the set of all algorithms (the Solomonoff-Levin distribution). This distribution decays exponentially for large algorithmic complexities.

    Now, an observer considered as a universe has a huge algorithmic complexity unless the observer can be generated in a simple to specify universe. The information needed to specify an observer can then be provided by specifying the laws of physics, the initial conditions, and the location of the observer. The amount of information will then be less, but not if the observer only arises as a Boltzmann brain.
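
    To illustrate the kind of weighting being invoked (a rough sketch only; the bit counts below are hypothetical placeholders, not computed complexities): a Solomonoff-Levin-style prior suppresses each description by a factor of roughly 2^(-length in bits), so an observer that can be specified cheaply via “simple laws + initial conditions + location” receives vastly more weight than one that has to be specified directly as a Boltzmann brain.

    ```python
    import math

    # Rough sketch of a 2**(-description length) prior. The bit counts are
    # hypothetical placeholders; only their ordering matters for the argument.

    descriptions = {
        "simple laws + initial conditions + observer location": 10**4,   # assumed bits
        "Boltzmann brain specified directly, bit by bit":       10**12,  # assumed bits
    }

    for name, bits in descriptions.items():
        log10_weight = -bits * math.log10(2)   # log10 of the prior weight 2**(-bits)
        print(f"{name:55s} weight ~ 10^({log10_weight:.3g})")
    ```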

  • CarlN

    Sean, the “eternal evolution” alternative is certainly wrong.

  • Ben Button

    SOV said: “And I repeat – I don’t know of anyone who seriously thinks that the very early universe was just ’some collection of random particles’.”

    Then you haven’t met Andrei Linde….but anyway, how *do* “serious” people think about the very early universe, in your experience?

  • Me

I am a biologist and therefore not as “qualified” as physicists (who’ve thought about this a LOT more than I have) but it seems to me there is a fundamental flaw to most of the arguments here. The underlying assumptions, if I’ve got it right, are that (1) the universe is made of particles (let’s ignore the particle versus wave business for this discussion), and (2) let’s shake ‘em up and see how they should statistically sort out, thinking about entropy, etc. And so how do we get stuff as incredibly complex as what we observe today?

And the problem with this thinking, in my view, is that it makes the assumption that the particles don’t interact with one another to any extent except by random collision (largely inelastic) and maybe some weak gravitational stuff that’s not significant on a particle-to-particle basis. But we know that the universe of today is composed of all kinds of “stuff” that is differentially sticky. It’s called chemistry (and its underlying physics of course). So, once particles came into existence, and became different from one another (elements for example), then they would have differential attractions/repulsions and these would favor “stuff” segregating from other “stuff”. Voila, asymmetries that are less likely to succumb to entropic falling apart. For example, rocks. Ignoring erosive forces, you can’t tell me a rock is likely to spontaneously fall apart because entropy favors its randomization with the rest of the universe. Yes, physics says this is possible. But not probable. And it is probabilities we are focused on with these types of examples.

    I hope my thoughts here aren’t way off base as I know how it is to read someone’s nutty ideas on a subject about which they ought not be putting in their two cents. If this is offbase, read on.

    Me

  • Ben Button

    Speaking of Feynman on the arrow of time, I found in this interesting article:

    http://plato.stanford.edu/entries/time-thermo/

    the following statement:

    “But perhaps we were wrong in the first place to think of the Past Hypothesis as a contingent boundary condition. The question ‘why these special initial conditions?’ would be answered with ‘it’s physically impossible for them to be otherwise,’ which is always a conversation stopper. Indeed, Feynman (1965, 116) speaks this way when explaining the statistical version of the second law.”

    The reference is to “The Character of Physical Law”. Certainly it would be extremely satisfying if low initial entropy turned out to be a consequence of a demand that the laws of physics should be internally consistent…..

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Me– always happy to hear from biologists (or whomever). None of the arguments above really depend on the assumptions you are worried about; it’s just an easy short-hand way of speaking. When things like chemistry come into the game, all of our ideas about entropy and statistical mechanics work just as well as ever, but you have to be very careful to really keep track of everything in your system. In particular, when two atoms stick together, they enter a lower-energy state, and the only way they do that is by emitting one or more photons. The entropy of the system “molecule + photons” will be higher than that of the system “two separate atoms, no photons.” (At least, if the reaction is thermodynamically favored; more generally, there will be some equilibrium distribution of possibilities.)

    But the entropy would be even higher than that if the molecule collapsed into a very small black hole, then decayed into a number of photons. Basically, a truly high-entropy state wouldn’t look anything at all like the stuff we see around us, not by a long shot; so details about chemical reactions aren’t going to be very important to the discussion.

  • http://decartes-einstein.blogspot.com/ Phil Warnell

Roger Penrose has been working on this whole quandary from the perspective of the universe as a whole. At a public lecture I attended a few months back he argued for a cycling universe where at the end of each phase you have a state in which all matter has decayed entirely to energy (photons), leaving in essence only an energy potential, in which locality would essentially have no meaning, and as such this potential alone would lead into the following phase. He contended that there could be a relic of a past phase (or phases) hidden within CMB data. It would be interesting to learn whether recent analysis has strengthened or weakened his proposal, as it does seem to be a nifty way of getting around this low-entropy/high-energy initial state problem.

  • Fred

I still fail to see why there is a conceptual problem with having a low entropy early universe (by that I mean, relative to the higher entropy state we have now). Indeed you can take the point of view that it’s simply a boundary condition that’s imposed by hand to match observation, or that it’s simply the natural progression of the thermodynamic arrow of time. So what’s the fuss?

If the opposite were true (a high entropy initial state), then we wouldn’t be here b/c life would have had immense difficulty forming, and the laws of thermodynamics would be in jeopardy of being falsified. So I’m missing something.

I think the confusion arises b/c there seems to be a paradox when the operation t -> -t is performed too naively and not interpreted with due care. Feynman perfectly explains away this problem in his lectures.

  • Ben Button

“So what’s the fuss?”

    The fuss is that we have a feature of the universe that we don’t know how to explain. *Why* was the entropy so low? We don’t know. Nobody is claiming that there are conceptual problems with having low entropy; on the contrary, that is a fact accepted by all.

    The fact that we don’t know how to explain this aspect of the early universe means that there is something missing from our theories; probably something so important that neglecting it may mean that we are saying lots of wrong things.

  • Count Iblis

Fred, the problem is that your statement: “then we wouldn’t be here…” is not true, because you would be here as a Boltzmann brain. In fact, low-entropy initial conditions don’t solve the problem, as you would still be more likely to find yourself in a Boltzmann brain state long after the heat death of the universe than existing in the way you do now.

    See also this article:

    http://arxiv.org/abs/hep-th/0612137

  • http://www.americafree.tv Marshall Eubanks

“How does a system that has evolved, by collapsing, to a higher-entropy, lower-energy state (in the extreme case, a black hole) fluctuate away from it?”

    In the long run in an open universe Hawking radiation should evaporate black holes. We are talking very long times now, longer than the decay of matter itself.

  • http://www.gregegan.net/ Greg Egan

Maybe there will be an infinite number of Boltzmann brains after the heat death of the universe … or maybe there won’t, because the universe will decay in some fashion that precludes that heat death. Eventually we might know the answer to that question — by directly uncovering the details of the fundamental physics and cosmology that will determine these things. But our own failure to be Boltzmann brains says nothing whatsoever about the relative frequencies of different kinds of observers across the entire history of the universe.

    Specifically, the fact that we are not BBs is not a valid way to rule out cosmological models in which most observers are BBs — so long as those models do not entirely preclude non-BB observers like us.

    Nobody plucked us at random from a bag containing every conscious being that ever lived or ever will live. We cannot infer anything, even probabilistically, about the number of Boltzmann brains across the whole of spacetime from our own failure to be one. The tiny grain of truth from which that fallacy springs is this: if every conscious being that ever lived, or ever will live, uniformly adopted the strategy of assuming that most observers resembled themselves — in other words, if absolutely every observer has a policy of assuming that they are in the majority class — then that will lead to the greatest number of observers being correct. Whatever the majority actually is — Boltzmann brains or normal brains — the majority will have guessed correctly what the majority is.

    But the tautological results of that victory for the majority don’t provide us with any information about other observers anywhere, least of all other observers in the distant future. We exist, and we’re probably not Boltzmann brains. That’s it, that’s all the data we actually have. Various averages computed over the set of all observers contain more information … but we don’t have access to those averages.

    What’s more, we have no rational reason for doing things — or believing things — solely because they optimise those kinds of averages. This is not a game where we get some share of the pay-off for being a good team player. I don’t know what the correct label is for saying “Hey, if we assume it’s likely that we are the majority class, then we will be adopting a brilliant strategy that — if adopted by all observers — will lead to a high expectation value for the proportion of all observers who were correct” … but I don’t know why anyone would mistake such a strategy for any of the goals of science.

  • Nemo

I think one can also make the argument that the “arrow of time” must always point in one direction simply because distance and time are intimately linked to one another. Since there is an upper bound for the propagation of information (the speed of light), non-local events will always be ordered as occurring in the past based on relative distance.

    There is a great non-attributed quote that states

    “There is simply no means in nature, other than energy transfer, by which information may be acquired across a distance.”

    It follows that there is always a certain amount of energy tied up in the transfer of information, and as the universe expands, more energy will be tied up in simple transfers of information.

    Since it takes time for that energy to transfer, it also means that the information is effectively stored until it finally interacts at its destination.

Over time, more and more energy is simply tied up to facilitate the transfer of information (and is thus lost to the “environment”).

We begin to see that there are intimate linkages between time, distance, energy, information, memory and entropy.

  • CarlN

    It is the low entropy start that “allows” the universe to use time symmetric dynamics. One is doomed if one tries to explain it the other way around.

    The initial condition and the dynamics were created together in a self consistent way. One cannot assume that the dynamics could determine the initial condition.

  • http://theczardictates.blogspot.com CarlZ

    Sean — Excuse me if I’m being dense, but I don’t understand how that restatement of the argument makes any difference.

    Your version: “the statistical-fluctuation hypothesis makes a very strong prediction: namely, that every time we look somewhere in the universe we haven’t looked yet, we should see thermal equilibrium. And we don’t, so that hypothesis is falsified.”

    But really, isn’t what the theory is saying (CAPS for emphasis) more precisely this: “namely, that every time we look somewhere in the universe we haven’t looked yet, we should PROBABLY see thermal equilibrium. And EVERY TIME we don’t, that hypothesis BECOMES LESS AND LESS LIKELY.”?

    (Of course, even this makes a big assumption about conditional probability, i.e. that whatever fluctuation caused the parts of the universe we already looked in to be out of equilibrium also caused the other parts we haven’t looked in yet to also be out of equilibrium. But I’ll let that go for now.)

    If that’s not a more precise restatement of the argument, then I’m being dense and don’t get it at all. But if it *is* a more precise restatement, here’s the central point that I think proponents of this argument are missing:

    IT DOESN’T MATTER how preposterously unlikely the hypothesis becomes WHEN YOU ONLY RUN ONE TRIAL. And looking in lots of places in one universe still only counts as one trial, since this is fundamentally an argument about how the *whole universe* came to be so disordered.

    To put it another way: We know that this universe, as unlikely as it might be, did happen. I don’t know of any law of probability that allows one to work backwards from the outcome of ONE trial to inferences about the population that trial was drawn from… and it seems to my limited understanding, that’s exactly what this argument is trying to do.

    Consider this analogy: suppose you are told that a large sack contains white balls and black balls, but you don’t know how many of each. You pull one ball out, and it’s black. What can you say about the contents of the sack? A lot of people might say that it’s “unlikely” the sack contains only one black ball and hundreds of white ones, because then it would be unlikely that you would draw the one black ball. But that would be completely wrong: you just can’t reason backwards like that from one drawing. By contrast, if you drew lots of balls, you could state the probability of any given proportion of balls. But one ball alone tells you only one thing: the possibility that all the balls are white is eliminated.

In the same way, any observation about our own universe really only tells us one thing: the possibility that low entropy states never happen is eliminated. It doesn’t tell us anything about the relative likelihood of the states.

    Again, I apologize if I’m just not getting it… but I really want to understand this, and in particular understand why this point about arguing backwards from one universe to the population of possible universes is wrong.

  • http://tyrannogenius.blogspot.com Neil B

Well, I thought that since we “knew” that the universe began in a Bang (not “BB” to avoid confusion with Boltzmann Brains) we weren’t supposed to worry about potential fluctuations over incredible time scales. But in any case, the “initial conditions” of the universe are obviously relevant, and no one has any idea of what they a priori ought to have been (as I have argued, any particular “that’s just the way it is” violates the logical principle of sufficient reason, and leads many to modal realist type scenarios of every possible way to be actually existing.)

Another factor though: if reality were fundamentally deterministic, the details of the outcome forever would at least have to follow from those initial conditions. But they can’t. Note for example a free muon, “prepared” like any other. But we don’t know when it will decay. Despite some pretensions involving things like “decoherence,” that decay moment is inexplicable. You don’t believe there is a “clockwork” inside the muon, unless maybe you consider strings, nor are there relevant environmental influences, right? But if there were, that would mean we could in principle prepare a “five nanosecond muon” etc., and there would be contrasting structures to the decay patterns of various populations even if we couldn’t design specific hard outcomes. That is “real randomness” that pure math can’t even model the particular outcomes of. Hence we figure that whatever has some tiny chance of happening will happen often enough, given enough time or spatial extent. The really worrisome thing is, if the universe is “infinite” in extent then the time since the Bang is not the issue, but having all those places to try every conceivable outcome – which could include Boltzmann Brains. (BTW, I note that many of the readers and commenters/wannabe commenters of this wide-appeal (via “Discover” mag) blog are not professionals and cannot be expected to carefully fit right within perfect boundaries of on-topic propriety, coherence, and pithiness; just saying.)

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Greg Egan– That’s exactly the argument given by Hartle and Srednicki, and in ordinary circumstances it would be compelling, but this is one case where it doesn’t hold. The point is that we can conditionalize over absolutely everything we think we know about the current state of the universe — either, to be conservative, exactly over the condition of our individual brains and their purported memories, or, to be more liberal, over the complete macrostate of a universe with 100 billion galaxies etc. And then we can ask, given all of that, what are we likely to observe next, according to the Liouville measure on phase space compatible with that macrostate? The answer, as I say here and Feynman says above, is that we should see thermal equilibrium around every corner, and we don’t.

Note the crucial difference here — I am not assuming that we are picked randomly among conscious beings in the universe. All I am assuming is that we are picked randomly within the set of macrostates that are identical to our own (including my memories of what presents I received for Xmas, etc.). You might be tempted to argue that this is an unwarranted assumption, but I promise you it’s not. For one thing, unlike in the uncontrolled multiverse case, here we know exactly what the measure is, just from conventional stat mech (or quantum mechanics, if you like). Second and more importantly, without being able to make that assumption, we deny ourselves the ability to make any probabilistic predictions whatsoever in physics. If we can’t use the Liouville measure conditionalized on our macrostate, all of stat mech becomes completely useless. We are no longer allowed to say that ice cubes tend to melt in glasses of warm water, etc., because any such statement appeals to precisely that measure.

    CarlZ, I think the same reasoning should address your concerns. In particular, we do not know that “this universe actually happened.” What we know is that our current microstate is within some macrostate, defined either by just our brain or by the macrostate of the surrounding universe, depending on how much you want to grant (it really doesn’t matter). But, given the assumptions of the statistical-fluctuation hypothesis, it is overwhelmingly likely that our memories of the past and reconstructions of the previous history of the universe are completely false. We don’t simply have a universe and ask whether it would ever have occurred; we have some particular facts about the universe, and have to ask what the expectations are for other facts given that knowledge. In this hypothesis, those expectations are completely at odds with everything we see when we open our eyes.

And when we say “less and less likely,” we’re talking overwhelming numbers — like, the probability that all the air molecules in the room will spontaneously move to one side in the next second, but much smaller than that. There is no operational difference between that kind of unlikeliness and simply “ruled out.”

  • http://decartes-einstein.blogspot.com/ Phil Warnell

    “In the long run in an open universe Hawking radiation should evaporate black holes. We are talking very long times now, longer than the decay of matter itself.”

Yes, and as in all heat transfer, the rate is affected by the magnitude of the potential difference. That’s to say, as the normal matter not contained in black holes decayed to energy, coupled with expansion, the relative temperature differential would increase, thereby hastening the process. The catch point is that Hawking radiation is supposed to be dependent on the quantum consequence of spontaneously co-created matter and antimatter; so how would this compare with the classical thermodynamic model, which superficially doesn’t appear to be much different? Which is to ask, would the rate (or potential) of such spontaneous creation stay the same, or does it also hasten as the average temperature diminishes?

  • Ja Muller

To tell you the truth, I’m not even sure that there is a problem here. The Boltzmann brain problem seems equivalent to asking why our universe has an entropy that is much lower than what we think its maximum possible value is. The problem is that some of the high entropy states are not allowed for other reasons. Just because a particular reaction increases the entropy of the universe doesn’t mean it must happen. So a state where the universe is a huge homogeneous collection of photons is not allowed because of quantum mechanical conservation laws, GR, etc.

  • http://tyrannogenius.blogspot.com Neil B

    BTW CarlZ, you are wrong about the black ball pulled from the sack. If the sack has 1,000 balls, then the chance of the combined circumstance of having drawn a black ball, and all the others being white, is 1/1000 if I reckon the method correctly. Sure, “it could happen” but we can still come up with expectations of chance. Note however also, that if the universe is large and just about everything happens, some observers are looking at their circumstances and saying, “this is so absurd, it has such a tiny chance of happening that there must be something else behind it” etc!
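
    A minimal Bayesian sketch of that sack example (the 1000-ball sack and the uniform prior over compositions are assumptions for illustration): a single black draw reweights the candidate compositions in proportion to how many black balls they contain, which is the sense in which even one trial is informative.

    ```python
    from fractions import Fraction

    # Sack with n balls; put a uniform prior on the number of black balls b = 0..n,
    # then update on drawing ONE ball and finding it black:
    #   P(b | black) is proportional to P(black | b) * P(b) = (b / n) * const.

    n = 1000
    prior = [Fraction(1, n + 1)] * (n + 1)               # uniform over b = 0..n
    likelihood = [Fraction(b, n) for b in range(n + 1)]  # P(draw black | b)

    unnorm = [p * L for p, L in zip(prior, likelihood)]
    total = sum(unnorm)
    posterior = [u / total for u in unnorm]

    # Prior vs. posterior probability that the sack is "nearly all white" (b < 10):
    print("prior     P(b < 10) =", float(sum(prior[:10])))      # ~ 0.01
    print("posterior P(b < 10) =", float(sum(posterior[:10])))  # ~ 9e-05
    ```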

Maybe I wrote confusingly in my last comment: I mean that if there was a determinate (even if “just in principle”) process behind muon decay etc., we could see structure to (or learn to influence) the decay patterns (from preparation differences, environmental influences – which makes a hit against decoherence BTW IMHO). But we don’t, so the actual situation instead is “fundamental randomness,” not even the kind you can seem to model by taking e.g. digits of roots of some number – that still gives the same results each time, unlike “identical muons.”

  • John R Ramsden

capitalistimperialistpig wrote: “It is curious, though, that the cosmic billiard balls started out so neatly racked. Is it possible to understand this in any deep sense? Only if you know something about the rack – or the racker.”

    Absolutely – All this Boltzmann Brain nonsense is a futile distraction. Perhaps what we need is more insight into how systems of *maximum* entropy can somehow combine or mix to produce a low entropy result, rather in the manner that fuel vapour and air create an explosive mixture, the more so the more uniformly (and thus with high entropy?) they are mixed.

  • http://www.pipelin.com/~lenornst/index.html Leonard Ornstein

    Sean:

    Scott Aaronson’s ‘axiom’ of the unique non-reusability of time (in comparison to “reusability” of the other space coordinates) is the kind of axiom to which I was alluding.

If we begin with such an axiom, the Liouville measure as well as most of the foundations of physics will need some tweaking. Wouldn’t the reversibility of macro- and micro-physics then disappear, and would BBs and an ‘origin of time’ – or a time of near-zero entropy – then make any sense?

  • http://www.gregegan.net/ Greg Egan

    Sean, when I reason probabilistically I don’t just condition on the observations I’ve made of the macrostates of various systems, I include half a dozen supplementary working assumptions. One of those assumptions is that we genuinely do live just a few billion years after a very low entropy Big Bang, as opposed to living in some kind of random fluctuation which merely keeps imitating that situation.

    I’m not just saying that for the sake of argument. I assume this for roughly the same reason that I assume the laws of physics will continue to hold, and that I am not actually Descartes trapped inside his own mind being deceived by the devil (or any of the tedious modern variants of that scenario). Not only is it psychologically more pleasant to assume these things, the payoff is enormous: making these assumptions is what lets us do science, instead of getting mired down in intractable philosophical issues.

Obviously I’m going to make the same predictions as you about melting ice cubes and so on. But you seem to be suggesting that there’s something inconsistent, or intellectually flawed in some way, about failing to stick to the formula “condition on observed macrostates, and assume a random microstate using the Liouville measure” for everything from ice cubes to cosmology. That’s what I don’t accept. Probabilistic reasoning is a rigorous tool for making the best of uncertainty, but what “making the best” actually means is a matter of context. In gambling, in public health, etc., “making the best” of the indeterminacy we have to deal with isn’t hard to define; of course there are different political values that can be brought to bear in public health matters, but at least we can point to outcomes that make some reasonably well-defined group better off on average.

    So my complaint boils down to this: who is actually better off by reasoning in the way you suggest, and ruling out (or strongly weighting against) cosmological models that would allow Boltzmann brains in the future? Can you point to some tangible group who gets the benefit of this approach for dealing with uncertainty — in the way you could persuade medical researchers or quality control engineers to adopt their normal statistical methodologies?

    Maybe you feel this is just the intellectually correct thing to do, but apart from a certain quality of elegance and simplicity in assuming “we are in a random microstate” to the greatest degree possible, I honestly don’t see what compels this view.

    And the downside, from my point of view, is that people are publishing papers in which they claim to have put bounds on the cosmological constant, or to have deduced the “necessity” of various unobserved features of fundamental physics, based solely on the fact that we’ve so far found ourselves not to be Boltzmann brains. How far would you be happy to see this trend go? I’m not being snarky, I’m honestly curious — should someone share a Nobel prize for guesstimating the cosmological constant this way, if a group of observational astronomers later get a value in the same ballpark after a few decades of painstaking work?

  • Brian Mingus

There is something I do not understand regarding entropy. It seems that P(galactic cluster) is greatly increased if you consider the prior, that is P(galactic cluster|universe), and further that P(earth|galactic cluster) is increased, and so is P(brain|earth). Now while many cosmologists reason backwards from here, wondering what the nature of order is, I can’t help but reason forward as well, and I do not see thermal equilibrium. All current evidence suggests that the probabilistic expansion I have described continues, that is, P(something much smarter than a brain|brain) is increased, and that we will in fact see something much smarter than a brain. Suppose that thing develops here on earth. One theory holds that a much smarter intelligence would not bother here on earth, which was just a catalyst, but would colonize another area of the universe in order to capitalize on its resources. Now consider the reductio of this argument, where most matter is optimally used and much time has elapsed. To me this is not a cold, dead, dark probabilistic flatness, but quite the opposite. To me this sounds like an optimization process, much like evolution, or any number of algorithms from computational learning theory. The end result that I see is that things like brains are mysterious for a reason not mentioned by Boltzmann: they are mysterious because they have the ability to take the matter in the universe and use it to create something that can take the matter in the universe and so on.

    Please point out any flaws in this reasoning! In particular, how is it possible to consider the evolution of the universe as a decrease in order when, since the dawn of life, all signs point to the (apparent, to me, perhaps naively) opposite?

  • Ben Button

    Greg Egan — asking, “who benefits” is a strange way to do science. But if you want to do it that way: the benefit is that Sean’s approach forces us to try to *explain* the extremely non-generic character of the early universe, by deducing it from string theory or whatever. The drawback of Boltzmann’s explanation is that it doesn’t lead anywhere. OK, the entropy of the early universe was low because of a fluctuation, right, and that tells us…..what?

    Also, how would we rule out any theory if we allow appeals to extremely improbable events [such as *not* finding equilibrium when we look in new places]? “Yes, my theory predicts that there should be easily observed processes violating CPT invariance all over the place, but the thing is, every time that happens, just by my damned bad luck, a small black hole nucleates and swallows the particles, then it evaporates and nobody noticed, yes, I know that seems unlikely, but it’s not *impossible* right?…..”

    No, nobody should get the Nobel “simply” by following a well-known idea to its conclusion. They should get tenure though. On the other hand, someone who shows that string theory/loop quantum gravity/whatever leads unambiguously to low-entropy initial conditions, and that this entails a precise prediction of the amount of CMB non-gaussianity which is then confirmed to five sigma by observations, well, now you are talking…..

  • http://www.gregegan.net/ Greg Egan

    Brian, evolution on Earth has involved some localised decreases in entropy, but only at the same time as a vastly greater increase in entropy in other systems.

    Ted Bunn has a nice recent analysis of this issue on his blog:

    http://blog.richmond.edu/physicsbunn/2008/12/07/entropy-and-evolution/

    http://blog.richmond.edu/physicsbunn/2008/12/11/more-on-evolution-and-entropy/

    If the universe becomes filled with complex lifeforms, that won’t in itself prevent it reaching thermal equilibrium eventually. As stars die out, and other energy sources are exhausted, life will find it increasingly difficult to continue. Of course, it’s conceivable that we’re missing some scientific insight that civilisations in the distant future might exploit to circumvent this fate, but I believe the current best guess is that in the long run, everything dies.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Greg– To get the right answer within our observable universe, of course the right thing to do is not to only conditionalize on our macrostate, but also impose some sort of past hypothesis in the form of a low-entropy early condition. The question is, why?? You have a theory that says the overwhelming majority of macrostates of this form make a certain prediction for what comes next, and that prediction doesn’t come true, and instead of rejecting the theory you just place extra conditions on the space of allowed trajectories until you fit the data?

    Again: this absolutely is the right thing to do in the real world. But I would like to have a theory that predicts it should be that way, rather than just imposing it as an extra condition. It would be one thing to impose a condition on the evolution of the universe (which seems more physical, if still kind of ad hoc), but you’re imposing a condition on which moment in the universe’s history we find ourselves, which just seems completely arbitrary to me. I can’t stop you from doing that (because it does fit the data), but it seems to just be avoiding a really useful clue about fundamental physics that nature is trying to give us. (I think Ben is on the same track, but I don’t want to put words in his mouth.)

    Brian– I don’t think that thinking about brains and biology is really useful in this context, despite the temptation. It’s entropy that is important, and its behavior is straightforwardly predicted by conventional statistical mechanics. On the other hand, it’s true that “complexity” (whatever that means) has gone up since the Big Bang. Understanding that is a very important problem, but a little tangential to this one.

  • CarlN

    I’ll say it again but no more ;-)

    Don’t confuse dynamics with initial conditions. It is so much easier to set up low entropy initial conditions than high ones. So that is what we should expect, and that is what we see.

  • http://www.gregegan.net/ Greg Egan

    Ben Button, you’ve misunderstood me if you think I’m championing anything that requires less explanatory work from scientists than Sean’s approach. On the contrary, what I’m trying to do is rule out premature pseudo-explanations, which are predicated on the assumption that we can use typicality for extra leverage in entire domains where it has not been empirically tested.

    When someone disputes the usual assumptions of thermodynamics for, say, an Earth-bound system like a mixture of ice and water, it’s very easy to force them to see the error of their ways. If Alice thinks an ice cube is overwhelmingly likely to dissolve in ten times its mass of boiling water, while Bob just flips a coin and says “Heads the ice melts, tails the water freezes”, she can make him look foolish, or clean him out in a series of bets, very quickly. So the assumptions underlying her predictions aren’t just down to conceptual elegance: they’ve been tested, and found to work.

    All that I’m arguing is that we have no right to push this assumption of typicality far beyond the domain where it’s been established. It’s OK to keep it as a working hypothesis, but we need to be honest about that. It’s no good saying “use typicality everywhere, or you have no right to use it anywhere”. I don’t lose my Liouville-measure privileges for ice cubes simply by refraining from taking exactly the same approach to cosmology that I take to tabletop thermodynamics.

    I don’t know why you imagine any of this leads to less motivation to derive the low entropy initial conditions of the universe from first principles. Nothing I said championed the notion that we came from a meaningless low-entropy fluctuation; what I said was that I assumed that hypothesis was false, but I’m waiting for someone to do the real work required to turn my assumption into a well-founded belief, instead of waving the typicality wand and pretending that they already know, with near-certainty, that there never have been and never will be Boltzmann brains, anywhere, ever.

    Asking “who benefits” is not a strange way to do science, it’s asking for a precise statement of what we’re aiming to optimise with a particular strategy for dealing with a lack of certainty. It’s not about grubby materialistic concerns, it’s just asking for a tangible illustration of what someone is claiming, when they claim to have probabilistic knowledge. If you tell me “I’ve deduced X, with a high degree of certainty”, but you can’t actually demonstrate X directly, what does that mean? Maybe it just means you’re honestly satisfied with your own reasoning process … but why on Earth should anyone else believe you? With Alice vs Bob and the ice cubes, we can show why we should believe Alice, by showing her win bets against Bob. When someone says “This vaccine saves 100,000 lives for every life it costs in side effects”, we can look at the data and see if that’s really true. But when a cosmologist says “I’m 99.9999999% sure that there will never be Boltzmann brains, anywhere, ever” … if there is any actual content to that claim, it ought to be possible to point to a scenario where people who doubt it are shown to be spectacularly wrong.

    As I mentioned earlier, so far the only relevant scenario I can imagine is the tautological one: if every single being in the history of the universe assumes typicality, then whatever the actual majority is, they will have correctly guessed their own majority status. But that’s vacuous. Why would we play that game and mistake it for science?

    I don’t mind if people tentatively, and explicitly, assume typicality. What I do mind is people treating this assumption as a substitute for hard evidence. To be clear, I don’t think Sean has done that — I think he’s been very careful in stating his assumptions explicitly.

  • Fubaris

    Hypothetical situation. The universe kicks into reverse and starts going backwards. For some reason every particle in the universe instantaneously reverses course. And also space begins contracting instead of expanding. Everything in the universe hits a rubber wall and bounces back 180 degrees.

    So now instead of expanding, everything is on an exact “rewind” mode, and we’re headed back to the “Big Bang”.

    The laws of physics work the same in both directions…if you solve them forward in time, you can take your answers, reverse the equations and get your starting values, right?

    Okay, so everything has reversed direction. The actual reversal process is, of course, impossible. But after everything reverses, everything just plays out by the normal laws of physics. Only that one instant of reversal breaks the laws of physics.

    TIME is still moving forward in the same direction as before. We didn’t reverse time. We just reversed the direction of every particle.

    So, now photons and neutrinos no longer shoot away from the sun – instead they shoot towards the sun, where, when the photons, neutrinos and gamma rays hit helium atoms, the helium atoms split back into individual hydrogen atoms and absorb some energy in the process. Again, no physical laws are broken, and time is moving forward.

    Now, back on earth, everything is playing out in reverse as well. You breathe in carbon dioxide and absorb heat from your surroundings and use the heat to break the carbon dioxide into carbon and oxygen. You exhale the oxygen, and you turn the carbon into sugars, which you eventually return to your digestive tract, where they’re reconstituted into food, which you regurgitate onto your fork and place back onto your plate.

    Okay. So, still no physical laws broken. Entropy is decreasing, but that’s not impossible, no laws of physics are being broken.

    In this case, it must happen because we perfectly reversed the trajectory of every particle in the universe.

    NOW. Your brain is also working backwards. But exactly backwards from before. Every thought that you had yesterday, you will have again tomorrow, in reverse. You will unthink it.

    My question is, what would you experience in this case? What would it be like to live in this universe where time is still going forward, but where all particles are retracing their steps precisely?

    The laws of physics are still working exactly as before, but because all particle trajectories were perfectly reversed, everything is rolling back towards the big bang.

    In my opinion, we wouldn’t notice any difference. We would NOT experience the universe moving in reverse, we would still experience it moving forward exactly as we do now…we would still see the universe as expanding even though it was contracting, we would still see the sun giving off light and energy even though it was absorbing both. In other words, we would still see a universe with increasing entropy even though we actually would live in a universe with decreasing entropy.

    And why would that be the case? Because our mental states determine what is the past for us and what is the future. There is no “external arrow of time”. The arrow of time is internal. The past is the past because we remember it and because the neurons of our brains tell us that it has already happened to us. The future is the future because it’s unknown, and because the neurons of our brains tell us that it will happen to us soon.

    If there is an external arrow of time, it is irrelevant, because it doesn’t affect the way we perceive time. Our internal mental state at any given instant determines what is the future and what is the past for us.

    In fact, you could run the universe forwards and backwards as many times as you wanted like this. We would never notice anything. We would always perceive increasing entropy. For us, time would always move forward, never backwards.

    My point being, as always, that our experience of reality is always entirely dependent on our brain state. We can’t know ANYTHING about the universe that is not represented in the information of our brain state at any given instant.

    Forwards or backwards, it’s all just particles moving around, assuming various configurations, some of which give rise to consciousness.

  • http://www.gregegan.net/ Greg Egan

    Ben Button wrote:

    “Yes, my theory predicts that there should be easily observed processes violating CPT invariance all over the place, but the thing is, every time that happens, just by my damned bad luck, a small black hole nucleates and swallows the particles, then it evaporates and nobody noticed, yes, I know that seems unlikely, but it’s not *impossible* right?…..”

    So you’re trying to compare (A) a completely arbitrary made-up theory for which there is no evidence whatsoever, with (B) the widely accepted proposal that the universe started with a low entropy Big Bang and will eventually undergo heat death. (B) is a plausible consequence of all of physics and cosmology as it’s presently understood. That might well change, but so far it hasn’t.

    Assuming theory (B), by the way, it’s close to inevitable that some life will arise prior to the heat death. Saying “we are that kind of life, rather than the later kind” is not a bizarre implausibility; it’s close to certain that it’s true for someone. If I win one lottery, I should not be amazed: someone had to win, and I have a perfectly good theory of lotteries, consistent with everything else I know about the world, that allows for one winner and millions of losers. If I win the lottery twice, that’s when my theory is massively demoted, because it does not predict that a double winner is inevitable. Claiming that we are pre-heat-death life in a universe with a massive amount of post-heat-death life is just one lottery win, not two.
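
    For concreteness, the lottery arithmetic can be sketched in a few lines of Python. The ten-million-player pool below is a purely illustrative number, not anything from the discussion; the point is only the gap between “someone wins once” and “this particular player wins twice”.

```python
# Toy lottery: a fixed pool of players, exactly one winner per draw.
# The player count is an arbitrary illustrative figure.
players = 10_000_000
p_win = 1 / players

# A single win is unsurprising under the theory: by construction someone wins
# each draw, so "I won once" is consistent with a theory predicting one winner.
p_someone_wins = 1.0

# A double win is different: the theory gives any named player probability
# p_win**2 of winning two independent draws, and even the chance that *some*
# player wins both draws is only about 1/players.
p_named_player_wins_twice = p_win ** 2            # 1e-14
p_some_player_wins_twice = players * p_win ** 2   # ~1e-7

print(p_named_player_wins_twice, p_some_player_wins_twice)
```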

  • acudoc

    Arguments such as these are one of the reasons I am becoming more interested in engineering than physics!

  • Fred

    Again, maybe I’m missing something really obvious, but after perusing some of the literature (Page, Hartle, Susskind and others) and reading the comments I still fail to see where the problem is.

    There’s something a little fishy about arguing that imposing a low entropy initial condition right after the beginning of the universe is somehow ‘extra’ baggage to a theory, and that it should be explained or derived instead. Well, this extra baggage is, as Feynman says, the reason that we perceive the human notion of time (past/present/cause/effect) in the first place, and usually we take that as an axiom when building final theories, not a theorem. It’s conceivable that there’s a roundabout way of deriving it starting from different axioms, but I doubt it will teach us any new physics.

    I also fail to see why the anthropic principle is not clearly a valid argument here. If you had very high entropy at the beginning of the (assume for now finite and young) universe after inflation, stars would be disfavored from forming, and so on. The only way out of that is to argue like Boltzmann did, but then I think everyone agrees that leads to inconsistent observations. So again, where’s the problem?

  • JimV

    Sean, thanks very much for the reply. However, I am still having difficulty with how we get from entropy to probability. There have been many comments since and perhaps that has been answered, but from a brief skimming I don’t see one that I find compelling. The issue I still have (which has since been raised in the comments also) is that, while I agree higher entropy fluctuations are more probable than lower entropy fluctuations, most of the latter will not give rise to sentient observers capable of assessing the Boltzmann paradox, so it does not seem fair to me to count all of them when assessing relative probabilities of how we ended up in the sort of universe we see. (I am not even sure a Boltzmann Brain would count – can it build telescopes and make observations, or does it only have the illusion of doing so? Nor would a single solar system be likely to do so, based on current observations.)

    So it still feels somewhat like the Lottery-winner Fallacy (a standard creationist argument) to me. I must agree that getting here from a statistical fluctuation is an unlikely event, so it would be neater to find a more likely way, but still do not feel we have enough data to assess how big the lottery is or how long it has been running, so as to rule out the possibility of our winning it by chance (if this is winning) by some sort of “explanatory filter”. Your invocation of Occam’s Razor carries some weight as a way to guide our thinking and further research, but as you know it is just a methodological guide, not a rule of logic.

    (I of course am not expecting a further reply at this late point, but just wanted to clarify my position. )

  • Low Math, Meekly Interacting

    I’m just wondering if my reasoning is wrong-headed here. I’ll be pretty darn happy if it is, at any rate.

    Anyway, say the universe keeps expanding to the point of heat death, as currently expected, and no matter where you are, there’s nothing to see within any causal horizon but an ultra-cool bath of Hawking radiation. Forever. Except, occasionally, since we’ve got an infinite amount of time to wait for them, Boltzmann brains, even Boltzmann-brain-eating zombies, will spontaneously pop out of the vacuum. It’s absurdly unlikely, but, again, because we’ve got literally forever to consider, all events, no matter how improbable, eventually will happen. They must. Just as a universe like ours, with a low-entropy past, and a high-entropy future, as ridiculously stupendously meganormously unlikely as even THAT is, must happen in the end. And again, too. In fact, there must eventually be a universe that buds off of our universe that has the same history as our universe, in which my Doppelgänger is asking this very question on a blog just like this one. Actually, there must be an infinity of these.

    In sum: We’re talking about probabilities in a cosmic model that, if I’m not mistaken, allows for infinity, somewhere in the past or the future, or both. “Eternal” inflation, right? In which case, I’m not sure why Boltzmann-brainian scenarios of any kind that apply to the megaverse, using simply the rules of that model, don’t all happen if the proposed hypothesis does not blatantly violate the laws of that model. Unless ours is the first universe, or we can show somehow that there’s some manageably finite number of universes in our past from which ours has budded, is it not staggeringly likely that we ARE a statistical fluctuation? Mustn’t we be, since, with infinite time to wait for such things, our existence as the consequence of a statistical fluctuation approaches inevitability, no matter how small the probability?

    I just wonder if, with infinities lurking somewhere, questions of cosmological origins and so forth aren’t plagued with these “anything goes” consequences. How can they not be?

  • http://www.theory.caltech.edu/~preskill John Preskill

    There is a discussion of the same theme (seeking the foundations of statistical physics in the initial conditions of the universe) in Sec. 2.1 of The Feynman Lectures on Gravitation. This lecture was delivered in the fall of 1962. Remarkably, Feynman was teaching the sophomore year of The Feynman Lectures on Physics concurrently with this graduate-level course on gravitation. The freshman lecture on the foundations of thermodynamics, quoted by Sean, was just a few months earlier, in the spring of 1962. In the concluding paragraph of Sec. 2.1, Feynman says that “The question is how, in quantum mechanics, to describe the idea that the state of the universe in the past was something special.”

    I had several discussions with Feynman during the early 1980s about inflationary cosmology. He was interested in the topic, but always raised the same objection: How does one justify appealing to quantum field theory on a semi-classical background spacetime? His point was that one needs to explain why the initial state was special not just for the matter but also for the gravitational degrees of freedom.

    I suspect that recognizing that the justification of the second law is really a problem in quantum cosmology was an unusual insight in 1962.

    Incidentally, in a later lecture in the same course Feynman argues for Omega=1 based on naturalness: “the gravitational energy is of the same order as the kinetic energy of the expansion — this to me suggests that the average density must be very nearly the critical density everywhere.”

    Kip Thorne and I wrote a foreword for the Lectures on Gravitation in 1995, pointing out these and other insights from the lectures.

  • Low Math, Meekly Interacting

    And I mean to say, I think I get what the BB paradox is about, I think I kind of get what Feynman is saying, I just don’t understand why infinity doesn’t trash that logic, or any logic, for that matter. Unless infinite time frames can be eliminated, I just don’t see how one can forbid anything, even if probability argues very strongly against it. Isn’t this the nature of singularities, that the rules break down? Are we assuming we know the rules avoid this breakdown? Is the argument against the paradox truly so strong that it can suppress these consequences of infinity on its own? Because I don’t see myself how the incredibly huge probability that there is some better explanation for the low-entropy state we originated from “cancels out” the absurd improbability of the fluctuation hypothesis, given that infinity forbids nothing. We just happen to be an unlikely consequence in an infinite array of consequences, all of which presumably “exist”. So what, then?

  • http://www.gregegan.net/ Greg Egan

    The argument put by people like Page is that we should conclude the universe will decay on a time scale rapid enough to prevent BBs from ever forming. But if we tabulate the consequences of different strategies for reasoning about our observations, I don’t see any significant advantage for that approach.

    To keep things simple, assume two possible universes. Both universes start with a low-entropy Big Bang and contain exactly 1 genuine, pre-heat-death, non-fluctuated region that matches everything we’ve ever observed.

    — Universe A decays before any Boltzmann brains form.
    — Universe B does not decay until it has experienced N thermal fluctuations that match everything we’ve ever observed, but which reveal their nature as fluctuations on the next observation, along with M fluctuations that also match everything we’ve ever observed, but continue to look like non-equilibrium systems even on the next observation. Assume that N is very large, while M will, of course, be vastly smaller than N.

    Consider two possible strategies for dealing with our next observation of part of our surroundings, the system P:

    — Strategy 1 says if P is in thermal equilibrium, conclude you’re in Universe B, and if P is not in thermal equilibrium, conclude you’re in Universe A.
    — Strategy 2 says if P is in thermal equilibrium, conclude you’re in Universe B, and if P is not in thermal equilibrium, remain agnostic about which universe you’re in.

    If the universe is Universe A:

    Strategy 1 leads to 1 civilisation correctly concluding they’re in Universe A.
    Strategy 2 leads to 1 civilisation remaining agnostic.

    If the universe is Universe B:

    Strategy 1 leads to:
    — N civilisations correctly concluding they’re in Universe B;
    — M+1 civilisations falsely concluding they’re in Universe A.

    Strategy 2 leads to:
    — N civilisations correctly concluding they’re in Universe B;
    — M+1 civilisations remaining agnostic.

    So there are pros and cons for both, but certainly no spectacular advantage for Strategy 1.

    N being large is a red herring; it makes no difference to the relative advantages. The fact that 1/N is tiny, and that the pre-heat-death civilisation in Universe B is hugely atypical, is beside the point.
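
    To make the bookkeeping explicit, here is a minimal Python sketch that tallies the outcomes of the two strategies in each universe. The values of N and M are arbitrary placeholders, since (as noted above) the comparison doesn’t depend on them.

```python
# Tally of Strategy 1 vs Strategy 2 in Universes A and B, following the setup
# described above. N and M are arbitrary illustrative values.

def tally(universe, N, M):
    """Return {strategy: (correct, wrong, agnostic)} for the given universe."""
    if universe == "A":
        # One genuine pre-heat-death civilisation; its next observation
        # shows disequilibrium.
        observers = [("disequilibrium", "A")]
    else:
        # One genuine civilisation plus M persistent fluctuations (all of whom
        # see disequilibrium next), plus N short-lived fluctuations (who see
        # equilibrium next). All of them are really in Universe B.
        observers = [("disequilibrium", "B")] * (M + 1) + [("equilibrium", "B")] * N

    results = {}
    for strategy in (1, 2):
        correct = wrong = agnostic = 0
        for observation, truth in observers:
            if observation == "equilibrium":
                guess = "B"          # both strategies conclude Universe B
            elif strategy == 1:
                guess = "A"          # Strategy 1: conclude Universe A
            else:
                guess = None         # Strategy 2: stay agnostic
            if guess is None:
                agnostic += 1
            elif guess == truth:
                correct += 1
            else:
                wrong += 1
        results[strategy] = (correct, wrong, agnostic)
    return results

for universe in ("A", "B"):
    print(universe, tally(universe, N=1000, M=10))
# A {1: (1, 0, 0), 2: (0, 0, 1)}
# B {1: (1000, 11, 0), 2: (1000, 0, 11)}
```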

  • Ben Button

    I just took delivery of a truckload of 1 million quarters [American 25 cent coins]. They were just poured out of the truck any old way. To my surprise, it turned out that *every single one of them* was lying there with heads up.

    Question: should I have been surprised?

    I ordered another truckload to be delivered tomorrow, again consisting of one million quarters.

    Any predictions as to the number of heads that will turn up? If I get one million heads again, should I be surprised? Should I seek for an explanation, or just accept it as one of those things that are bound to happen now and then?
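
    As a rough sanity check on the numbers involved, here is a short Python calculation assuming fair, independently tossed coins (exactly the assumption the scenario is designed to strain):

```python
import math

n = 1_000_000                     # quarters in one truckload

# P(all heads) = 0.5**n underflows ordinary floats, so work in log10 instead.
log10_p_all_heads = n * math.log10(0.5)
print(f"log10 P(all heads) ~ {log10_p_all_heads:.0f}")    # about -301030

# For tomorrow's delivery of fair, independently tossed coins: the expected
# number of heads and the typical spread around it.
mean_heads = 0.5 * n
sigma_heads = math.sqrt(n * 0.25)
print(f"expected heads ~ {mean_heads:.0f} +/- {sigma_heads:.0f}")   # 500000 +/- 500
```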

  • Ben Button

    Prof Preskill:
    Thanks very much indeed for your comment. [By the way, I hope Prof Preskill won’t mind my pointing out that the full text of his and Thorne’s preface can be found easily by googling.]

    These may seem like strange questions, but just to be perfectly clear, would you agree with the following statements?

    [a] Feynman believed that all manifestations of the second law of thermodynamics are ultimately due to the special initial conditions at the beginning of time [ie, that all such manifestations are ultimately cosmological in nature].

    [b] Feynman believed that some as-yet-unknown physical law was responsible for the special initial conditions.

    Thanks!

  • http://www.gregegan.net/ Greg Egan

    Ben, I don’t know what you imagine your truck of magic quarters is analogous to, so you might want to engage directly with something I’ve actually said if you want to dispute it.

  • http://www.gregegan.net/ Greg Egan

    Sean wrote:

    The point is that we can conditionalize over absolutely everything we think we know about the current state of the universe — either, to be conservative, exactly over the condition of our individual brains and their purported memories, or, to be more liberal, over the complete macrostate of a universe with 100 billion galaxies etc. And then we can ask, given all of that, what are we likely to observe next, according to the Liouville measure on phase space compatible with that macrostate?

    But isn’t that implicitly assuming that there’s only one experimenter, in one particular (and hence arguably typical) microstate? In the Boltzmann brain scenario, though, the whole idea is that there are a vast number of observers who, at least initially, think they’re in broadly similar situations: it looks to them as if there was a low entropy Big Bang 14 billion years ago.

    So long as at least some of that vast number really are 14 billion years or so after the Big Bang, the proportion isn’t relevant: at the next observation, those who genuinely belong to the early universe will see a system far from equilibrium. Sure, the vast majority will see equilibrium instead, and they will correctly conclude that they arose from thermal fluctuations, but the existence of a non-zero minority who see disequilibrium remains a near certainty. (I say “a near certainty” only because it depends on what you think the probability is that life can evolve prior to heat death. I think most people would rate that as being quite high.)

    Conditioned on everything we think we know about the current state of the universe, the probability of a single typical observer in a universe containing Boltzmann brains seeing disequilibrium at their next observation is minuscule. But it’s that word “single” that smuggles in the selection fallacy that Hartle and Srednicki warned against.

  • Paul Stankus

    Hi Sean et al.,

    I appreciate this great discussion, but I’m hung up on an earlier point and I hope someone can help me out.

    When I read a phrase like “The problem is that the early universe looks very unnaturally ordered to us” (from a reply at 12:23 12/29 by Sean) I have trouble matching that to any actual picture I have of the early universe. In the standard hot big bang model the early Universe is filled with a nearly-relativistic, nearly-ideal gas of particles in very good thermal equilibrium; in what way can that state be called “unnaturally ordered”? To the contrary, if we pick any particular epoch when the Universe has some particular energy density, then it looks to me as though any fiducial volume is actually at _maximum_ entropy for its energy density.

    I don’t mean to dispute the general point made now by you, Penrose, Feynman and other luminaries, that the entropy of today’s Universe is increasing and so must have been lower in earlier times. But it’s also clear that the early thermal universe — which is actually most of the Universe’s history, if we count time logarithmically — was, given its global constraints, at maximum entropy. So this pushes the question of the “specialness” or “orderliness” of the early Universe back to why it had those constraints. As I count them there are basically two constraints which specify the early thermal universe in the standard model: (1) The chemical potentials for all massive species of particles are negligibly small, and (2) The metric is smooth, ie there are no black holes and not a lot of energy present in gravity waves.

    The second of these is mentioned explicitly by both Feynman and Penrose, for example, and involves many interesting questions: Why didn’t primordial black holes form at a very early epoch? What would a fully equilibrated gas of gravitons look like? etc. But I’m actually more interested in the first of these, which suggests a possibly deep connection between the arrow of time and baryogenesis. The entropy per co-moving volume is (nearly) constant during the Universe’s early thermal phase; it’s only after matter domination begins that entropy can increase through gravity and structure formation. But this Universe wouldn’t have gone over to matter domination without a finite chemical potential for a massive particle species — ie baryo-(and lepto-)genesis. So it seems to me that baryogenesis (or its equivalent in other Universes) is kind of a “gateway” for entropy growth. What do you think? does this point ever come up in arrow of time discussions?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Paul– I think you are doing exactly what, as John Preskill mentioned, Feynman warned us not to do — separating out the gravitational degrees of freedom from the matter degrees of freedom. That might be convenient for how we think about it, but there’s no law of nature which says “the entropy of the matter degrees of freedom in a closed system will tend to increase.” Nature doesn’t distinguish between matter and gravitation, as far as entropy is concerned.

    The “global constraint” of the universe being smooth (and also very dense) is not a constraint at all — it’s a fact about the configuration of the early universe, which needs to be explained. A constraint is something that stays constrained, not just a temporary feature of a configuration. It’s like saying “sure, the gas in that box is all on one side, but I’ll just call that a global constraint.” The gravitational state of the early universe had a tremendously low entropy, and that’s what we’re all trying to understand.

    I’m not sure about the chemical potential business. For one thing, in the real world, most of the matter is dark matter, so baryogenesis is not at all necessary for matter domination. For another, even if it weren’t for dark matter, baryons would eventually have dominated if it weren’t for the cosmological constant.

    Greg– Yes, I am certainly appealing to “typicality” in that extremely weak sense. Namely, that once I specify everything I know about the macrostate of the universe, I assume our state is typical according to the Liouville measure over microstates compatible with that macrostate. But again, that’s just what we do all the time in everyday stat mech, when we try to predict the future. (When we try to reconstruct the past it’s a different story, where the past hypothesis comes in.)

    I think Hartle and Srednicki were very correct to warn against granting ourselves too much leverage over what the universe is doing over very large scales by assuming that we are a “typical” kind of observer. However, I think it’s going too far to argue that we can’t assume our microstate is a typical element of our macrostate. There may be some justification for doing that, but I don’t think the H&S argument is enough. At face value, not allowing us to make that assumption prevents us from doing stat mech altogether, which I think is what Ben is getting at. Every time we observed what appeared to be a statistically unlikely event, we would be instructed to shrug and say “Well, it must have happened somewhere in the universe at some point in time,” rather than suspecting there were some dynamics behind it and using that as a clue to learn something new.

  • Aaron Bergman

    Nature doesn’t distinguish between matter and gravitation, as far as entropy is concerned.

    How can you say that when, as best I can tell, there is no global definition of entropy that includes gravitation?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    The definition of entropy is S = k log W, just like it’s engraved on Boltzmann’s tombstone. It’s certainly true that we don’t know what the structure of the space of microstates is, so we have trouble *calculating* the entropy for some kind of spacetime, but the great thing about stat mech is that it doesn’t care about the details of the state space or the Hamiltonian.
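
    A toy illustration of that formula (not a calculation Sean makes here), assuming the simplest possible coarse-graining of a box of gas molecules: the macrostate records only how many of the N molecules sit in the left half.

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(N, n_left):
    """S = k log W, where W counts the microstates in the macrostate
    'n_left of the N distinguishable molecules are in the left half'."""
    W = math.comb(N, n_left)
    return k_B * math.log(W)

N = 100
print(boltzmann_entropy(N, 0))    # all molecules on one side: W = 1, so S = 0
print(boltzmann_entropy(N, 50))   # evenly mixed: W is maximal, S is largest
```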

  • Aaron Bergman

    Stat mech works best when there’s a thermodynamic limit. And that certainly does care about the details of the state space and the Hamiltonian.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    I’m not sure what you mean. The thermodynamic limit is when there are a large number of states. That’s not really a problem for the whole universe.

    There are many things we don’t know about gravitational entropy, but we know more than enough to say “the entropy of the early universe was small.”

  • http://golem.ph.utexas.edu/~distler/blog/ Jacques Distler

    The thermodynamic limit is when there are a large number of states.

    I think that Aaron is referring to the existence of (the possibility of) thermodynamic equilibrium. That, to say the least, is problematic in gravitational systems.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    I suppose it is problematic, although I would argue that de Sitter is the correct equilibrium state in the presence of a positive cosmological constant. Regardless, non-equilibrium statistical mechanics certainly exists, and nothing stops me from calculating the entropy using Boltzmann’s formula. (And certainly entropy exists, and tends to increase in a closed system.)

  • http://golem.ph.utexas.edu/~distler/blog/ Jacques Distler

    I would argue that de Sitter is the correct equilibrium state in the presence of a positive cosmological constant.

    Classically, that is certainly false.

    Quantum mechanically … who knows? Nobody understands quantum gravity in de Sitter space.

    Regardless, non-equilibrium statistical mechanics certainly exists…

    I’m hardly an expert on the subject, but I don’t see how (what little I know of) the existing machinery of non-equilibrium stat mech helps you here.

  • http://www.gregegan.net/ Greg Egan

    Sean: OK — and thanks for your patience.

    I guess my problem was that I’d tacitly assumed that we’ve observed so much disequilibrium already that we’re reasonably entitled to frame the hypothesis “We live 14 billion years after a genuine low-entropy Big Bang, in a universe that might or might not have 10^100 Boltzmann brains in its far future”. Once you allow that hypothesis as reasonable, the further evidence we gather just keeps on supporting it.

    But I think I can see now why you don’t believe I’m entitled to start from that point — or at least, you’re saying I ought to give this hypothesis a stupendously low prior probability.

  • Ben Button

    Greg Egan: Yes, as Sean said, what I am driving at is that we have to proceed in this way if we want to do stat mech at all.

    Aaron Bergman: You might find it helpful, as I did, to look at this:

    http://arxiv.org/abs/physics/0402040

    The question of the definition of gravitational entropy, while interesting, is not very relevant really.

    To me, the really interesting point is this. In my reading of Feynman, he claims that there is something *absolutely fundamental* that we don’t understand about the early universe. Question: by not understanding this, and, worse, by not even recognising that there is a problem here, are we in grave danger of talking complete nonsense when we discuss the early universe? Can we really get away with ignoring a major new law of physics?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    I would argue that de Sitter is the correct equilibrium state in the presence of a positive cosmological constant.

    Classically, that is certainly false.

    Classically, the cosmic no-hair theorem implies it is pretty darn close to true. You could be unlucky enough to fall into a black hole, but if you manage to avoid that and wait around long enough, your local patch of universe will approach de Sitter.

    Quantum mechanically, the black holes will evaporate, so you don’t even have to worry about that. Of course de Sitter might then not be completely stable; in fact I hope it’s not. But it persists for a long time, at least.

    There are many things we don’t understand about quantum gravity, but that doesn’t seem like a good reason to completely ignore features of semiclassical gravity that seem pretty robust. I can’t imagine any new insight that would make “Nature doesn’t distinguish between matter and gravitation, as far as entropy is concerned” a false statement, but I could certainly be wrong. But “we don’t understand everything” isn’t enough to prevent me from trying to move forward.

  • http://tyrannogenius.blogspot.com Neil B

    If I could put forth something relating to the basic conceptualization of “is the flow of time relative”: I think it isn’t. Consider a world where events are “moving backwards” and something intervenes in the flow of those events. The results are not symmetrical with what we expect for our universe. Consider yourself an “outsider” who is not part of the reversing time flow. A bullet that (to the backwards universe) was fired “out of” a gun is now approaching that gun (in your reckoning) to go back into it. You push the bullet, or the gun, out of the way. Now what? It is imaginable, in a world of normal time flow, what really happens if I fire a gun and then push the bullet out of the way later, as commonly understood in time sequence. But if we allow the intervention to be conceived as happening in a world where time is running backwards, the result is absurd: the bullet now misses the gun and does what? Its re-reversed behavior would be absurd; it would have to spring out of e.g. a tree that was behind the shooter, etc. With this distinction between whether past or future is affected by an intervention in the chain of events, how can time-reversal be merely relative?

  • Paul Stankus

    Hi Sean —

    Thanks for your prompt reply, which I think does provide a clear answer: the early thermal Universe was disordered in all the matter/radiation degrees of freedom, but highly ordered in its gravitational degrees of freedom. I agree that why the latter is true is the interesting question (the word “constraint” to describe a smooth metric in my comment was perhaps unfortunate; all I really meant was that given a smooth metric the matter/radiation had reached equilibrium, not that a smooth metric was a constant condition). This does bring up the next natural question, though: what would thermal equilibrium with gravitational degrees of freedom included look like?

    Presumably the answer to this question is equivalent to identifying (a prelude to counting) the microstates of gravity, which you say is a hard problem and so one shouldn’t expect a simple answer. But let me give it a partial shot, with some intro-level GR, and it may then be instructive for you to point out where I’ve gone wrong.

    Perturbations from a flat Minkowski metric (or from a Robertson-Walker metric on scales much smaller than the Hubble radius) can, according to Penrose, be divided into two types: volume-changing and volume-preserving. The former are in general tied to the matter/energy distribution and so are not really independent degrees of freedom; the exception among volume-changing perturbations is black holes, which can be entirely vacuum (outside the singularity). The volume-preserving perturbations are basically gravity waves, which can exist in vacuum or in non-empty space. So the independent gravitational degrees of freedom that Feynman warns us not to forget can, crudely, be categorized as black holes and gravity waves.

    Looking at gravity waves first, is there any fundamental error in decomposing gravity waves into gravitons? ie a configuration of spin-2 particles? If that’s OK then I should be able to count this class of gravity’s microstates since I know how to count particle states, at least in thermal equilibrium. At any given temperature the energy and entropy densities in the early Universe are directly proportional to the number of particle types with mass less than the temperature, counting each combination of spin, color, flavor, etc. as an independent type. At temperatures above the QCD transition this number is on the order of 100 for the particles in the standard model. If we simply naively add a parallel, ideal gas of spin-2 gravitons then it raises the effective number of types by 5; and so if those gravitons are absent then their “missing gravitational entropy” in that early epoch is something like a 5% correction to what’s in matter/energy — not much to get excited about if you ask me, and certainly not the difference between the Universe being “very ordered” versus “very disordered”.

    This, then, leaves black holes as the main, significant expression of independent gravitational degrees of freedom in thermal equilibrium. And so, if there are no great errors in the foregoing then I would conclude that your general question “Why is the early Universe so highly ordered in the gravitational sector?” really boils down to the more specific question “Why wasn’t the early Universe, and why isn’t our present-day Universe, dominated by black holes?” Do you think that’s a fair conclusion? if not, then what have I missed?
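
    For what it’s worth, the “5% correction” arithmetic in the graviton paragraph above can be made explicit with the standard entropy density of a relativistic plasma, s = (2π²/45) g* T³ in natural units. The counts of roughly 100 matter degrees of freedom and 5 for gravitons are Paul’s rough figures, not precise Standard Model numbers.

```python
import math

def entropy_density(g_star, T=1.0):
    """Entropy density of a relativistic ideal gas in natural units
    (k_B = hbar = c = 1): s = (2 * pi**2 / 45) * g_star * T**3."""
    return (2 * math.pi ** 2 / 45) * g_star * T ** 3

g_matter = 100    # rough count of relativistic degrees of freedom above the QCD transition
g_graviton = 5    # the naive spin-2 count used in the comment above

s_without = entropy_density(g_matter)
s_with = entropy_density(g_matter + g_graviton)
print(f"missing graviton share ~ {(s_with - s_without) / s_with:.1%}")   # ~4.8%
```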

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Paul– I can only suggest that you have a look at hep-th/0410270 !

    (I think a lot of what you say is right, but reality is a bit more complicated, because the existence of gravity changes what counts as “equilibrium” just in the matter sector, due to the Jeans instability.)

  • http://golem.ph.utexas.edu/~distler/blog/ Jacques Distler

    You could be unlucky enough to fall into a black hole, but if you manage to avoid that and wait around long enough, your local patch of universe will approach de Sitter.

    Since I’m not planning on leaving our galaxy (much less our local cluster, which is also, I believe, a gravitationally bound system), my local patch of the universe will never approach de Sitter.

  • Fubaris

    Neil B: Hey! Somebody actually read my post! At least…I assume it’s my post that you’re referring to…

    Optimistically assuming that you were:

    So the particles all reverse trajectory, and then something causes the gun to move…which wasn’t in the forward version of events, but now is in the reverse version of events.

    When I first started to write this reply, I started working back through all the consequences. For example, prior to being struck by the bullet, the tree was going about its business absorbing oxygen and using that to break sugars into carbon dioxide and water, plus some photons which it shoots off towards the sun, which is absorbing them so as to break helium atoms into hydrogen atoms.

    BUT, then it occurred to me…this world is in a very fragile, very special state of decreasing entropy. As long as nothing disturbed the process, it would proceed along fine with entropy decreasing back towards a big-bang event (assuming deterministic physics).

    BUT, the force that moved the gun has now disturbed the extremely delicate state of affairs.

    By moving the gun you’ve opened the door for entropy to begin increasing again. And it will start to increase immediately. Their whole world will now eventually disintegrate into chaos.

    It was a very finely tuned situation to start with, and you untuned it with your gun push. Way to go Neil. You killed them all.

    As to how this would be perceived by the people who are living in reverse…who knows. As the chaos spreading out from the bullet-tree collision starts to affect their brains…probably brief mass confusion followed rapidly by oblivion.

    But my original post was really more about how we perceive time and reality than anything else. It may have been a tiny bit off topic. Oops.

    THOUGH, there’s no reason why that scenario couldn’t play out during the “entropy decreasing” part of a Boltzmann-style statistical fluctuation of entropy, which would remove my “trajectory reversal” gimmick and make the post more on-topic.

  • Fubaris

    “As to how this would be perceived by the people who are living in reverse…who knows. As the chaos spreading out from the bullet-tree collision starts to affect their brains…probably brief mass confusion followed rapidly by oblivion.”

    Actually it will be perceived by them as:

    Oblivion, followed by brief disorientation, followed by them assuming a full, coherent set of memories of a non-existent past, and then them proceeding on with their lives with no recollection that anything strange had ever happened, until they hit the trajectory reversal point (in the original post) or the beginning of the low entropy statistical fluctuation (in the revised version), at which point they return to oblivion.

    From oblivion, to oblivion. The inevitable lot of all mortals.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Jacques, our galaxy is certainly not a stable system. Even at the level of Newtonian galactic dynamics a la Binney & Tremaine, the galaxy will continually eject some stars, as others become more tightly bound and eventually fall into the black hole. From there, see previous flow chart.

    Suggesting an isolated collapsed object like a planet or a white dwarf is a better bet. But ultimately those only resist gravitational collapse through the miracle of quantum mechanics. And once you have quantum mechanics, there is some amplitude for tunneling to a black hole.

    de Sitter is your best bet, believe me.

  • Aaron Bergman

    I think that Aaron is referring to the existence of (the possibility of) thermodynamic equilibrium.

    At least when I took stat mech from Elliot Lieb way back when, the existence of the thermodynamic limit was the statement that, in the large various things limit, the usual thermodynamic quantities exist and obey the expected properties. The existence of this limit for interacting systems is very nontrivial and fails for gravitational systems. This is different, I believe, from the stability of matter which is, of course, another interesting nontrivial result (although I seem to recall that the tempering of the Coulomb force is an essential part of each proof.)

    As for non-equilibrium stat mech, last I checked at least, it is a mess including such fun things as local entropies that, when integrated, do not give the correct global entropy.

    Anyways, this is my yearly objection to talking about the entropy for the universe — everyone can go back to talking about it now, and I’ll be quiet.

  • http://tyrannogenius.blogspot.com Neil B

    Fubaris, I first imagined that thought experiment (intervention in a time-reversed world) literally decades ago while reading “One, two, three … Infinity” by George Gamow. It is a perplexing and challenging idea for anyone who wants to consider time flow a purely relative matter. An intervention changes “the continued past” of the time-reversed world and “the continued future” of a normally progressing world. Sure, even the intervention itself wouldn’t be modeled the same way in both worlds, but that isn’t the point. The point is, the intervention can be done in the TRW and the effects work backward to ruin the rational structure of the supposed past of that world. How can a TRW be vulnerable, even in principle, to such an action if it is not really different in principle from a normal world? This question is indeed relevant to the idea of “thermodynamic reversibility”, since the latter amounts to a statistical challenge to TRWs. IOW, they are said to have a tiny chance of maintaining the reverse flow, but if anything “went wrong” it all falls apart. However, we consider our universe “robust”, and interventions would only affect the “true future.” That makes common sense, but violates the supposed inherent physical equivalency of time-reversed processes.

    I don’t think the effects would be what you imagine. An intervention in a TRW does not apply to the events its dwellers consider “after” the intervention, but to what they presumptively had a right to consider their “past.” So it is hard to imagine how they experience it. Things would actually move normally after the intervention and there’d be no trace of it, it would be like the world suddenly changing in the past here, and we can’t tell. But before: the bullet missing its barrel and seeming to spring out of a tree, and then working backward we might ruin all rational history, it would be like a surreal dream that the inhabitants woke from with no way to know – so I can’t prove it didn’t happen to us! We just have that certain faith in the coherence of what we see, kind of like presuming I’m not a weird construct with simulated experiences, like a Boltzmann Brain?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Aaron, there are a lot of thermodynamic quantities that are hard to talk about far from equilibrium — temperature being an obvious example. But, I would argue (and I’m happy to hear other points of view, as the matters are by no means settled), entropy is not one of them. Given a coarse graining on the space of states, the entropy is k log W, just like Boltzmann says it is. And it tends to go up, for fairly obvious reasons, so long as it starts small. None of that story relies intimately on the properties of equilibrium.

    In particular circumstances, you may be much more ambitious, and talk about the specific ways that entropy and other thermodynamic variables evolve in specific systems, and in that case the issues you raise become crucial. But pointing out the tiny entropy of the early universe doesn’t require that much care.

  • Fubaris

    So the key thing is that Time didn’t reverse. The trajectories of the particles did, which would make it appear to an outside observer that time was moving backwards.

    But in actuality, it’s just that the particles are retracing their previous paths, and thus entropy is decreasing.

    But for things to work out, everything has to happen exactly the right way. Any deviation by any particle will cause a whole chain reaction of increasing entropy, because that particle won’t be in the correct place to interact with other particles which themselves then won’t be in the right place to interact with yet other particles, and so on.

    So pretty soon, the whole system goes off the rails and you have chaos…and increasing entropy.

    It wouldn’t just be a matter that the bullet isn’t in the right place, but everyone acts as though it were. That bullet not being in the right place will have a long chain of disruptive effects that will eventually cause the whole march towards lower entropy to unravel.

    But the main point of my original post was just to point out that (barring your outside interference) people in the reverse world would see increasing entropy even in a world with decreasing entropy, IF you take as a given that what we perceive is entirely a matter of our brain states.

  • Aaron Bergman

    What is your definition of a macrostate outside of thermodynamic equilibrium?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    An equivalence class of microstates under some coarse-graining.

    Again, in practice, it might be convenient to define your coarse-graining by reference to macroscopic observables that are not defined out of equilibrium. But that’s a matter of convenience, not a fundamental part of the definition. Coarse-grain any way you like.

  • http://tyrannogenius.blogspot.com Neil B

    Fubaris, you get the issue of what goes wrong in the “time” reversed world. I suppose you are also right about experiencing such a world, since the processes are relative to the beings there and they should feel the same as we do, experiencing their world as moving normally in time. But that equivalence is just one part of the fundamental conceptual problem: if you look at the reversibility of microscopic events, you supposedly get the idea that there is no “true forward direction of time” – it is relative and no more literally real than “true velocity”.

    But then what accounts for the thermodynamic process going the way it does? You can say “chance” etc, but why does the chance favor the “correct” flow of time and make our world robust under alterations, but leave the world in reversal at constant risk for drastic screwup if any little thing doesn’t fit together right? If there’s no real absolute time, how can the worlds even be different from each other? That is a deep philosophical issue and I wish the pros around here would deign to remark on it …

    One issue inadequately brought up in thermodynamic discussions, and made by Roger Penrose, is that wave functions aren’t time-symmetrical. They spread out from the “emission point”, and then the “collapse” is not like that. But some would say the WFs aren’t ultimately real (what is, then?) and claim that decoherence solves the problem of collapse. But “decollusion” doesn’t, because it involves a circular argument (inserting probabilities at the outset – the thing you’re trying to explain) as well as applying ensemble context to the single event. There is still no valid or non-surreal way to produce the one result while excluding the other (like at distantly separated detectors) without a conventional spread-out WF that goes “poof” when detection obtains at one of the possible hitting points.

  • David

    Maybe I’m stupid, but I don’t understand why entropy increases or decreases when the total content of the universe does not change. Why are ice cubes more ordered than water? The total energy and information contained in the ice doesn’t vanish; it’s still there. So the universe as a whole doesn’t lose any information, and I don’t see how one state of matter, or of the universe as a whole, is more or less structured. Is it only structured by our standards, or is there some physical meaning and process by which the system would register that it was ordered or disordered? From a particle’s perspective, does it matter if it is water or ice? What physical state of a system makes it ordered or not? I don’t know if I’m explaining it well. But the total energy of the universe is zero, matter being positive and gravity being negative, so why is there this conception of entropy from a matter/energy point of view? What does it mean to be ordered from the universe’s perspective?

  • http://tsm2.blogspot.com wolfgang

    >> But, I would argue (and I’m happy to hear other points of view, as the matters are by no means settled), entropy is not one of them.

    And I would agree with you.
    Temperature (mean kinetic energy) may not be well defined for a system out of equilibrium, but entropy certainly is.
    If entropy were only defined for a system in equilibrium, we would not need the 2nd law, because we would always have dS/dt = 0.

    But there is an issue with gravity, which you glossed over. We simply do not know how to calculate the entropy of spacetime. Penrose made the proposal to equate it with the Weyl curvature, but one can show that there are problems with that.
    Thus his proposal that low entropy in the early stages of the universe means vanishing Weyl curvature is on shaky grounds…

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    wolfgang– I don’t want to gloss over that at all; calculating the entropy of a system in which gravity is important is something we don’t know how to do, and that’s a problem. However, we have fairly reliable estimates in certain circumstances of interest: homogeneous plasma in an expanding universe, or a black hole, or de Sitter space. Armed just with that, we can tell a pretty reliable story of how entropy evolves over the observable history of the universe, even if the details remain to be filled in.
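
    To give a sense of the sizes involved in those estimates, here is a rough back-of-the-envelope sketch (my own numbers, not Sean’s) using the standard Bekenstein-Hawking formula S = k_B c^3 A / (4 G hbar) for a Schwarzschild black hole:

        # Bekenstein-Hawking entropy of a Schwarzschild black hole, in units
        # of k_B.  Horizon area A = 16*pi*(G*M/c**2)**2.
        import math

        G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
        M_sun = 1.989e30                              # kg

        def bh_entropy_in_kB(M):
            area = 16 * math.pi * (G * M / c**2) ** 2     # m^2
            return c**3 * area / (4 * G * hbar)           # S / k_B

        print(f"1 solar mass:      S ~ {bh_entropy_in_kB(M_sun):.1e} k_B")
        print(f"10^6 solar masses: S ~ {bh_entropy_in_kB(1e6 * M_sun):.1e} k_B")

    A single solar-mass black hole comes out around 10^77 in units of k_B, vastly more than the thermal entropy of an ordinary star, which is the kind of comparison that drives the story of how entropy grows over the history of the universe.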

    David– Water vs. ice cubes isn’t the best example; if you had a truly isolated ice cube, it would just stay an ice cube, since there would be no energy available to heat it up. A better example is a warm glass of water with an ice cube, vs. a cold glass of water into which the ice cube has melted. The total energy is the same in both cases, and we can imagine such a system isolated from the rest of the universe. But the configuration in which there is an ice cube floating in the warm glass of water clearly has lower entropy, because it’s out of equilibrium. We are making some macroscopic statement (at the location of the ice cube, the average kinetic energy of a water molecule is much less than at the location of the water) which dramatically restricts the number of microstates that could satisfy that condition. There are many more ways to arrange the molecules in a glass of water at constant temperature than to separate them into water + ice cube.

    The early universe is the same way; there are many ways to arrange the degrees of freedom in the universe that don’t look anything like “packed into an incredibly small region of space with no significant local gravitational fields.”
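
    Here is a toy version of that counting (my own sketch, using the textbook Einstein-solid multiplicity formula rather than actual water molecules): two subsystems of N oscillators each share q energy quanta, and the lopsided “ice cube plus warm water” macrostate, with nearly all the energy on one side, corresponds to far fewer microstates than the evenly shared “melted” one.

        # Multiplicity of two Einstein solids (N oscillators each) sharing q
        # quanta: Omega(N, qA) * Omega(N, q - qA), with Omega(N, q) = C(q+N-1, q).
        from math import comb, log

        def omega(N, q):
            return comb(q + N - 1, q)

        N, q = 300, 200

        lopsided = omega(N, 10) * omega(N, q - 10)      # almost all energy on one side
        shared   = omega(N, q // 2) * omega(N, q // 2)  # energy split evenly

        print("lopsided macrostate: ln W =", round(log(lopsided), 1))
        print("shared macrostate:   ln W =", round(log(shared), 1))

    Even for these tiny numbers the lopsided count is exponentially smaller; for a glass of water with something like 10^25 molecules, the suppression becomes astronomically larger.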

  • Aaron Bergman

    Temperature (mean kinetic energy) may not be well defined for a system out of equilibrium, but entropy certainly is.
    If entropy were only defined for a system in equilibrium, we would not need the 2nd law, because we would always have dS/dt = 0.

    That doesn’t follow at all. Entropy is just not a continuous function of time; it’s only defined for equilibrium states. This is the basis of the Lieb-Yngvason approach, for example.

  • http://tsm2.blogspot.com wolfgang

    > it’s only defined for equilibrium states.

    So you are saying that, e.g., entropy is not defined for a black hole, because a black hole (radiating into the universe) is not in equilibrium?

  • Low Math, Meekly Interacting

    Greg, thank you very much for your clear reply to my question, it was very helpful. I hadn’t considered the possibility that one could use the paradox, perhaps by itself, to argue the universe must inevitably decay relatively soon to suppress BB’s. Wonder how one tests that empirically. I must confess I’m still very uneasy with the argument that the utility of statistical mechanics in our horizon provides adequate justification to extrapolate that approach to the megaverse. The whole thing makes me wonder yet again whether or not one should avoid the megaverse like the plague, even if it’s the right answer, because of all the necessary, but potentially untestable, assumptions.

  • Sean Peters

    You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment. (With potentially profound consequences.)

    This argument seems incredibly weak to me. Think of it this way: I roll a ten-sided die a hundred times and write down the string of resulting digits. You examine the string of digits and exclaim: “Do you know how unlikely it is that you got that exact string? This can’t be a statistical fluctuation.” You can’t look at a situation after the fact and say it couldn’t have happened at random because it’s very unlikely.
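
    Just to make the die analogy concrete (a quick numerical sketch of my own, not part of the comment):

        # Every particular string of 100 d10 rolls has probability 10**-100,
        # yet some 100-digit string is guaranteed to occur on every run.
        import random

        rolls = "".join(str(random.randint(0, 9)) for _ in range(100))
        print("observed string:", rolls)
        print("a priori probability of that exact string:", 10.0 ** -100)
        print("probability that some 100-digit string occurs: 1.0")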

    The other argument I’ve heard here – that statistical fluctuations would mean that you couldn’t trust memories of the past – seems more powerful, but still strangely unsatisfying. Apparently the reason that can’t be true is that it would be unpleasant to believe that the universe faked all the evidence of our past. Which is a sentiment that I agree with, but it doesn’t say much about whether the universe actually DID fake all the evidence of our past. I guess I’m willing to write that possibility off on practical grounds – there would be no point in doing science at all if it were true – but it’s still sort of disturbing.

  • http://www.nuax.de Leonardo

    The low degree of entropy is not a consequence of the Boltzmann hypothesis (that we are in a low-entropy region), but a consequence of the nature of entropy itself. Entropy also generates order. Out of entropy new orders emerge. If I take a perfectly ordered and homogeneous quantity of milk and mix it with a perfectly non-entropic coffee, both elements lose order, their molecules become entropic and chaotic, and homogeneity is lost. But from there emerges a perfectly ordered cappuccino. Things are not as simplistic as “everything flows from order to entropy”. Chaos creates new orders; what looks entropic at one scale may have a higher-level, entity-like coherence. Leave a dead rat in the forest long enough, and entropy will act on it in such a way that its molecules are reabsorbed by nature and reordered.
    I hereby decree that this theory be called “the Lospennato hypothesis”.
    Just Kidding.
    But not really.
    Regards to all!

  • Fritz Lorenz Doerring

    Decay is obvious, visible, and experiential. The multiverse is not in our present sphere of experience. Might it ever become so? Wait: be patient!
    It may be a long time. Fritz

  • http://lumma.org/microwave Carl Lumma

    The Boltzmann brain paradox assumes a fixed (and rather naive) prior: that all regions of spacetime are independent. Bayesians would like us to consider the weighted probability over all priors. That’s the same as considering each observation to be the output of a Turing machine on a random input. From such a computational perspective, if the probability of one brain is B, the probability of two brains BB is B. Indeed, most of the time, Earth will sustain either many human brains or none. B can only obtain for ~ 100 years.

    The probability of many brains may still be lower than the probability of none, so a low-entropy initial state is still needed. The Big Bang seems to provide it in spades.

    -Carl

  • http://brainandcosmos.com/ Moninder Singh Modgil

    There is a simple vacuum solution of Einstein’s field equations, obtained from the Minkowski universe by the replacement

    t -> sin t

    Geodesics in this universe (which I refer to as the “Periodic Minkowski universe”) are such that particles oscillate about a fixed position. This leads to recurrence via “Loschmidt’s velocity reversion”, though without time reversal; i.e., the time parameter continues to increase monotonically. See my paper:

    http://arxiv.org/abs/0907.3165
    Title: Loschmidt’s paradox, entropy and the topology of spacetime

    If one does the mixing-of-two-gases experiment in periodic Minkowski, one will see the gases initially mixing and then deterministically separating. And this happens not as a probabilistic process, but due to the causal structure of the spacetime, as encoded in the line element of the periodic Minkowski universe. However, one can still consider probabilistic processes within this spacetime background – and vacuum fluctuations forming Boltzmann brains. The periodicity constraint would require that any Boltzmann brains created in such a universe would eventually be destroyed.
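
    For what it’s worth, here is a naive toy rendering of that mixing-then-separating behavior (my own simplification of the comment’s description, not taken from the paper): if a free worldline x = x0 + v*t becomes x = x0 + v*sin(t) after the substitution, then two initially separated gases mix and then deterministically re-separate at t = pi.

        # Toy model: particle positions x(t) = x0 + v*sin(t).  Gas A starts on
        # the left of x = 0, gas B on the right; count how many end up on the
        # "wrong" side as coordinate time advances.
        import math
        import random

        random.seed(0)
        N = 1000
        gas_a = [(random.uniform(-1.0, 0.0), random.gauss(0, 1)) for _ in range(N)]
        gas_b = [(random.uniform(0.0, 1.0), random.gauss(0, 1)) for _ in range(N)]

        def fraction_on_wrong_side(t):
            s = math.sin(t)
            wrong = sum(1 for x0, v in gas_a if x0 + v * s >= 0)
            wrong += sum(1 for x0, v in gas_b if x0 + v * s < 0)
            return wrong / (2 * N)

        for t in (0.0, math.pi / 2, math.pi):   # sin(pi) = 0: fully unmixed again
            print(f"t = {t:.2f}: fraction mixed = {fraction_on_wrong_side(t):.3f}")

    Whether this cartoon is faithful to the actual geodesics of the metric in the paper is something readers should check against the paper itself.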
