The Arrow of Time: Still a Puzzle

By Sean Carroll | August 24, 2009 9:15 am

A paper just appeared in Physical Review Letters with a provocative title: “A Quantum Solution to the Arrow-of-Time Dilemma,” by Lorenzo Maccone. Actually just “Quantum…”, not “A Quantum…”, because among the various idiosyncrasies of PRL is that paper titles do not begin with articles. Don’t ask me why.

But a solution to the arrow-of-time dilemma would certainly be nice, quantum or otherwise, so the paper has received a bit of attention (Focus, Ars Technica). Unfortunately, I don’t think this paper qualifies.

The arrow-of-time dilemma, you will recall, arises from the tension between the apparent reversibility of the fundamental laws of physics (putting aside collapse of the wave function for the moment) and the obvious irreversibility of the macroscopic world. The latter is manifested by the growth of entropy with time, as codified in the Second Law of Thermodynamics. So a solution to this dilemma would be an explanation of how reversible laws on small scales can give rise to irreversible behavior on large scales.

The answer isn’t actually that mysterious, it’s just unsatisfying. Namely, the early universe was in a state of extremely low entropy. If you accept that, everything else follows from the nineteenth-century work of Boltzmann and others. The problem then is, why should the universe be like that? Why should the state of the universe be so different at one end of time than at the other? Why isn’t the universe just in a high-entropy state almost all the time, as we would expect if its state were chosen randomly? Some of us have ideas, but the problem is certainly unsolved.

So you might like to do better, and that’s what Maccone tries to do in this paper. He forgets about cosmology, and tries to explain the arrow of time using nothing more than ordinary quantum mechanics, plus some ideas from information theory.

I don’t think that there’s anything wrong with the actual technical results in the paper — at a cursory glance, it looks fine to me. What I don’t agree with is the claim that it explains the arrow of time. Let’s just quote the abstract in full:

The arrow of time dilemma: the laws of physics are invariant for time inversion, whereas the familiar phenomena we see everyday are not (i.e. entropy increases). I show that, within a quantum mechanical framework, all phenomena which leave a trail of information behind (and hence can be studied by physics) are those where entropy necessarily increases or remains constant. All phenomena where the entropy decreases must not leave any information of their having happened. This situation is completely indistinguishable from their not having happened at all. In the light of this observation, the second law of thermodynamics is reduced to a mere tautology: physics cannot study those processes where entropy has decreased, even if they were commonplace.

So the claim is that entropy necessarily increases in “all phenomena which leave a trail of information behind” — i.e., any time something happens for which we can possibly have a memory of it happening. So if entropy decreases, we can have no recollection that it happened; therefore we always find that entropy seems to be increasing. Q.E.D.

But that doesn’t really address the problem. The fact that we “remember” the direction of time in which entropy is lower, if any such direction exists, is pretty well-established among people who think about these things, going all the way back to Boltzmann. (Chapter Nine.) But in the real world, we don’t simply see entropy increasing; we see it increase by a lot. The early universe has an entropy of 10^88 or less; the current universe has an entropy of 10^101 or more, for an increase of more than a factor of 10^13 — a giant number. And it increases in a consistent way throughout our observable universe. It’s not just that we have an arrow of time — it’s that we have an arrow of time that stretches coherently over an enormous region of space and time.

This paper has nothing to say about that. If you don’t have some explanation for why the early universe had a low entropy, you would expect it to have a high entropy. Then you would expect to see small fluctuations around that high-entropy state. And, indeed, if any complex observers were to arise in the course of one of those fluctuations, they would “remember” the direction of time with lower entropy. The problem is that small fluctuations are much more likely than large ones, so you predict with overwhelming confidence that those observers should find themselves in the smallest fluctuations possible, freak observers surrounded by an otherwise high-entropy state. They would be, to coin a pithy phrase, Boltzmann brains. Back to square one.
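
The relative weight of fluctuation sizes is easy to make quantitative in a toy model. The sketch below is mine, not anything in Maccone’s paper: it counts microstates for a system of N two-state “atoms,” with the entropy of a macrostate taken to be the log of its multiplicity, and shows how quickly macrostates far from equilibrium are suppressed.

```python
from math import comb, log

# Toy model: N two-state "atoms"; the macrostate with k excited atoms
# contains C(N, k) microstates, so its Boltzmann entropy is S = log C(N, k).
N = 1000

def entropy(k):
    return log(comb(N, k))

S_eq = entropy(N // 2)          # equilibrium: the most numerous macrostate

# A fluctuation's probability is proportional to its microstate count,
# i.e. to exp(S). Compare a small entropy dip with a large one:
small = entropy(N // 2 - 10)    # slightly below equilibrium
large = entropy(N // 2 - 100)   # far below equilibrium

print(S_eq - small)   # ~0.2: small fluctuations cost almost nothing
print(S_eq - large)   # ~20: this fluctuation is exp(20) ~ 5e8 times rarer
```

The entropy deficit sits in the exponent of the probability, which is why a minimal freak fluctuation (a lone Boltzmann brain) is overwhelmingly favored over a fluctuation into a whole low-entropy universe.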

Again, everything about Maccone’s paper seems right to me, except for the grand claims about the arrow of time. It looks like a perfectly reasonable and interesting result in quantum information theory. But if you assume a low-entropy initial condition for the universe, you don’t really need any such fancy results — everything follows the path set out by Boltzmann years ago. And if you don’t assume that, you don’t really explain our universe. So the dilemma lives on.

  • Michael F. Martin
  • Ja Muller

    Have people used anthropic thinking to claim that it explains why the early universe had a low entropy? It seems like if you say that it does, everything else falls into place: you can calculate the rate at which entropy increases within the standard model + GR and show that Boltzmann brains are not likely. I’m not a fan of the anthropic principle, for the same reasons as everybody else, but if you admit that it is a possibility, you might as well get as much mileage out of it as you can.

  • Sean

    Michael, I don’t know enough about what that article is describing to have an informed opinion. (E.g. I don’t know what “electromagnetic fields have a known handedness” means.) But on the larger issue, violation of time-reversal symmetry is well-understood, and an important part of the weak interactions in particle physics. That’s slightly different than a violation of reversibility, which is at the heart of the arrow of time.

  • NewEnglandBob

    Someone should write a book about it and publish it by January 7, 2010. :)

  • GTk


    Any comment on this recent posting by Clifford Johnson, in which he essentially accuses you of being the reason why he left Cosmic Variance? I’d appreciate your take on it because as things stand he’s sniping at you from the sidelines.

  • Sean

    I don’t have any comment, and it’s not very on-topic anyway.

  • confused asker of stupid questions

    Can I ask three sets of stupid questions?

    1) If the boundary condition we call the early universe was very low entropy, why didn’t it *stay* low entropy? That is, what causes entropy to increase? Why is entropy increasing practically uniformly across huge volumes of space and over long periods of time?

    2) If the boundary condition we call the late universe will have very high entropy, right down to isolated black holes and stable elementary particles (whee, fun, which is their timelike dimension then?), what reasons do we have for it to stay that way?

    3) What is the macrostate:microstate relationship of the dark sector, especially dark energy, at each boundary condition compared to each other and the present local universe? How do we combine dark entropy with the entropy of visible matter?

  • Sean

    These aren’t stupid questions, but the first two were exactly what Boltzmann worked out long ago. The point about low-entropy states is that there aren’t many of them; the entropy is simply the logarithm of the number of states that are macroscopically indistinguishable. So time evolution naturally takes low-entropy states to high-entropy ones, as there are many more high-entropy states to evolve to. Conversely, high-entropy states tend to evolve to (other) high-entropy states, which look macroscopically the same. All that is true for the universe as well as for an egg or a box of gas.
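
    Boltzmann’s counting argument here can be simulated directly. A minimal sketch (using an Ehrenfest urn model, my choice of illustration; nothing in the thread specifies it): balls hop at random between two boxes under a microscopic rule with no preferred direction, yet the coarse-grained entropy S = log W climbs, simply because mixed macrostates contain vastly more microstates.

```python
import random
from math import comb, log

random.seed(1)

# Ehrenfest urn model: N labelled balls in two boxes. Each step, one ball
# chosen uniformly at random hops to the other box; the microscopic rule
# has no preferred direction and no preferred box.
N = 100
in_left = [True] * N          # low-entropy start: every ball in the left box

def S(state):
    # Boltzmann entropy of the macrostate "k balls on the left":
    # S = log W, with W = C(N, k) macroscopically indistinguishable microstates.
    k = sum(state)
    return log(comb(N, k))

history = [S(in_left)]
for step in range(2000):
    i = random.randrange(N)
    in_left[i] = not in_left[i]
    history.append(S(in_left))

# Entropy climbs from 0 toward its maximum log C(100, 50) ~ 66.8, then
# just jitters around it: there is essentially nowhere lower-entropy to go.
print(history[0], history[-1])
```

    The same counting works toward the past as well, which is exactly the puzzle: absent a low-entropy boundary condition, the overwhelmingly likely history also has entropy increasing in the past direction.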

    We don’t know what dark energy is, but if it’s a cosmological constant, it doesn’t have any entropy at all. But spacetime itself does, and that entropy can be very large. Matter just goes along for the ride when its self-gravity is not important (as in the early universe), but things become complicated and ill-understood when gravity is important (as in the current universe), except when it takes over completely (as in black holes). Then we have a formula from Hawking that tells us the entropy exactly, and it’s a huge number.

  • Dan

    “So time evolution naturally takes low-entropy states to high-entropy ones, as there are many more high-entropy states to evolve to.”

    But if the laws of physics are time-invariant, then shouldn’t that happen in *both* temporal directions? Given a certain amount of entropy at time t, shouldn’t there be more entropy at both t+1 AND t-1? Even if entropy were increasing, why would it have to be *monotonically* increasing, unless there was actually a law acting at every point in time, as opposed to merely an initial condition?

  • Aaron Sheldon

    A couple of quick thoughts:

    First, there is an interesting connection between the idea of a quantum trail of information and Schrödinger’s Cat: the cat isn’t in a perfect superposition of alive and dead during the experiment, because after the box has been opened one can perform an autopsy if the cat is dead and determine (roughly) the time of death. That is, there is a quantum trail of information encoded in the final state of the cat.

    Second, it is not true that quantum physics is in general invariant under time inversion; one needs to be more precise and state that quantum mechanics is inversion-invariant only if the space-time manifold is unbounded (either closed or open is fine). On the other hand, if there is a boundary in the manifold, then quantum mechanics (specifically the shift operators, the group elements obtained by exponentiating differential operators) is not invariant with respect to time inversion. Sure, it is a bit of a legal loophole, but it’s an important one.

  • Sean

    Yes, and that’s part of the puzzle. Given a low-entropy condition at some time, and no other information, you would expect entropy to grow to both the past and the future of that moment. Of course this is not a worry if that moment is truly an initial condition, as there is no “past” of that moment.

  • Joseph Smidt

    I read the paper too and have a pretty elementary question but one that has been bugging me.

    The paper seems to say that if entropy were to decrease, our memories would be erased, so we couldn’t know about such processes. However, we can know about processes where entropy increases, hence we can “remember” the past. Fine.

    But why does entropy only increase in one direction? If there are “time-reversal” symmetries to nature, why is there a preferred direction to the increase in entropy? To me this paper makes the question go from “why does time only flow in one direction despite time reversal” to “why does entropy increase in only one direction despite time reversal.”

    What am I overlooking?

  • Jim

    If “phenomena where the entropy decreases must not leave any information of their having happened”, then entropy could be decreasing right now as much as we perceive it to be increasing, and we wouldn’t know it.

  • Joseph Smidt

    It appears others are posting similar comments that didn’t exist before I posted my comment, but your blog makes comments sit in the queue for several minutes so it looks like I am just repeating previously asked questions. Sorry for that. And by the way, great post! :)

  • Aaron Sheldon

    That entropy increases ‘in distribution’ is actually just one form of a Central Limit Theorem. And in fact entropy would increase ‘in distribution’ in any direction of translation as long as the number of quantum observables is sufficiently large.

    The interesting part is that the questions ‘why does time move in one direction?’ and ‘why is there a beginning to time?’ are equivalent, at least in quantum theory.

  • John R Ramsden

    @Sean [8] “The point about low-entropy states is that there aren’t many of them; the entropy is simply the logarithm of the number of states that are macroscopically indistinguishable.”

    Won’t that in a sense be the case for the universe in the distant future, assuming it doesn’t collapse and ends up comprising only extremal/eternal black holes? Of course one has to “zoom out”, so to speak, and somewhat fictitiously regard each black hole as a single “unit”, disregarding its intrinsic entropy in the usual sense; rather like the details of a fractal at some given scale fading into insignificance as the scale increases and a fresh pattern comes into view, which can be continued indefinitely.

    Also, you mentioned the collapse problem in passing. Do you know if anyone has seriously considered turning this problem around, and positing that wave functions are highly dissipative and constantly collapsing, but that narrow-width “spikes” constantly arise by some means and regenerate the wave function (unless it collapses in the conventional way)? In other words, assuming that the problem is not what causes collapse but what *prevents* it, and seeing where that might lead. It may sound a bit kookish, but no end of major scientific advances have been attained by trying, usually reluctantly, the very opposite of long-cherished assumptions.

  • Aaron Sheldon

    At the risk of sounding trite, the only thing that prevents wave function collapse is willful ignorance.

  • weichi

    It isn’t really true that he forgets about cosmology. He discusses it at the very end:

    “In a quantum cosmological setting, the above approach easily fits in the hypothesis that the quantum state of the whole Universe is a pure (i.e., zero entropy) state evolving unitarily (e.g., see [29,30]). One of the most puzzling aspects of our Universe is the fact that its initial state had entropy so much lower than we see today, making the initial state highly unlikely [4]. Joining the above hypothesis of a zero-entropy pure state of the universe with the second law considerations analyzed in this Letter, it is clear that such puzzle can be resolved. The universe may be in a zero-entropy state, even though it appears (to us, internal observers) to possess a higher entropy. However, it is clear that this approach does not require dealing with the quantum state of the whole Universe, but it applies also to arbitrary physical systems”

    I don’t really understand this “quantum state of the whole Universe” stuff, or how “internal observers” could think that the universe is not in a pure state, even though it really is. Leaving that aside, he seems to be trying to shift the question from “why did the early universe have low entropy” to “why is the universe in a pure state”. But even then, don’t you still have the question of why the particular pure state of the universe is such that the early universe appeared (to internal observers) to have low entropy? Shouldn’t all the usual arguments imply that such a pure state is very unlikely?

  • weichi

    Also, I’m no expert in this area, but my recollection is that the stat mech argument that entropy will always increase isn’t 100% rigorous. In particular, it relies on an ergodic hypothesis that all states are equally likely to be occupied. We have no evidence against this, but I don’t think it has been proven, and I don’t think anyone has ruled out the existence of hidden symmetries that would violate this assumption. So perhaps the interesting advance here is that he has an argument that says that even if there are some hidden symmetries that cause entropy to sometimes significantly decrease, no observations could ever demonstrate that this happens? In other words, the lack of observations of entropy-decreasing processes can’t be taken as evidence against the existence of such symmetries.

  • Sean

    John Ramsden– You’re not really allowed to “zoom out” and ignore the internal states of black holes, etc. You just have to sum all the states. The far-future universe will be *simple*, but it will have a high entropy.

    On quantum mechanics, what you’re asking about is very close to the GRW model.

    weichi– The rigorous argument is “of all the states corresponding to any low-entropy macrostate, the vast majority will evolve to higher-entropy states.” But *some* will evolve to lower-entropy states; just take the time-reversal of an ordinary configuration that has evolved from a low-entropy beginning. We normally assume that all states consistent with known constraints are equally likely, so entropy is very likely to go up, but that’s just an assumption. (But it’s a much weaker assumption than the ergodic hypothesis, which is a bit of a red herring.)

  • Haelfix

    We observe the universe now. We can rule out the Boltzmann-brain hypothesis by waiting roughly one second and noting that everywhere we look, galaxies don’t suddenly fly away or become crazy. Ergo we can conclude (even without actually looking back in time) that the universe in the past was in a lower-entropy state than it is now.

    Why? Because it couldn’t be any other way for that observation to be true and for us to be here (assuming the Boltzmann-brain hypothesis is false). The absolute magnitude of the past entropy (and our current entropy) is an interesting question, but I don’t see where the paradox is. This is a perfectly good use of the anthropic principle.

    That was Feynman’s argument, and I still fail to see why people are so enthralled by this observation.

  • Alan

    Hi Sean,

    I understood the point of the final paragraph to mean that if the Universe was in an initial pure state, and evolved unitarily, then the apparent growth in entropy is a product of which measurements can, in principle, be made.

    This would require that, for the entropy we measure to have grown as much as we observe, there be an extraordinarily large number of entropy-decreasing events happening concurrently, none of which are measurable. Do you believe this is possible or likely?

  • Lorenzo Maccone

    Dear Prof. Sean Carroll,

    I’m Lorenzo Maccone, the author of the paper you write about. First of all, let me thank you for your interest in my paper. I would like to reply to the arguments you give against my paper, if I may.

    Essentially, you have two objections: 1) “The fact that we remember the direction of time in which entropy is lower, if any such direction exists, is pretty well-established”. 2) In my paper I don’t give any explanation of why the universe is in an initial low-entropy state, so there’s no advance with respect to Boltzmann’s old idea of the universe’s initial state having appeared as a fluctuation.

    My reply:

    1) I agree that it is well established that we remember only the past, defined as the time direction in which entropy decreases. However, I haven’t seen any convincing EXPLANATION of why this is the case. In fact, if I restrict myself to Boltzmann’s physics (namely classical mechanics) I cannot think of any such explanation: nothing would prevent us from remembering a humongous fluctuation in which we see an egg, previously dropped on the floor, coming back together and flying back to our hand. In classical mechanics nothing prevents some of the correlations that the egg created with the kitchen’s degrees of freedom from remaining untouched. Classical information can be copied at will without affecting entropy… Then we could also remember events where entropy is decreasing!

    In quantum mechanics this is not true. If we want to restore the initial state of the egg (namely, remove all entanglement between egg and kitchen that was created when the egg broke), ALL correlations between egg and kitchen must be erased. NO information on the egg’s breaking can remain in the environment. Any information that remains will be due to entanglement between egg and kitchen, and will determine an increase in the entropy.

    In conclusion, I give an EXPLANATION (based on quantum mechanics) of why our memories refer only to the direction in time where entropy increases. I think that (although formally very straightforward) this is not so well established. I have studied the literature quite carefully, and I haven’t encountered this idea anywhere.

    2) It’s true that in my paper I don’t give an explanation of why the initial entropy of the universe is so low (I give one below). However, Boltzmann’s main problem (you point it out yourself in your blog) is not that the initial entropy of the universe is low, but rather that it’s so much lower than today’s, and there’s no convincing explanation of that (certainly the idea that it derived from a fluctuation is unsatisfactory, as we all know).

    Now, what I point out in my paper is that my explanation is fully consistent with the universe being ALWAYS (initially and NOW) in a zero-entropy pure state. The fact that it doesn’t appear so to us is just because we are subsystems of the universe, and (in quantum mechanics) a subsystem can have entropy higher than the whole system. Let’s not forget that entropy is a subjective quantity that depends on the observer’s information (I can definitely elaborate more on that, if you’re not convinced).

    One last thing: why should the universe be in a zero-entropy pure state? Since the universe by definition cannot be entangled with any other system, its state (to a hypothetical observer who has complete information on it) will be pure. Short of entanglement, there’s no FUNDAMENTAL reason why the state of a system cannot be pure (only the non-fundamental, subjective lack of knowledge of the observer). [A system in a non-pure state is in a mixed state, namely in a state |psi_i> with probability p_i < 1: this comes about either because the system is entangled with another, or because the observer is missing some information.]
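
    Maccone’s claim that a subsystem can carry more entropy than the pure whole is standard quantum mechanics, and easy to check numerically. A sketch (my illustration, using a two-qubit Bell state rather than anything specific from the paper):

```python
import numpy as np

# A two-qubit "universe" in the pure Bell state (|00> + |11>) / sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)              # global density matrix: pure

def von_neumann_entropy(r):
    # S = -Tr(rho log2 rho), summing over the nonzero eigenvalues.
    ev = np.linalg.eigvalsh(r)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# The whole "universe" has exactly zero entropy...
S_universe = von_neumann_entropy(rho)       # 0.0

# ...but an internal observer who sees only qubit A describes it by the
# reduced density matrix, obtained by tracing out qubit B:
rho4 = rho.reshape(2, 2, 2, 2)              # indices: (A, B, A', B')
rho_A = np.einsum('ijkj->ik', rho4)         # partial trace over B
S_subsystem = von_neumann_entropy(rho_A)    # 1.0 bit: maximally mixed

print(S_universe, S_subsystem)
```

    Whether this von Neumann entropy is the one relevant to the thermodynamic arrow is, of course, exactly the point Sean disputes in the exchange that follows.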

  • Ahcuah

    Possible dumb question: OK, suppose the universe had started in a big bang with high entropy. How would it have looked different than it did? After all, it did start with a pretty random “gas” of very high energy particles.

  • Sean

    Lorenzo originally wrote to me in email, and I responded — here is his response to my response.


    Sean Carroll wrote:

    Hi Lorenzo– Thanks for writing. I should first say that it would be much better to comment at the blog, where everyone can learn something (and people chiming in might even teach us something)!

    Hi Sean, thank you for your quick answer. I’m sorry if I wrote to you privately: it didn’t occur to me to write you on the blog. I have no problem in making this debate public, so I’ll be posting my previous answer on your blog. If you post your answer, we can certainly continue discussing publicly.

    Regarding your email, I’ll try to clarify my position below…

    On your first point, I don’t have too much disagreement. As I said in the post, everyone agrees that we remember the direction of lower entropy. But actually proving it is harder, and will necessarily be context-dependent. As far as I know your paper does this for quantum mechanics, which I haven’t seen before.

    Ok, this was the main message of my paper. I’m glad you have no problem with it!!!! I hope you don’t really think it’s a trivial result.

    Regarding the rest, you write:

    But the second point is the important one, and I still don’t agree, and in fact your comments make me more confused. The statement “the universe is in a zero-entropy pure state” could plausibly be true for the von Neumann entropy (or for the analogous Gibbs entropy in the classical context), but isn’t especially relevant for the question of the Boltzmann entropy (S = k log W) or thermodynamic entropy, and it’s those that are responsible for the arrow of time.

    There is a lot of literature showing that von Neumann entropy and thermodynamic entropy are the same for quantum systems. Think of all the quantum Maxwell demon literature, or of the Szilard engines: it is clearly shown that one bit of thermodynamic entropy can be exchanged for one bit of von Neumann entropy and vice versa. Unless you restrict to classical systems, I don’t see any difference between von Neumann and thermodynamic entropy: they are indeed equivalent.

    In a nutshell, even if the universe is in a pure state, we don’t know what state it is, and it’s a state that is macroscopically indistinguishable from a very large number of states. In that sense, the entropy is high, and was lower in the past.

    I agree with this, but you have to distinguish between the different points of view. From OUR subjective point of view entropy is indeed higher now than in the past (this is because we are subsystems of the universe). From a SUPEROBSERVER point of view (someone who can keep track of the unitary quantum evolution of the whole universe), the entropy is CONSTANT (since unitary evolution preserves the entropy). In my last mail I gave an argument to say that it was initially zero and this means it is still zero.

    I’m not sure I follow you when you say that the state of the universe is pure but we don’t know what it is: this means that it is mixed (namely we are assigning a certain probability to each pure state). I agree that then the entropy is high (the entropy of a mixed state is always different from zero). However, that is just because of OUR ignorance (call it coarse-graining, if you prefer), whereas the superobserver would have no problem saying the state is pure.

    Please remember that since entropy is a subjective quantity, one must always specify WHO is the subject. (More on this below.)

    There are other ways of defining entropy, but the puzzle for cosmology is that the universe began in a low-entropy macrostate — one that was indistinguishable from a very tiny number of other microstates. That notion of entropy is not subjective, and it doesn’t depend on an observer’s information, it only depends on a coarse-graining. That’s at the heart of the arrow-of-time problem, and I don’t see how your paper addresses that issue.

    Ok: you say entropy is objective, I say it’s subjective. This is a fundamental difference between our views, and I’ll try to convince you of mine.

    Think of the following (classical) example (it can be easily extended to quantum mechanics). Consider two boxes of gas where the microscopic degrees of freedom are perfectly correlated, namely each gas molecule in one box is in the same position and moves in exactly the same way (same direction and same speed at each time) as a gas molecule in the other box. A person who doesn’t know this just sees two boxes at the same temperature and cannot extract ANY work from them: to him, they are at thermodynamic equilibrium. Instead, a person aware of the correlation can easily devise a system of pistons connected with pulleys that extracts work from the two boxes! (Just add two pistons that lower a weight when they move in opposite directions and lift a weight when they move in the same direction.)

    Note, however, that the subjectivity of entropy is purely academic: in ANY practical circumstance all observers will basically have the same information on different systems (since the information involved in macroscopic systems is immense), so that IN PRACTICE thermodynamic entropy is basically an objective quantity, as any engineer would swear! However, when you look more carefully into it, you see that IN THEORY the thermodynamic entropy is indeed subjective, and the above example clearly illustrates this subjectivity.

    Maybe when you say that entropy is objective you mean “for all practical purposes” (FAPP). Then I totally agree with you! Its subjectivity is something that is basically impossible to take advantage of in ANY practical situation!

    I hope this answers your concerns. Otherwise, I’ll be glad to continue this debate either via email or on your blog. Thank you again for your interest!


  • Sean

    Thanks for joining in, Lorenzo. I don’t want to give a point-by-point detailed response, as it will get quickly out of hand. But I think the main sticking point is this question of whether entropy is objective or subjective.

    I didn’t say that entropy was objective rather than subjective; I said that there are different definitions of entropy. One way of defining entropy is to focus on our subjective ignorance of the precise microstate of the system; that’s what Gibbs did, and the von Neumann entropy in quantum mechanics basically does the same thing. That’s fine for certain purposes, but not for others. In particular, for a closed system that entropy never changes! So you wouldn’t believe in the Second Law if that’s all you had to go on.

    Alternatively, you can define entropy without any mention of our subjective information, following Boltzmann. You do have to do something, namely you have to choose a coarse-graining on the space of states, typically in a way that conforms to your ability to do macroscopic observations. But once you choose a coarse-graining, the entropy is completely objective, and it’s given by the formula on Boltzmann’s tombstone, S = k log W. There’s nothing in that formula that makes reference to our knowledge of the system. And that entropy can happily increase, in accordance with the Second Law.

    My point is that it’s this second notion of entropy that is relevant for the arrow of time in the real world. If you like, you can completely forget about entropy, information, our knowledge of the pure state of the universe, etc., and just say this: the early universe was in a state that is extremely finely tuned, one of only a very small number of states with those macroscopic characteristics. The shortcut way of saying that is “the early universe had a low entropy,” but you don’t need to use those words.

    And that fact about the fine-tuning of the early universe is (1) perfectly objective, given a sensible macroscopic coarse-graining; (2) all you need to account for the arrow of time; (3) unexplained by the physics we know and love, including what’s in your paper.

    So, again, I think your results are fine, but fall short of explaining the arrow of time.

  • Sean

    Ahcuah– The particles look random, but the gravitational field was smooth to an extraordinarily finely-tuned degree. A higher-entropy state would look something like a time-reversed Big Crunch — full of singularities and wild inhomogeneities.

  • Joseph Smidt

    Okay, I have been thinking of past and future as going from “negative infinity” to “positive infinity” on the parametrization of some world line.

    However, is it right to say that the limit of our past, as perceived by an observer, is really just the lowest-entropy state of the universe?

    In other words, if I marched back along my worldline (assuming it extended to negative infinity), would it appear as if I were moving into my past *until I reached the lowest-entropy state of the universe*, after which, continuing along my worldline toward negative infinity, it would appear as if I were marching into some new unknown future?

    I don’t know if I worded this clearly. Maybe another way to say it is: say my worldline is parametrized by x, with negative infinity < x < positive infinity, and let T be the parameter value at the point of lowest entropy in the universe. When I am at a point x such that negative infinity < x < T, I perceive time going from x in the negative-infinity direction; but when I pass through T, so that T < x < positive infinity, my perception of time swaps, going from x in the positive-infinity direction?


  • Wanu


    The fundamental flaw in the paper seems to be that it merely buries Loschmidt’s paradox beneath another layer of formalism. It is quite obviously true that people only learn about systems in a forward-time-direction when the entropy of those systems is non-decreasing in a forward-time-direction, as the paper argues. But, just as obviously, the previous sentence is also true if you replace both instances of the word “forward” with “backward”!

    Indeed, it is easy to imagine a universe—or, perhaps, a distant, causally disconnected corner of our own universe—in which people learn backwards in time about systems whose entropies grow large going backwards in time. Of course, to these people, “backwards in time” would mean “forwards in time.”

    But, in any event, we’ve solved nothing! A universe with time-reversal-invariant fundamental laws that allows people to learn forwards in time about forward-time-entropy systems obviously also allows “opposite people” somewhere else to learn backwards in time about backward-time-entropy systems. So how have we evaded Loschmidt’s paradox, even in the seemingly modest way that you grant the paper? Isn’t the paper actually useless?

  • Sean

    Joseph– If I understand you correctly, I think that’s basically right. The only nitpick is that it’s hard to imagine “you” traveling backwards in time and actually experiencing these things. You can never experience “time running backwards,” because our experience of time is tied to the growth of entropy.

    Wanu– I don’t think that’s quite fair. Loschmidt’s paradox rests on the fact that there are just as many decreasing-entropy trajectories through phase space as there are increasing-entropy trajectories, which is true (assuming reversible dynamics). Therefore you can’t mathematically prove that entropy increases along most trajectories, which is what Boltzmann was trying to do at the time, and Loschmidt’s criticism was fair.

    This paper isn’t trying to do that. The claim here is simply that memories accumulate in the direction of increasing entropy, which is fine. I think it’s a claim that most people in the field already accept, but it’s always nice to see an explicit demonstration.

    The problem is that the overwhelming majority of trajectories have the feature that entropy doesn’t change at all — it starts high, and stays high — and deviations from that tend to be very small. Our observable universe is a large deviation, a fact that remains to be explained.

  • Pingback: 24 August PM « blueollie()

  • Lorenzo Maccone

    Dear Sean, thank you for your reply. I think you are not accepting my premises, and in that case it is useless to go on discussing the consequences: clearly, if we use different premises we will generally disagree on the conclusions.

    My premise (clearly stated in my paper) is basically only ONE: I assume that quantum mechanics is valid at all scales.

    If you accept that, then it’s nonsensical to write about a “sensible macroscopic coarse-graining” as you do in your answer above. There is no such thing as a sensible coarse graining anywhere in quantum mechanics: a sufficiently powerful observer can keep track of all the unitary evolution of all microscopic degrees of freedom.

    If you consider my premise invalid, then you should say so explicitly (and our debate will presumably end there), but if you want to prove that my argument is fallacious, you should USE my premise and logically derive a different conclusion.

    Thank you for the interesting debate.


  • Nick Huggett

    I shouldn’t jump in without reading the paper a bit more carefully, but it apparently proceeds from T-reversal-invariant dynamics, so by symmetry it seems it should equally allow for past entropy-increasing processes (which it clearly does) AND future entropy-decreasing processes, BOTH leaving information in the present. Otherwise some arrow, some asymmetry, is already presupposed. So we are left with the question of why we have no record of the future. (It seems the paper does not contemplate literal collapses to an eigenstate, because it discusses measurements being undone. So it seems the only dynamics is of the T-symmetric Hamiltonian/unitary kind. It also does not seem to consider the T-reverse of its own analysis. But I was a bit cursory, so I apologize if I am wrong.)

  • Sean

    Lorenzo– I have no trouble agreeing with your premise that quantum mechanics is valid at all scales. Nor do I disagree with any of the results in your paper. I just disagree that they solve the arrow-of-time problem.

    I don’t understand what you are saying about coarse-graining in quantum mechanics. We do it all the time; we consider the set of states with certain expectation values for certain observables, etc. Of course if you know the exact pure state, and the exact Hamiltonian, you can evolve the state unitarily in either direction of time. But if you apply that to a closed system, the entropy is strictly constant. (In an open system it can of course change, due to entanglement.) So that’s clearly not what we have in mind when we talk about the arrow of time. For that, you need to coarse-grain. (The universe is, to a good approximation, a closed system.)
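
    The contrast drawn here (constant von Neumann entropy for a closed system under unitary evolution, growing entropy for an entangled subsystem) can be checked in a few lines. The following is an illustrative sketch only, from neither paper; it assumes a toy “universe” of two qubits and a CNOT standing in for a generic entangling interaction:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]              # discard numerical zeros
    return float(-np.sum(evals * np.log2(evals)) + 0.0)

# Toy closed "universe": two qubits in the pure product state (|0>+|1>)/sqrt(2) (x) |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
psi = np.kron(plus, zero)

# A CNOT stands in for a generic entangling (measurement-type) interaction
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi_out = cnot @ psi                          # unitary evolution of the closed system

results = {}
for label, state in [("before", psi), ("after", psi_out)]:
    rho_full = np.outer(state, state.conj())
    # Reduced state of qubit 0: partial trace over qubit 1
    rho_sub = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    results[label] = (von_neumann_entropy(rho_full), von_neumann_entropy(rho_sub))
print(results)
```

    The global state stays pure (zero entropy) throughout the unitary evolution, while the reduced state of the first qubit goes from zero to one bit of entropy: the entanglement-driven, open-subsystem behavior described above.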

    Nick– Thanks for joining in. I think the paper does assume reversible dynamics, and that both entropy-increasing and entropy-decreasing processes are allowed. But the point is that we would only remember the lower-entropy direction of time, so that any entropy-changing process is experienced as an entropy-increasing process. My objection is that this doesn’t explain why the entropy has been increasing consistently for billions of years over billions of light-years.

  • Blake Stacey

    My sleep tank is down to the fumes right now, so this might make no sense at all, but I might as well ask anyway:

    Why is the direction of increasing entropy also the axis picked out by the different sign in the Minkowski metric?

  • Sean

    Basically because the time direction is the one in which you have a well-posed initial value problem. From “the entropy was low in the past” we can derive “the entropy will increase with time,” whereas from “the entropy is low on your left” you can’t derive “the entropy will increase as you move to the right.”

  • Blake Stacey

    That gives me a wonderful idea for a science-fiction story. . . .

    (Greg Egan has probably gotten there first. Or, rather, leftwards.)

  • jr

    Given Minkowski geometry, why isn’t time measured with a ruler instead of a clock? If the signature were ++++ we would expect rulers.

  • Lorenzo Maccone

    Dear Sean, thank you for continuing the debate. Sure WE do coarse-graining all the time, but that’s hardly a FUNDAMENTAL limitation in our theory. It’s just due to the limitation of the information WE (subjectively) can access. If you assume quantum mechanics is valid at all scales, this limitation CAN be dropped (by a sufficiently powerful observer).

    What I’m saying is that a powerful observer will see a constant entropy evolution whereas a “normal” observer INSIDE the universe will see the SAME evolution as a coarse-grained/entropy-increasing one. The subjectivity of the entropy is the key in understanding why our observer time orientation coincides with the entropy-increasing direction, even though the physics (on a universal superobserver scale) is time symmetrical.

    In other words, our coarse graining is due to the fact that we lose part of the information through the correlations that build up between us and our environment (these are the ONLY INTRINSIC limitations we have: I’m neglecting plain, surmountable ignorance here). A superobserver (who remains factorized from us and our environment) wouldn’t have these limitations even when witnessing the very same time evolution. What he sees as a time-symmetrical, entropy-preserving evolution, we see as an entropy-increasing one.

    Thank you for the debate, and I’m looking forward to hearing back from you soon.

  • Michael Buice

    Hi Sean, it’s been awhile, I hope you are well.

    I’m going to have to agree with Lorenzo here. I think the connection between the two premises you are using is that the choice of coarse-graining is equivalent to stipulating your knowledge of the system. Classically, identifying the degrees of freedom at all scales is equivalent to stating complete knowledge of the dynamics. This is the classical equivalent of the super-observer in quantum mechanics.

    On the other hand, specifying a macroscopic coarse graining is equivalent to stating ignorance about what happens beyond the specified cut-off. The second law, in an information context, states that one’s ignorance about small-scale degrees of freedom will “infect”, via interactions, the larger-scale degrees of freedom; ergo entropy increases. Lorenzo’s result seems to be the quantum-mechanical analog of this fact: quantum entanglement is responsible for the apparent increase in entropy of a subsystem of a larger pure state.

    I can give you an explicit construction for a completely classical, time-reversible system from a paper I’ve written with Carson Chow. We studied the Kuramoto model of coupled oscillators at finite size by constructing a path integral for the population statistics. The inverse size of the population (1/N) serves as a loop expansion parameter. The relevant fact for the present discussion is that, although the dynamics is completely time reversible, the population statistics are not. It depends upon whether you ask questions about the past or the future.

    In fact, the neat result is that the loop corrections are responsible for the appearance of a diffusion operator in the population dynamics. The interpretation is that in the mean-field limit an individual oscillator decouples from the system, i.e. information about it at one time provides perfect information about it in the future and the past, whereas at finite size one’s ignorance about the other oscillators produces a “random walk” behavior. The system relaxes to a higher-entropy state, conditioned on our knowledge at a previous time. This is exactly the result of Boltzmann’s H-Theorem: without any assumption of “molecular chaos”, it falls out exactly, but it depends upon having a degree of ignorance about the system’s detailed dynamics. Knowing the entire configuration at one time restores the time-reversal symmetry.
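
    For readers unfamiliar with the model: the Kuramoto system couples N phase oscillators through their phase differences, and the degree of synchrony is tracked by an order parameter r. The minimal simulation below is an illustration only; it shows the model and its order parameter, not the path-integral/loop-expansion analysis of the Buice-Chow paper, and all parameter values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 200, 2.0, 0.05, 1500      # arbitrary illustrative parameters
omega = rng.normal(0.0, 0.1, N)             # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)      # random (incoherent) initial phases

def order_parameter(theta):
    """r = |mean of exp(i*theta)|: ~0 for incoherent phases, 1 for full sync."""
    return float(abs(np.mean(np.exp(1j * theta))))

r_initial = order_parameter(theta)
for _ in range(steps):
    # Mean-field form of the Kuramoto equation:
    #   dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
    z = np.mean(np.exp(1j * theta))
    theta = theta + dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
r_final = order_parameter(theta)
print(round(r_initial, 2), round(r_final, 2))
```

    With coupling well above the synchronization threshold the population locks, so r grows from roughly 1/sqrt(N) to near 1, even though the individual-oscillator equations are deterministic and invertible.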

  • Sean

    Sure, there is a very close connection between the “subjective” and “objective” notions of entropy. If I tell you not only that a system is in a certain macrostate at the present moment, but also that it was in a lower-entropy macrostate at some previous moment, you have a lot more information about the current state than if I had only specified the current macrostate.

    All of which is beside the point I am trying to make, which is in danger of getting lost among the interesting discussions of different notions of entropy. Namely: the early universe was in an extremely finely-tuned state, one that (given the coarse-graining implied by our macroscopic observables) we label “low entropy.” If you explain that, you explain the observed phenomenological arrow of time; if you don’t, you don’t. And nothing about quantum mechanics implies that the early universe had to have such a low entropy — it could have been much larger!

  • Spiv

    Michael Buice: maybe I’m misinterpreting, but are you saying you can predict the future population statistics but not the past ones? Or vice versa?

  • Nick Huggett

    Sean – it is nice to converse again! Anyway, my remarks weren’t directed at yours but at the original paper, making a distinct argument that it doesn’t solve all the problems of the arrow of time (a low-entropy initial condition does solve the problem, I believe). I’d be very interested to hear what Professor Maccone has to say, since it’s such a neat paper, so maybe if I spell it out a bit more he will be tempted to respond.

    Basically, the paper shows that processes ABC (with time increasing left to right) in which entropy increases are ones which leave information behind after C while processes XYZ with decreasing entropy leave none behind after Z. But if the dynamics is T-reversal invariant – as the unitary dynamics is, unless there’s an assumption about the Hamiltonian I’m not seeing – then it also follows that processes CBA (again with time increasing left to right) in which entropy decreases are ones which leave information ‘behind’ BEFORE C, while processes ZYX with increasing entropy leave none ‘behind’ BEFORE Z. (Technical aside: of course I should use the time reverse of state A, B, etc, but for brevity did not – in QM, the time reverse of a state is its complex conjugate, so more carefully I should have written, e.g., C*B*A*.)

    Just to be clear, my point so far just applies T-reversal to the result of the paper. Disputing what I just said requires demonstrating that the time reversal of the process is not possible, and hence explaining why the dynamics is not T-symmetric, and hence showing that an arrow is built into the dynamics – which it isn’t in standard unitary QM.

    Then the argument of the paper goes, therefore of processes in the past, we can only ever see records of those which are entropy increasing, any in which it decreased leave no trace. (Aside: to be honest, it’s hard to believe that. It means that if I ever were to watch a cup of water spontaneously boiling, or spontaneously freezing, then I couldn’t remember it. Really? I suppose the suggestion of the paper is that such things are happening all the time, but never leaving a trace.)

    But by symmetry we can say the T-symmetric thing about the T-reversed processes. Of processes in the future, we can only ever see ‘records’ of those which are entropy decreasing, any in which it increases leave no trace! Well that seems right about the entropy increasing processes, which we expect in the future, but of course there is in experience a complete and utter absence of ‘records’ of the future and hence of the entropy decreasing processes – but nothing has been said to explain why, as far as I can see. (Using natural language, we would generally call ‘records’ of the future ‘portents’, but I prefer to stick with ‘records’ to make the use of T-symmetry manifest.) By the T-symmetry of QM, for every entropy increasing process there is an entropy decreasing process, its T-reverse; by the proofs of the paper, if the former leaves traces after, the latter leaves traces before. Until it is explained why there is only one but not the other, an arrow remains. Perhaps in the future, but not in the past, we appeal to Boltzmannian considerations – entropy decrease is unlikely? But (a) the asymmetry of explanations is unpleasant, and (b) that seems to give ground to Sean’s views.

    A couple more points. (i) Basically I’m pointing out a hidden asymmetric assumption – ignoring the T-reverse of the entropy-increasing, information-leaving processes. The philosopher Huw Price has done much to unearth these in other contexts. (ii) Despite what I said earlier, it’s of course not true that we know nothing about future processes; we know a great deal about what will happen, we just don’t ‘remember’ it even in a T-reversed sense. For example, I know that the coffee in my cup will be cooling and heating up the room, leading to entropy increase. So if such knowledge is possible, why then shouldn’t the T-reverse be possible? That would be knowledge of a process in which the coffee spontaneously warmed up, decreasing the entropy. Of course we don’t think such processes occur, but if they did, why shouldn’t we know about them? That is contrary to the claim of the paper, that because they leave no traces, such processes are unknowable — but by symmetry they are just as knowable or unknowable as future entropy-increasing processes. (iii) It seems that information about the past shows that processes satisfy various conservation laws. That could only be the case if either (a) we have information about all the processes that did occur, or (b) they were energetically (etc.) isolated from any other processes; otherwise, if there were transfer between the processes we see and those we don’t, there wouldn’t be conservation in those we see.

    Ok, that turned into much more than I intended. The bottom line is: what about the T-reverse of the entropy increasing record leaving processes? And, I also agree with Sean’s points, which do take care of the issues I raise.

    Thanks, Nick

  • Leonardo

    [I will try to keep this short!]

    Dear Dr. Carroll,

    I’m a regular reader of CV and especially of your posts, which I really enjoy. Maybe this would be an interesting moment to comment on your idea of large/small entropy in the early universe and the arrow-of-time problem; I would like to have your feedback if possible. I’ve seen one of your lectures on this topic available on Google Videos, so I draw information about your view mostly from there. I have a criticism regarding your statement that once one accepts that the entropy was small in the early universe, the problem is solved by Boltzmann’s physics. I believe you may be referring to the H theorem:

    dS/dt >= 0

    However, Boltzmann’s H-theorem is not a statement that S always increases, as was pointed out immediately after the publication of the theorem by Loschmidt. Suppose that one observer sets up his coordinate system and experiences an increase in entropy for what he calls positive time intervals. Then another observer, related to the first by a time reversal, can still satisfy Boltzmann’s H-theorem by detecting a decrease in entropy. So if we allow time reversal in the underlying mechanics, a single universe can contain both observers who see entropy increase and observers who see entropy decrease. It is a physical requirement not contained in Boltzmann’s H-theorem that spacetime must be time orientable. Therefore, when one concludes from Boltzmann’s H-theorem that entropy always increases or remains constant in a physical process, one is already assuming that no time reversals are allowed in the fundamental theory.

    There are also several other important criticisms of the view that the arrow-of-time problem reduces to a question of the amount of entropy in the early universe. First, entropy is to some extent a subjective concept, because it is the amount of missing information in a chosen description of a system. If I solve Newton’s equations for 1 mol of atoms, assuming they obey such an equation, then the total entropy is zero. However, if I decide to average over all velocities and positions keeping the total kinetic energy fixed at E, and omit all information on the individual positions and velocities, then on maximizing the amount of missing information I get the entropy of the system and a Lagrange multiplier, the temperature. If I also want to keep the number of particles fixed I get an extra parameter, the chemical potential. There is nothing original in this; I’m just describing the work of Jaynes on Statistical Mechanics. Hence, entropy is a probabilistic artifact of a choice of description. It is hard to see why time would be strictly related to the fact that I’ve chosen to omit certain variables. It’s much more likely that there is no such thing as time reversal in the underlying laws, which is also what you need in order to conclude from Boltzmann’s H-theorem that S can only increase (or decrease, but not both in the same universe).

    Secondly, we see that gravity does have a time arrow. Consider a Schwarzschild black hole, for instance. There are geodesics that start in the interior of the black hole and end in the exterior: just write the radially infalling geodesic with the time parametrization reversed. However, by a *physical assumption* (or experimental fact!), if one observer can fall inside the black hole, we *postulate* that the time-reversed geodesic is not allowed in the same spacetime. Hence, one gets a black hole. But even with such a restriction, Einstein’s equations allow you to write the time-reversed sector, the white hole in the continuation of the Schwarzschild spacetime; yet we see no astrophysical white holes, only black holes. From GR you would be forced to conclude that if fundamental physics really does have a T symmetry, for every astrophysical black hole in the universe there should be a white hole, but this is not what is seen. So a time arrow must be inserted, built in, even in fundamental physics. This is such a simple conclusion to draw from the Schwarzschild spacetime that I don’t see how it could be wrong, but maybe I am.

    Thirdly, Boltzmann’s H-theorem, and my understanding of Statistical Mechanics and equilibrium systems, says that entropy can stay constant. It is just not true that “time stops” if a system is in equilibrium: individual particles still collide, and one still has to apply to these constituents the scattering theory in which “past” and “future” are defined (in fact, such a scattering theory, with a sharp definition of past and future, is the starting point for proving Boltzmann’s H-theorem). It is hard to see theoretically how one would justify the arrow of time, or even understand what time is, if one insists that the passage of time is related exclusively to the observation of entropy increase, since we can perform gedankenexperiments in which S stays constant.
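
    The Jaynes point above (that maximizing the missing information at fixed mean energy yields a Gibbs distribution, with the temperature appearing as a Lagrange multiplier) can be made concrete with a toy calculation. This sketch is an illustration only; the three energy levels and the target mean energy are made-up numbers:

```python
import math

# Toy system: three energy levels; fix the mean energy and maximize the
# missing information (Shannon entropy). The maximizer is the Gibbs
# distribution p_i proportional to exp(-beta * E_i), with beta the
# Lagrange multiplier (inverse temperature).
E = [0.0, 1.0, 2.0]
target = 0.8                     # fixed mean energy <E>

def mean_energy(beta):
    w = [math.exp(-beta * e) for e in E]
    Z = sum(w)
    return sum(e * wi for e, wi in zip(E, w)) / Z

# <E>(beta) decreases monotonically in beta, so solve <E>(beta) = target by bisection
lo, hi = -20.0, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_energy(mid) > target:
        lo = mid                 # mean too high: need larger beta
    else:
        hi = mid
beta = (lo + hi) / 2

Z = sum(math.exp(-beta * e) for e in E)
p = [math.exp(-beta * e) / Z for e in E]
entropy = -sum(pi * math.log(pi) for pi in p)
print(round(beta, 3), [round(pi, 3) for pi in p], round(entropy, 3))
```

    With full microscopic information (the Newton’s-equations case above) there is nothing left to maximize and the entropy is zero; the positive entropy here appears only because the description was deliberately coarsened to a single constraint.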


  • Lorenzo Maccone

    Dear Sean, I have done exactly what you’re asking for. I have given you a good reason why the universe should indeed be in a zero entropy state (which is the fine-tuned state you want).

    In addition, I have shown that (thanks to QM) this zero entropy state of the universe appears SUBJECTIVELY to US to be in a much higher entropy state (because we are entangled subsystems of the universe). It is in a zero entropy state only from the SUBJECTIVE point of view of a superobserver.

    Then, I have given you a reason why, even though the superobserver sees a constant-entropy evolution, we see an entropy-increasing evolution (which is the arrow of time).

    This is exactly what you’re asking for. I don’t see the point on which we are disagreeing.

    Thank you for keeping up with the debate! I’m hopeful that we are converging to a common understanding, and I appreciate that.
    All the best,

  • weichi


    But the fine-tuning isn’t that the universe is in a pure state, is it? Does “universe in a pure state” always imply “any observer looking at the early history of the universe will see a low-entropy state”? Couldn’t there be pure states of the universe for which the entropy seen by observers in the early universe was high?

  • jr

    If space and time are on the same footing, then how come we do not have to apply entropy to space?

  • Brian Mingus

    Dear Lorenzo,

    You claim that only two people are capable of having the information needed to conclude that the entropy of the universe is constant: a superobserver, and the mathematician who proved it is so. But as we well know from the plethora of phenomena predicted by string theories, it is all too easy to jot down an equation without being able to prove, in a more rigorous and scientific sense, that the theory is correct. So I would like to ask, is your theory falsifiable? How might I go about proving it to be false? Also, does your theory make any new predictions about the universe? Lastly, if I may: in what sense do you consider this result to be science?

    Thank you,

  • Huw Price

    Hi Sean

    At risk of bringing even more considerations into play, here, with Lorenzo’s permission, is the text of three emails we’ve exchanged over the past couple of days. It is my turn to reply, and I’ll do that here, when I’ve figured out which issue it might be most useful to concentrate on.



    Dear Prof. Price,
    the science journalist Michael Slezak recently contacted you about my theory on the thermodynamic arrow of time arising from quantum mechanics [PRL, 103, 080401 (2009)].

    Since your objection to my theory is very non-trivial but can be easily overcome, I wanted to answer it. I’m quite familiar with your book and with various other publications of yours, and I know you ferociously criticize the “no collapse” interpretation of quantum mechanics. However, you can certainly admit (as a pure hypothesis) that unitary quantum mechanics can be valid at all scales. This is the explicit (and unique) starting point of my theory. You might not agree that this is the case, but please indulge me (consider it a purely artificial hypothesis, if you want).

    You wrote to Michael Slezak:

    The proposal to explain the thermodynamic arrow in terms of the effects of observers has an obvious flaw: it doesn’t explain why all observers have the same orientation in time. Why don’t some observers remember what we call the future, and accumulate information towards what we call the past? The answer can’t lie in the thermodynamic arrow, if that depends on the existence of observers.

    The answer to this lies in the fact that, if quantum mechanics is valid at all scales, a measurement is nothing other than an entanglement between the observer and the system to be observed. Namely, consider the unitary evolution that describes the observer performing a measurement: this evolution will start from factorized states of the observer and the system, and will evolve them into an entangled state of the two:

    (|spin up>+|spin down>)/sqrt(2) |Alice>
    evolves into (|spin up>|Alice sees up>+|spin down>|Alice sees down>)/sqrt(2)

    If a second observer looks at the measurement result (i.e. when multiple observers get entangled among themselves) their global entangled wave function (of the system + two observers) is such that they both agree on the results of any measurement:

    (|spin up>|Alice sees up>|Charlie sees up>+ |spin down>|Alice sees down>|Charlie sees down>)/sqrt(2)

    [This was pointed out by Hugh Everett in his PhD thesis. Although I don’t agree with many of his other conclusions, I think that this is quite uncontroversial.]

    This argument is based ONLY on the UNITARY (i.e. time-symmetrical) part of quantum mechanics, so that there’s no circularity in my explanation, namely, I’m not using a hidden underlying time arrow to explain the thermodynamic time arrow.

    That’s why, even though the quantum theory is entirely time symmetric, ALL the observers agree in which direction time is flowing, simply because of the way they get entangled by the unitary time evolution when they interact among themselves!!

    I hope this answers your objection to my paper, and I hope to hear back from you soon.

    All the best,

    Lorenzo Maccone
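
    The two entangled states in the email above can be reproduced numerically. The sketch below is an illustration only (ideal qubits, with CNOTs standing in for the measurement interactions described): it builds spin + Alice + Charlie as three qubits, entangles them by purely unitary steps, and lists the surviving branches, in each of which the two observers’ records agree:

```python
import numpy as np

ZERO, ONE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ZERO + ONE) / np.sqrt(2)

# spin (x) Alice (x) Charlie, all initially uncorrelated
psi = np.kron(np.kron(plus, ZERO), ZERO)

def cnot(control, target, n=3):
    """Unitary on n qubits that flips `target` when `control` is |1>."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1.0
    return U

# Alice "measures" the spin, then Charlie "reads" Alice -- both pure unitaries
psi = cnot(0, 1) @ psi      # spin -> Alice record
psi = cnot(1, 2) @ psi      # Alice -> Charlie record

# Amplitude is left only on |000> and |111>: in every branch the records agree
for i, amp in enumerate(psi):
    if abs(amp) > 1e-12:
        print(format(i, "03b"), round(amp, 3))
```

    No collapse is invoked anywhere: the agreement between Alice’s and Charlie’s records is enforced purely by the entangling unitaries, which is the Everett-style point the email makes.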


    Dear Lorenzo

    Thanks for your email, and for taking the trouble to clarify your argument for me. Let me try to develop my concerns a bit.

    First, let’s go back to the classical case. In your paper, you mention Borel’s argument, about the rate at which correlations accumulate as a result of interactions.

    Question: If the dynamics are time-symmetric, why should the accumulation of correlations be time-asymmetric? Why should interactions increase correlations ‘to the future’ but not ‘to the past’?

    As you may know, I think that these issues are relevant to the so-called interactionist proposal concerning the origin of the thermodynamic asymmetry, namely that the irreversible increase in entropy is caused by uncontrollable influences from the environment. I say to the interactionist: Why is it that influences coming in from the past result in entropy increase towards the future, whereas influences coming in from the future do not result in entropy increase towards the past? They say: It is because influences coming in from the future are precisely correlated, in just the way they need to be for entropy to decrease towards the past. I say: There’s the double standard — your argument that entropy increases towards the future needs to assume that influences coming in from the past are not correlated in the precise way they need to be for entropy to decrease, but that’s just assuming the conclusion you want to derive. And to cut a long story short, I want to say that the right answer in the classical case is that there is no relevant asymmetry in, or due to, the dynamics of interacting systems: the asymmetry of the 2nd law rests entirely on special initial conditions.

    Now let’s consider the QM case, and I set aside any objection to “no collapse” views.

    Your argument relies on the assumption that unitary evolution of interacting systems produces entanglement, but this is puzzling in precisely the same way as in the classical case: given that the dynamics itself is time-symmetric, why should it produce increasing entanglement in one direction but not the other? Where does decoherence get its time-asymmetry from? (In a sense, this is a QM version of Loschmidt’s puzzle: where does the asymmetry come from, given that the dynamics is time-symmetric?) I suspect that, as for the 2nd law in the classical case, the answer must be that there is a very special initial state — and that it is simply being ASSUMED that there are no special final states of the same kind.

    If I understand you correctly, your proposal is that observers break the symmetry. I have several concerns about this idea, but let me concentrate on the issue about where the asymmetry of agents comes from, on your view. It seems to me that the argument from Everett you mention below assumes that the two observers have the same time sense — i.e., they do not disagree about which direction is the past and which is the future. I would like to see an analysis which doesn’t take this for granted, and which doesn’t assume that QM interactions cannot produce recoherence, as well as decoherence.

    If we grant the assumption that all observers must have the same temporal orientation, then there is an interesting question about the status of this fact. Is it a kind of accident that a universe has observers with one temporal orientation rather than the other? Is it a kind of law (could physics have laws about observers)? Or can it be explained in terms of something else? The thought behind my comment to Michael was that in general, we find it plausible to explain the existence and temporal orientation of observers such as ourselves in terms of the thermodynamic properties of our environment; but this approach seems off-limits to you, since you want to explain the thermodynamic gradient in terms of observers.

    Anyway, that’s enough for now. Thanks again for taking the trouble to write to me.

    Best wishes


    Huw Price wrote:

    Dear Lorenzo Thanks for your email, and for taking the trouble to clarify your argument for me. Let me try to develop my concerns a bit. First, let’s go back to the classical case. In your paper, you mention Borel’s argument, about the rate at which correlations accumulate as a result of interactions. Question: If the dynamics are time-symmetric, why should the accumulation of correlations be time-asymmetric? Why should interactions increase correlations ‘to the future’ but not ‘to the past’?

    Dear Huw, thanks to you for your interest in my paper! Being familiar with your work, I expected this “double standard” accusation! However, in my case I think it is without merit, as my theory IS completely time symmetric. In fact, correlations are increased both ‘to the future’ and ‘to the past’. The important thing to realize is that memory is a correlation (between the observer and the observed). So we retain a memory as long as the correlation is present. I DEFINE the past for the observer as “the direction in time of which we have memories”. Although subjective, it is a good definition that captures well what we “feel” is the past. Then, whatever the direction of time, correlations are increased only in “our” past… [In the last paragraph of the quant-ph version of my paper, arXiv:0802.0438, I elaborate on this, but I had to cut it from the PRL version.]

    Perhaps you have a different definition of “past”? Honestly, I can’t think of any other definition that doesn’t employ the 2nd law (which would then become a mere DEFINITION for the direction of time)…

    Already one of the paper’s Referees had asked about the symmetry in time of my theory. This is what I had answered him/her:

    Finally, regarding the Referee’s question of what would an observer traveling backwards in time see, my theory suggests that such an observer would still only see (or rather, remember) the events that lead to entropy increase. In the time-reversed universe, the information is created and erased at the same instants in which it is respectively erased and created in the direct-time universe: reversing the arrow of time, the events that lead to creation and to merging (erasure) of correlations are swapped. Observers still see only events where entropy has not decreased, namely where information has not been “erased”, although time is flowing in the opposite direction. The time symmetry of the theory requires this, and it is correctly recovered.

    As you may know, I think that these issues are relevant to the so-called interactionist proposal concerning the origin of the thermodynamic asymmetry, namely that the irreversible increase in entropy is caused by uncontrollable influences from the environment. […] And to cut a long story short, I want to say that the right answer in the classical case is that there is no relevant asymmetry in, or due to, the dynamics of interacting systems: the asymmetry of the 2nd law rests entirely on special initial conditions.

    I do not agree with that. My theory would suggest that whatever the initial conditions, the entropy would only increase towards the future, just because even if entropy-increasing and entropy-decreasing transformations were EQUALLY likely, it’s only entropy-increasing transformations that leave a trace of their having happened. This is a purely quantum effect, so you have to assume that quantum mechanics is valid at all scales. This is my (explicitly stated) premise.

    Of course if you are in a zero entropy initial state, only entropy-increasing transformations CAN happen (there’s nothing to disentangle), and vice versa, if you are in a thermal equilibrium state of maximum entanglement, only entropy-decreasing transformations can happen (it is impossible to further increase the entanglement). So I’m NOT saying that initial conditions are irrelevant, I’m just saying that they do not determine the direction of the arrow of time.

    Now let’s consider the QM case, and I set aside any objection to “no collapse” views. Your argument relies on the assumption that unitary evolution of interacting systems produces entanglement, but this is puzzling in precisely the same way as in the classical case: given that the dynamics itself is time-symmetric, why should it produce increasing entanglement in one direction but not the other? Where does decoherence get its time-asymmetry from? (In a sense, this is a QM version of Loschmidt’s puzzle: where does the asymmetry come from, given that the dynamics is time-symmetric?) I suspect that, as for the 2nd law in the classical case, the answer must be that there is a very special initial state — and that it is simply being ASSUMED that there are no special final states of the same kind.

    I think I’ve already answered this above, but let me try again: entanglement is indeed produced in both time directions. Decoherence is usually described as a one-way process, since one always starts from your assumption of a low-entropy initial state, namely, that it is more probable that entanglement is created during the measurement process than it is erased. Instead, what I point out is that decoherence is reversible! (at least if one looks at the big picture of the unitary evolution of the universe). Measurements can be undone (but the measurement result must be irretrievably erased). The reason why we don’t see “undone measurements” in our past is that, since the measurement result has been erased, we don’t think that that transformation was a measurement…

    If I understand you correctly, your proposal is that observers break the symmetry. I have several concerns about this idea, but let me concentrate on the issue about where the asymmetry of agents comes from, on your view. It seems to me that the argument from Everett you mention below assumes that the two observers have the same time sense — i.e., they do not disagree about which direction is the past and which is the future. I would like to see an analysis which doesn’t take this for granted, and which doesn’t assume that QM interactions cannot produce recoherence, as well as decoherence.

    Unfortunately I don’t have Hugh Everett’s thesis here with me (it’s cited as ref. [23] of my paper), but I recall that the argument that all entangled observers agree among themselves was very carefully laid out (it is essentially what I wrote you in my previous mail). As I said above, I DON’T assume that QM cannot produce recoherence. This CAN certainly happen, but it won’t leave any trace in the environment (for the simple reason that any remaining trace in the environment is due to entanglement between system and environment and that will PREVENT any recoherence).

    If we grant the assumption that all observers must have the same temporal orientation, then there is an interesting question about the status of …

    As I wrote you (following Everett’s argument) I don’t think it is an assumption, but a CONSEQUENCE of the linearity of QM.

    … this fact. Is it a kind of accident that a universe has observers with one temporal orientation rather than the other? Is it a kind of law (could physics have laws about observers)? Or can it be explained in terms of something else? The thought behind my comment to Michael was that in general, we find it plausible to explain the existence and temporal orientation of observers such as ourselves in terms of the thermodynamic properties of our environment; but this approach seems off-limits to you, since you want to explain the thermodynamic gradient in terms of observers.

    Let’s summarize: I define the observer’s temporal orientation as “past is the direction of time where memories are”. Then I relate this to the thermodynamic gradient by pointing out that (in QM) entropy is strictly connected to entanglement (correlation), which is strictly connected with memories. I emphasize that this is not true of classical mechanics.

    In addition, as you pointed out, it’s quite important that all observers agree on the temporal orientation, and I think that Everett’s argument shows just that.

    Thank you again for your careful thoughts on my paper. I hope to hear from you soon!
    All the best,


  • boreds

    Dear Sean,

    I’ve always enjoyed your posts on the AoT, and this one has finally encouraged me to formulate a comment.

    I am wondering if there are two distinct questions. First, why was the early universe in a relatively low entropy state? We don’t know, but we do know that it was, giving rise to what you called a phenomenological arrow of time.

    Second question. Why do teacups smash when we drop them but never spontaneously reform? Is this also explained by the universe being in a state of low initial entropy?

    I have always wondered if it is relevant that we have the sense of being able to set ‘initial’ conditions for an experiment, but not to fix final conditions, and that that is where the asymmetry comes from. Is the paper by Maccone telling us that we can only ever remember setting initial conditions for an experiment?

  • John R Ramsden

    @Aaron Sheldon [17] “At the risk of sounding trite, but the only thing that prevents wave function collapse is willful ignorance.”

    Trite maybe not, but glib more likely. Although your intended meaning can be guessed, a little more detail would have been helpful. As far as I’m aware, this blog is aimed at interested (and reasonably well informed) amateurs and students, as well as professionals.

  • wolfgang

    Lorenzo and Huw,

    when you say

    (|spin up>+|spin down>)/sqrt(2) |Alice>
    evolves into (|spin up>|Alice sees up>+|spin down>|Alice sees down>)/sqrt(2)

    do you not already make an assumption about the initial state of the world being quite special in this case? In other words, why is the wave function not ‘fully entangled’ to begin with?
    I guess this is similar to Sean’s objection, but in more general terms: if you use the Everett interpretation (MWI), then one issue that needs to be explained is why you can distinguish between |Alice> and |Bob> etc., which is related to the question of the preferred basis. This may be resolved by decoherence, but then you already (implicitly) assume an arrow of time imho.
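    Wolfgang’s point can be made concrete with a toy calculation (a numpy sketch of my own, not anything from Maccone’s paper; the CNOT standing in for the measurement interaction is an illustrative assumption). Start from the uncorrelated product state written above, apply the CNOT, and the spin’s entanglement entropy jumps from zero to one bit; the uncorrelated starting point is exactly a low-entropy boundary condition.

```python
import numpy as np

def entanglement_entropy(psi):
    """Entropy (in bits) of the spin's reduced state, for a 2-qubit pure state."""
    m = psi.reshape(2, 2)          # rows: spin, columns: Alice
    rho_spin = m @ m.conj().T      # partial trace over Alice
    ev = np.linalg.eigvalsh(rho_spin)
    ev = ev[ev > 1e-12]
    return max(0.0, float(-(ev * np.log2(ev)).sum()))  # clip tiny round-off

spin = np.array([1.0, 1.0]) / np.sqrt(2)   # (|up> + |down>)/sqrt(2)
alice = np.array([1.0, 0.0])               # |Alice>, a definite "ready" state
psi0 = np.kron(spin, alice)                # uncorrelated product state

# CNOT models the measurement interaction: Alice's state flips iff spin is down.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi1 = CNOT @ psi0   # (|up>|Alice sees up> + |down>|Alice sees down>)/sqrt(2)

print(round(entanglement_entropy(psi0), 6))  # 0.0: no correlation before
print(round(entanglement_entropy(psi1), 6))  # 1.0: one full ebit after
```

    The calculation only works because psi0 is a product state; a generic ("fully entangled") initial wave function would start with nonzero entanglement entropy, which is the special-initial-condition worry in code form.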

  • jr

    i remember someone distinguishing between the Measurement of time and the Phenomenon of time – the phenomenon does not seem to be vectorial but rather a sequence of transformations.

  • Notatheist

    Forgive me for inserting what will probably sound like ignorant queries into a fascinating debate, but I have a two-part question (and yes, I’m new here):

    1. Isn’t entropy necessarily subjective due to Relativity? If I am travelling at close to the speed of light relative to the rest of the universe, will the universe not appear to slow down, thus cool down and have more entropy than if I were travelling at a slower speed? Wouldn’t I see the universe speed up and thus lose entropy as I slowed down?

    2. I’ve recently read The Black Hole War by Leonard Susskind. In it he posits that we actually live on the 2 dimensional event horizon of the universe, and not in the 3d interior that we perceive. If this is true, isn’t it possible that we see increasing entropy because the universe is expanding into thermal equilibrium? In other words, as matter anti-matter pairs fluctuate into existence near the event horizon and the matter particle falls into the event horizon of the universe, couldn’t this explain why we experience a net increase in entropy?

  • John-Paul

    I don’t see this “arrow of time dilemma” as a dilemma at all. I am having problems seeing how the direction of the arrow of time and the evaluation of entropy are not inherently intertwined. The initial unitary definitions of the system define the direction of the arrow of time, and if you change the direction of the passage of time without changing the initial definitions of the other units, then haven’t you missed the first major step of reconciling the two mathematical systems you are trying to equate? In my understanding “macroscopic entropy will always increase” is synonymous with “absolute values will always be positive”… but possibly I’m missing a few dozen steps and possibly the underlying question. This seems like a thought experiment gone horribly wrong to me…

  • Sean

    Quick fly-by comments:

    * Leonardo, I wasn’t thinking of the H theorem. You don’t need that theorem to argue that entropy increases, *if* you begin with the assumption of a low-entropy initial (not final) condition.

    * Lorenzo, it doesn’t matter whether the state of the early universe is pure or not. The problem is that it’s a very finely-tuned state, one that looks macroscopically like a very small number of other states. That’s also weichi’s point.

    * I think Huw’s points are good, and that wolfgang hits the nail on the head: starting with an uncorrelated observer/system quantum state is implicitly a low-entropy boundary condition.

    * boreds, I think that the combination of Boltzmann’s understanding of entropy plus a low-entropy early universe is sufficient to explain irreversible processes in our everyday lives, including breaking teacups. Takes some steps to get there, of course.

    * Notatheist, these are big questions, but roughly (1) not really, and (2) the universe may be holographic, but that doesn’t change the basic puzzle that it started in an anomalously low-entropy state.

  • Michael Buice

    From Spiv: “Michael Buice: maybe I’m misinterpreting, you’re saying you can predict the future population statistics but not the past ones? Or visa verse?”

    Spiv, by asymmetry of population statistics I am referring to the fact that given a known configuration at one time, the inference about future configurations and past configurations is not symmetric. It is the difference in the questions “where can we go from here?” and “how could we have gotten here?”. The population statistics of the first could be described by a Fokker-Planck equation (for some systems) whereas the second would be the adjoint of that equation.

    As a quick example, this is important in pricing options. The Black-Scholes option pricing formula is a “backwards” equation which calculates the expected value of an option based on the future value of the underlying asset. You can’t price an option correctly by asking what the possible stock values will be given some current configuration. The conditional probabilities are asymmetric between the past and the future.
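    The forward/backward asymmetry Michael describes can be sketched with a toy Markov chain (illustrative numbers of my own, not Black-Scholes): the forward question needs only the transition matrix, while the backward question needs Bayes and hence a prior over earlier states.

```python
import numpy as np

# Toy 2-state Markov chain: state 0 = "intact teacup", state 1 = "broken".
# P[i, j] = Prob(next = j | now = i); the numbers are made up for illustration.
P = np.array([[0.9, 0.1],    # intact usually stays intact
              [0.0, 1.0]])   # broken never spontaneously reassembles

# "Where can we go from here?": forward prediction applies P directly.
now = np.array([1.0, 0.0])   # we observe the cup intact
forward = now @ P
print(forward)               # [0.9 0.1]

# "How could we have gotten here?": backward inference needs Bayes, and
# therefore a prior over the earlier state (the analogue of a boundary
# condition); the conditional probabilities are not symmetric in time.
prior = np.array([0.5, 0.5])           # assumed prior, for illustration
likelihood = P[:, 1]                   # Prob(now = broken | earlier = i)
posterior = prior * likelihood
posterior /= posterior.sum()
print(posterior)                       # [0.0909... 0.9090...]
```

    The forward answer is fixed by the dynamics alone; the backward answer changes if you change the prior, which is the "adjoint equation" point in miniature.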

    Sean, if I understand your points correctly, you aren’t really concerned with the “arrow of time”, per se, which explains how and why entropy increases and was understood classically and now, quantum mechanically. Your issue seems to be the question “why is (was) the early universe so damn special?”. Aren’t you mislabeling your question?

  • Sean

    I don’t think I’m mislabeling the question. I think part of the arrow of time is perfectly well understood: why, if we begin in a low-entropy state, entropy tends to increase. Another part is not understood: why we began in a low-entropy state. The interesting unsolved part of the arrow-of-time puzzle is why the early universe was special.

  • Michael Buice

    So you’re thinking of the “arrow of time” like induction. The induction step is solved, but we need to establish the basis step, i.e. explain the early universe.

    I can appreciate this, but it seems to conflate two separate questions in my mind, the first being the relationship between information and physical dynamics (which clearly explains temporal asymmetry, i.e. it has a natural direction) and the second the question of our universe’s configuration.

  • Sean

    The relationship between information and physical dynamics does not explain the actual temporal asymmetry we observe in our universe. To do that, we need a boundary condition. Which is not to say that studying the relationship between information and temporal evolution isn’t interesting or important; it’s just insufficient to explain the observed arrow of time, which was my point in the original post.

  • Michael Buice

    I don’t understand your comment about a boundary condition. As far as what we observe right now, our current configuration is sufficient. Implicitly, all of our observations are conditioned upon our current configuration and as far as the fundamental dynamics of the universe is concerned, this is enough.

    Shouldn’t we be introducing adjectives at this point for our arrows? You are worried about the so-called “cosmological arrow-of-time”, which is distinct from the thermodynamic arrow, aren’t you? This still seems to me to be a question of the configuration of our particular space-time and not a question of why we experience a directionality to the flow of time.

  • Jason Power

    I’m so glad that this is being discussed here – what luck that it’s on a blog that I frequent! – as I’ve been thinking about Maccone’s paper for a few days now and don’t have real, live people who know more than I do to discuss it with.

    I have a question about whether we can explain why the universe began with such a unique, low entropy, finely tuned state with what Maccone’s paper says. The claim that processes that leave records through entanglement are strictly entropy increasing doesn’t seem to have been refuted. I wonder, then, whether it must be true that there can be no record of anything having occurred with an entropy lower than zero, as in anything happening before what we’d call t=0 (when the direction of the past is defined as Maccone does – by our memory). What would be the confusion if this explanation is used for why the early universe is special? The early universe must have the lowest entropy, since anything that came before it must’ve had an even lower entropy. Is there no reason to conclude that a low-entropy beginning will be finely tuned?

  • Sean

    Michael, our current condition is enough to predict the future, but not enough to reconstruct the past. I’m certainly interested in the thermodynamic arrow of time. My point is that we wouldn’t have such an arrow that was consistent throughout the observable universe (or even consistent between me and you) if it weren’t for an early boundary condition.

  • boreds

    Hi Sean

    Is the connection between a low-entropy early universe and the irreversibility of processes in our everyday lives something you explore in The Book? I would look forward to seeing that argument laid out somewhere.

  • Sean

    I did my best to explain it, especially in Chapter Nine. Parts of the story are incomplete, which is why papers like Lorenzo’s are useful (even if I don’t think they constitute a complete answer).

  • John-Paul

    The early boundary conditions are the problem? With ~85% of the total system admittedly not understood or not accounted for, is it really prudent to attempt to account for the entire system’s entropy?

  • Michael Buice

    I’m starting to understand your argument a little better, but I’m not there yet. Are you saying you believe that the arrow of time is explained locally but not globally? There could be a planet of Benjamin Buttons out there? I don’t see how this can be the case in a universe that is mostly flat.

    Or are you saying that the early universe’s entropy is a necessary component in explaining why gas molecules do not suddenly congregate in the corner of a room? I hope not because our present knowledge of stat mech and thermodynamics answers this question nicely as explained by Boltzmann, Szilard, & Jaynes.

    I don’t understand your statement about reconstructing the past. We can do inference in either direction and that inference is exactly consistent with thermodynamics. The early universe doesn’t affect this.

  • confused but well fed by this conversation

    Is it reasonable to say that:

    (a) metric effects influence the magnitude of the arrow of time but not its direction

    (b) if we reverse the arrow of time all geodesics reach the hot dense boundary (at which point with our current understanding they become untraceable)

    (c) subject to the relativistic speed limit, any observer in any frame will see directionality of the arrow arising from (b) no matter what slice of spacetime the observer is studying

    (d) the observer in (c) will note metric distortions to the point of (asymptotically) stopped clocks but will not see clocks ticking in the wrong direction (“backwards” in our case)

    consequently the hot dense boundary is a universal candidate for “time 0” whether we are counting down towards it or counting up from it, no matter what frequency standards we are using?

    Universal candidates for anything are spooky.

    P.S.: JR @ 38 – “[W]hy isn’t time measured with a ruler instead of a clock?” — using geometrized units (for example, G = c = 1) we have a ruler useful for all dimensions, with marks at one second intervals. Because physical processes which produce frequency (in Hz) drive clocks, and metric distortions cause frequencies of nonlocal clocks to appear to run fast or slow, nonlocal rulers marked in seconds can appear to be too short or too long. “Local” here means that the observer and clock fully agree on the frequencies of everything they both can see, or alternatively that the observer and ruler agree on the lengths and areas (and their reciprocals) of everything they both can measure; this is a question of inertia (because of G and c) rather than spatial or temporal proximity.

  • Lorenzo Maccone

    Dear Sean, I think you lost me. I’m emphatically NOT just saying that the initial state of the universe is pure (which, as you say, wouldn’t be a satisfactory solution). I’m saying that the state of the universe is PURE AT ALL TIMES (at least from the point of view of the superobserver).

    As you point out yourself in your blog entry, the problem with the time arrow is not so much that the initial state is a low-entropy one, rather that it is a much lower entropy state than today’s. I give a quantum mechanism that shows how the universe’s entropy is CONSTANT: it was zero initially, and it is still zero now (at least from the point of view of the superobserver).

    Why doesn’t it appear so to us? Because we are entangled subsystems of the universe, so correlations between us and our environment give us the SUBJECTIVE impression that the universe is in a high entropy state. In addition, I also gave an argument on why we feel that this entropy is also increasing.
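    The constant-entropy part of Lorenzo’s claim is easy to check in a toy model (a numpy sketch of my own, with an arbitrary random unitary standing in for the universe’s global dynamics): a globally pure state stays pure, so its von Neumann entropy stays exactly zero under any unitary evolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def vn_entropy(rho):
    """von Neumann entropy in bits of a density matrix."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return max(0.0, float(-(ev * np.log2(ev)).sum()))  # clip tiny round-off

# Toy "universe": an 8-dimensional pure state, so its entropy is zero.
dim = 8
psi = np.zeros(dim); psi[0] = 1.0
rho = np.outer(psi, psi)

# A random unitary (QR of a random complex matrix) stands in for the
# universe's global time evolution.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)
rho_later = U @ rho @ U.conj().T

print(round(vn_entropy(rho), 6))        # 0.0 initially
print(round(vn_entropy(rho_later), 6))  # 0.0 later: unitaries preserve entropy
```

    The subjective part of the claim is the complement of this: subsystems of the evolved pure state can still have large reduced-state entropy even though the global entropy never moves.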

    I hope I can recapture your attention to end this very interesting debate. Thank you,


    ps I apologize to the other people commenting in this blog: I’m trying not to increase the entropy of the conversation too much by limiting my replies to Sean and Huw’s questions, since I was discussing privately before with them.

  • Charles

    The tools we are using to address the problems here are human minds, which, as epiphenomena of some awesome chemistry, are governed by thermodynamics. Does this place inherent limits on our ability to discuss this issue?

    Specifically, is dS/dt >= 0 telling us as much about the nature of time as it is about entropy? Does the *absolute* value of S as perceived by us tell us the absolute value of T?

    So: Given that there are more high-entropy states than lower-entropy ones, is the flow of time the same thing as a statistical progression to higher-entropy states? And is the perception of this being a progression from low time to higher time purely because we have thermodynamic minds?

    If this were the case, then we would expect to see that the past universe is in a lower entropy state, while this would not be true for a super-observer whose mind is not constructed of thermodynamic processes running within the universe? Such an observer would perceive the universe as a set of all the states that the universe has occupied, and would make no distinction between the ‘Absolute Time’ of a state and ‘Absolute Entropy’ of a state?

    Or is this all my utterly trivial misunderstanding?
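    The counting intuition behind “more high-entropy states than lower-entropy ones” can be illustrated with the standard coin-flip toy model (an example of my own, not from the thread): for N coins, the macrostate “k heads” contains C(N, k) microstates, and the balanced macrostates utterly dominate the count.

```python
from math import comb, log2

# For N coins, a "macrostate" is the number of heads k; its number of
# microstates is C(N, k).  Boltzmann entropy is the log of that count.
N = 100
for k in (0, 10, 50):
    print(k, comb(N, k), round(log2(comb(N, k)), 1))

# All-tails has a single microstate (entropy 0), while the balanced
# macrostate k = 50 has on the order of 1e29 microstates (~96 bits), so a
# randomly chosen microstate is overwhelmingly likely to look high-entropy.
```

    This is the purely statistical part of the story; it says nothing by itself about why the universe started far from the dominant macrostates.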

  • Just Learning

    I still think people get confused about some simple points.

    Entropy is a measure of the amount of information needed to describe the microstate of a system (ie, the greater the entropy, the greater the amount of information to describe the exact microstate of the system).

    It is also a measure of uncertainty in that if I have a variable X that can take some value, if there is a higher probability of picking one particular value over the others, then I have less uncertainty and therefore less entropy. If I am equally likely to pick any value, then uncertainty is very high, and in fact has been maximized, and equivalently, so has the entropy.
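    That uniform-distribution-maximizes-uncertainty point is easy to verify numerically (a generic Shannon-entropy sketch, nothing specific to the standard-model discussion that follows):

```python
import numpy as np

def shannon_bits(p):
    """Shannon entropy in bits of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 = 0 by convention
    return max(0.0, float(-(p * np.log2(p)).sum()))  # clip tiny round-off

print(shannon_bits([1.0, 0.0, 0.0, 0.0]))            # 0.0: certain outcome
print(round(shannon_bits([0.7, 0.1, 0.1, 0.1]), 3))  # 1.357: biased, less uncertain
print(shannon_bits([0.25, 0.25, 0.25, 0.25]))        # 2.0: uniform = log2(4), maximal
```

    For a variable with n outcomes, the entropy ranges from 0 (one outcome certain) up to log2(n) (all outcomes equally likely), which is the sense in which uniform probability maximizes both uncertainty and entropy.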

    Of note this has some interesting implications in terms of our use of numbers in the real world, where because we favor small numbers in almost all human endeavors, the typical entropy of any set of random variables is small.

    In the physical world, what is particularly interesting is that if all particles can be described by the fundamental particles of the standard model (ie states of the standard model), and we place the additional constraint that the number of observable particles in the universe is always countable, then we can understand that the part of the universe we can observe at our energy scale is always in a state of low entropy, since any random observable particle is much more likely to be in its most stable state.

    Does this mean that entropy of the universe is decreasing as the universe cools?

    The answer is no because of the expansion of space and the intrinsic particle content of the vacuum itself, so although entropy of (relatively) strongly interacting observable particles might be decreasing, entropy of the relatively weakly interacting unobservable particles is steadily increasing.

    Granted this description ignores classical notion of particle position and momentum, however, even in QFT, where the number and type of particles are more important than classical variables, we still see that observable particles must have a lower entropy than the virtual particles that occur in loop diagrams that are responsible for vacuum polarization, simply due to natural constraints on observable particle momentum and center of mass position.

    Hopefully these comments will stir some thoughts.

  • weichi

    Just Learning,

    What are the specific “confusions” you are trying to address? I find your post interesting, but don’t see its relevance to the rest of the thread. What am I missing?

    Are you perhaps arguing that in the absence of expansion of the universe, entropy would actually be decreasing?

    “then we can understand that the part of the universe we can observe at our energy scale is always in a state of low entropy, since any random observable particle is much more likely to be in its most stable state.”

    So are you saying that 1 m^3 of empty space has higher entropy than 1 m^3 of empty space that also contains a gas of protons? I would have thought that the space with the protons has higher entropy … just because you add some protons, you shouldn’t change the possible states for virtual particles in the vacuum, should you? Or am I being really dumb here? I only know a little baby QFT, so the latter is quite possible.


  • Just Learning

    No, because there is no equivalency between 1 m^3 of empty space and 1 m^3 of empty space with a gas of protons in the way you stated the question. However, if you restate the question by saying that the two systems must have equivalent energy, then the space without protons would have higher entropy since it isn’t constrained to define a certain number of degrees of freedom as protons.

  • Arrow

    The low initial entropy state of the Universe can be explained by everyone’s favorite anthropic principle – it takes a lot of different macroscopic states to have observers such as us evolve.


  • Leonardo

    “I wasn’t thinking of the H theorem. You don’t need that theorem to argue that entropy increases, *if* you begin with the assumption of a low-entropy initial (not final) condition.”

    Dr. Carroll,

    Could you please elaborate more or give a reference that explains the above statement?

    At some point one would need to connect the microphysics with entropy, and that is done via the Boltzmann H theorem, which tells us that the Boltzmann entropy has the right concavity for thermodynamics. I really can’t see how one could argue that S can only increase — i.e. derive the 2nd law of thermodynamics — with just a boundary condition. What I thought you were arguing before is that if one assumes both, then the problem is solved.


  • David Coule

    Just a few quick remarks to add to the mix.

    Entropy was found to be a useful quantity in classical thermodynamics and later extended to classical general relativity, where in some sense black holes are the most entropic states.

    To answer Lorenzo, unitarily evolving quantum mechanics is only valid in idealized conditions: we are no more entangled systems than we are Bose-Einstein condensates or coherent states. That is why the quantum computer is so difficult to make.

    It might be the case that if everything were quantum mechanical you could take the von Neumann entropy to be zero (ignoring how to include the measurement process and quantum uncertainty), but then entropy wouldn’t be a useful concept.

    When we have a quantum gravity theory it will be interesting to see how the
    entropy concept will be modified or some other quantity replaces it to
    explain the arrow of time.

    Incidentally, when you first introduce the concept of entropy increase you say it corresponds to “loss of information”, i.e. you knew all the perfume was in the bottle, then it spreads all over the room. Hawking’s original calculation of evaporating black holes is consistent with this: entropy increases and information is lost. I think people who claim information is not lost in black hole evaporation are likewise emphasizing quantum aspects where entropy appears trivially constant – but they nearly all want to still keep the entropy increasing during the evaporation process, which seems inconsistent with our concepts gleaned from classical systems.

  • jr

    If we consider cell division as generating binary trees in SpaceTime
    it would be rather odd to consider entropy and the special character
    of the initial gravitational state (Penrose). The ‘arrow’ is in the direction
    of more cells – if one is drawing a diagram – but we are not inclined to
    think that cells might go backwards in time, and perhaps become anti-cells.
    The diagram should not indicate that time is a degree of freedom where we can
    play with the direction of the arrows. We do not see the film run backwards and
    claim it is just as valid as running forwards, as it would with planetary orbits.
    The question is whether the Minkowski structure of spacetime says all that
    needs to be said about the phenomenon of time – can it be completely reduced to
    geometrical structure? As long as there is scattering we need an operator, and
    there is one operation after another, so we connect the arrows to make a
    world line of an object. So there is something in addition to the geometry, and
    more complicated than space and time being on the same footing, and thus
    puzzling about an ‘arrow of time’ . They are on the same footing as far as comoving
    frames are concerned but the ordering of events along a world line is an ordering
    of operators acting on the object – possibilities turning into actual events – and
    we draw the arrows to indicate the sequence of operators. It does not seem to be
    necessary to invoke increasing entropy to impose an order that nature otherwise
    does not care about.

  • SFJP

    Just my two cents in this very very interesting thread.

    I believe the disagreement between Sean and Lorenzo is not just about “subjective” versus “objective” understanding of entropy. It might be much more on what we call time.

    Sean seems to view time as an objective geometrical dimension and simply asks why we have entanglement/correlations only in one direction, call it past to future. Then logically he sees Lorenzo’s paper as solving nothing.

    Lorenzo might have a more timeless approach, where time is a subjective construction of observers related to memory increase. He then explains that any increase of memory is related to entanglement, and that disentanglement cannot be memorized, so is never observed. Time as we live it is just a set of points/instants in a timeless set of states which is ordered and made subjectively continuous by a uniform increase of entanglement/memory. In this view, Lorenzo’s explanation is then a real advance in our understanding of the continuous ordering of the set of instants by the identification he makes between entanglement and memory increase.

    Much to think about anyway!

  • Louis Savain

    The arrow of time is still a puzzle because Thomas Kuhn was right. It takes major revolutions in science to change the minds of scientists. Many intelligent people (e.g., Karl Popper, Joe Rosen, etc.) understand time. They know that time is abstract. They know that it does not exist and that nothing can move or change in spacetime. They know that time travel is hogwash because time is not a variable, that time cannot change. They know that spacetime is a myth and that the arrow of time is an oxymoron.

    This is the reason that Popper compared Einstein to good old Parmenides who, along with his famous pupil Zeno of Elea, promoted the idea that change is impossible, contrary to their own observations. In Conjectures and Refutations, Popper called spacetime “Einstein’s block universe in which nothing happens.” Nobody in the physics community ever dared to contradict Popper because they knew he would tear them a new one. I am not making this up. Check it out for yourself.

    Conjectures and Refutations:

    Nasty Little Truth About Spacetime:

  • Dieter Zeh

    I noticed this discussion a few days ago, after I had been asked by a popular science journal what I think of the paper. I like its attempt to use universal quantum concepts throughout, but I do not agree with most of its conclusions. The journalist had asked me a few specific questions, the first three of which were: 1. What’s new about the work? 2. Does it get us any closer to solving the arrow of time dilemma? 3. Do you think quantum mechanics can help us resolve the arrow of time dilemma? Since I don’t like to criticize somebody anonymously, I shall here repeat my comment as a whole (some of it is equivalent to what has already been said in this discussion). My formulation in the first sentence may sound a bit harsh (sorry, Lorenzo), but I intended to inform the journalist of my true opinion:

    I have re-read the paper by Lorenzo Maccone, and this has confirmed my first reaction: “How could this paper ever be accepted by PRL?” Since asymmetric facts are perfectly compatible with symmetric laws, there is actually no real dilemma or paradox. On a closer look, the paper is a bit tricky, though, and this may have confused the referees. The main conclusion (as I understand it) is that entropy only SEEMS to increase, since we cannot remember entropy-lowering phenomena. However, this explanation of the arrow of time – if true – would already presume an arrow to apply to our “historical brain” (memory of the past only).

    The essential “rigorous” thought experiment assumes that Alice’s lab is perfectly isolated from the arrow of the external world. This is unrealistic because of decoherence that must affect a macroscopic Alice, as the author admits at some point. A similar perfect isolation has been discussed for interference (two-slit) experiments with conscious objects: in order to show interference phenomena, these objects must forget their passage through the slits. So this part of the argument is unrealistic although consistent, even in the presence of external observers who register the experiment and who don’t have to forget anything that was measured. The second part of the claim, namely that phenomena with decreasing entropy cannot be remembered by external observers, is therefore unjustified.

    The author says in the first paragraph that irreversibility has been claimed to arise from decoherence. This is wrong: decoherence is an irreversible process that REQUIRES an arrow of time. In fact, he remarks at the end of page 3 (published version?) that correlations “build up” continuously, thus leading to decoherence.

    The entropy considerations in connection with Equ. (2) are not even specifically quantum. The sum of the classical entropies of all subsystems is in general higher than that of the total system, if the latter were calculated from a statistical ensemble of its states. For example, Boltzmann’s entropy is DEFINED precisely from the independent-particle distribution, that is, by neglecting all correlations, and its increase can be described deterministically as the transformation of information about the particles (negentropy) into information about correlations. Would you say that entropy appears to increase only because we cannot remember cases in which all particles hurry to concentrate in one corner of the vessel, or in which heat flows from the cold to the warm? On the other hand, entropy fluctuations in small systems can be well observed and remembered. The formalism for the dynamics of statistical correlations is precisely the same as for quantum entanglement.

    Well – this paper may be a bit mind-boggling, but I don’t think it is serious science. So my answer to your first two questions is negative. The answer to the third question is a partial “yes”: Quantum theory must be essential to correctly formulate the solution of the problem (probably as an appropriate cosmic initial condition for the universal wave function).

  • Just Learning

    Any evolution involves a process of steps that must be followed precisely in order to reach the exact current state. This means that the entropy of the string of variables that describes the evolution of a random process is extraordinarily high, i.e., there is a very high information demand in order to describe the history of a highly evolved system.

    To say that I have erased memory seems to be saying that a certain number of variables in a string of variables have been removed. This reduces the length of the variable string, and thus its history. This would imply that a shorter string of variables represents a shorter history.

    In this construction, the arrow of time is essentially related to the length of a variable string. To compare two strings, make the number of digits equivalent by using zeros to lengthen the shorter string:


    The entropy of the first string is much higher than the second string. The first string also has a longer history than the second since we can effectively ignore the string of zeros in the second string. This suggests any process that can truly erase history does reduce entropy.

    However, suppose I write my variable chain like this:


    In this case, I have changed the initial conditions of the shorter string. In this situation I would argue that this third string is actually equivalent to the first string, since I am starting with an initial condition that is in fact highly evolved, and thus encodes an implied history (i.e., a string of variables of length J).

    What this suggests is that there is an implied history in our choice of initial conditions, and we cannot pick our initial conditions arbitrarily, since some initial conditions are equivalent to systems with high entropy.

    This means we can always time order our initial conditions.
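    The string-entropy comparison in this comment can be made concrete. Below is a minimal sketch (the example strings are invented for illustration, since the comment’s original strings are not shown) that computes the empirical Shannon entropy per symbol of a “highly evolved” string versus a short string padded with zeros to equal length:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Empirical Shannon entropy, in bits per symbol, of a string."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

# Invented example strings (the comment's originals are not shown):
# a long "highly evolved" record vs. a short record lengthened with zeros
# so that both strings have equal length.
evolved = "31415926535897932384"  # many distinct, mixed symbols
padded  = "12300000000000000000"  # short history, padded with zeros

print(shannon_entropy(evolved) > shannon_entropy(padded))  # True
```

    On these examples the per-symbol entropy of the evolved string is substantially higher than that of the zero-padded one, in line with the comment’s claim that padding with zeros adds length but not history.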

  • Scott W. Somerville

    Dear Sean:

    Ken, at Open Parachute, is a big fan of yours. He suggested I ask you the following question, and referenced this thread. Here’s the question I have asked at “Physics Forums”

    How would particles behave if “unobserved” systems could go backwards in time as well as forward? I label this notion the “pendulum of time.” In essence, a system of entangled entities would undergo some sort of “random walk” in time and space, with no more preference for a path “forward” in time than for one going “eastward” in space.

    I imagine that such a system would “oscillate” in time and space until something caused it to decohere. After decoherence, the system would begin to oscillate again, moving forward in time and then back to the moment of decoherence, like a pendulum that swings back and forth and side to side, but always goes past the bottom of the swing, which, in this model, is the previous moment of decoherence. The “pendulum” could not revert to a point in time BEFORE the decoherence, but it could “explore” all physically possible paths AFTER it. (Sorry to use English to describe something that would be unambiguous as a mathematical formula, but that’s why I’m here asking for help.)

    My core question is whether such a “pendulum of time” would be consistent with double-slit experiments. As I understand the findings of modern physics, a particle passing through two slits exhibits a wave-like interference pattern. I am guessing that an unobserved particle in a “pendulum of time” model MIGHT produce exactly the same interference pattern. An observed particle, by contrast, would decohere after a single “swing” and would not produce an interference pattern.

  • Sean

    Scott– Sorry for being away from the thread for a while. I’m not sure how to answer your question, because I don’t really know what it would mean. In particular, I don’t know what it means to “go backwards in time as well as forward.” In conventional physics, objects exist exactly once at every moment of time (if they exist at all). The “direction in which they go” is set by convention, although it’s often convenient to use the direction of increasing entropy to determine that convention. You would need a radical departure from conventional physics, and one that would need to be spelled out in much greater detail, to make sense of the notion of “oscillating in time.”

  • jr

    perhaps the secret to time is that particles are oscillators. Velocity is not the only
    way to meld space and time. Without particles, all we have is geometry – hence
    the question of why isn’t time measured with a ruler? If algebra defines both
    spacetime and oscillators we would be in business – we would not want to put in
    this sort of stuff by hand. Then we might sharpen exactly what we mean
    by an ‘arrow of time’.

  • Joseph J

    Maybe a lesser mind and a keener eye is needed. Time reversal at the quantum scale may be an illusion. I am not familiar with the aforementioned experiment, but the quantum state is a small (pun intended) part of the real world, which is not reversible. When two cars pass each other going in opposite directions, their arrow of time only appears to be reversed. Both are traveling into the future even though they are traveling in different directions.

  • Scott W. Somerville

    Sean, I’m a layman who can’t express myself mathematically, but let me try to clarify what I mean by “go backwards in time.” It doesn’t need a radical departure from conventional physics–just a willingness to get serious about treating time the same as space.

    Start with an extremely simplified “unobserved system” that consists of a single particle moving at some initial velocity towards two slits, with some kind of screen on the other side that can detect the particle. Instead of treating time as the independent variable and three spatial dimensions as dependent variables, specify a new parametric variable “p.” The particle’s position and velocity are defined at p=0. Let the particle’s position and velocity at p=1 be randomly selected from all possible states within the limits of Heisenberg’s uncertainty principle, and then repeat in a “random walk” pattern without any preference for increasing t. Continue this until something causes the system to “decohere.”

    If we were to plot the path of this parameterized particle in x, y, and t, we would see a “fractal” shape – I think it would look a bit like an old-fashioned shaving brush.

    As far as I can tell, this method would generate the interference patterns one sees in a double-slit experiment. In a system with a detector on one of the slits, the system would decohere the first time the p parameter randomly got the particle out to the detector. The moment of decoherence would reset the system with the particle now located near one slit, and all possible paths of that particle would now wind up on the screen at the far end looking just like a single particle going through a single slit.

    By contrast, a system without a detector at one of the slits would yield an infinite number of different paths from the original starting point to the screen at the far end – but those paths would show just the kind of interference patterns that make double-slit experiments so interesting.
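    The parametric random walk described in this comment can be turned into a toy simulation. The sketch below is only one possible reading of the proposal (the step sizes, the reflection rule at the last decoherence event, and the reduction to a single time coordinate are all my own invented simplifications): the time coordinate t takes unbiased steps as the parameter p advances, never revisiting moments before the previous decoherence, until the walk first reaches the detector.

```python
import random

def pendulum_walk(t_detector=10, max_steps=100_000, seed=0):
    """Toy 'pendulum of time': the particle's time coordinate t performs an
    unbiased random walk as the parameter p advances, reflected at t = 0
    (the previous decoherence event), until it first reaches the detector."""
    rng = random.Random(seed)
    t, path = 0, [0]
    for _ in range(max_steps):
        t = max(t + rng.choice((-1, +1)), 0)  # cannot revert before decoherence
        path.append(t)
        if t >= t_detector:  # detection = new decoherence; the walk ends
            break
    return path

path = pendulum_walk()
# The walk wanders back and forth in t before finally hitting the detector,
# so len(path) is typically much larger than t_detector.
print(path[-1], len(path))
```

    Whether such a walk reproduces two-slit interference, as the comment asks, is exactly the open question; this sketch only illustrates the “oscillating in t, bounded below by the last decoherence” kinematics.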

  • Scott Somerville

    The folks at Physics Forum have provided a link to this article on Time Symmetry which LOOKS like it’s working in the same general area. They don’t characterize the physics as a “pendulum of time,” but they’re asking the same questions and getting to some of the same answers I’ve been groping towards. The problem is, they’ve written 58 pages of math I can’t follow. Can you decipher this for us laymen, Sean? (Maybe a whole new post, hint, hint?)

  • DaveK

    @Louis Savain: You might want to check your sources there. Granted, this is an “ad-hominem” attack, but from the website, you might want to check out this link:

    Also this one:

    Who needs Einstein when you’ve got Isaiah?

  • Aaron Bergman

    Louis Savain?!?

    Wow, that brings back spr flashbacks….

  • Pingback: The question of the arrow of time « The Gauge Connection

  • Marco Frasca

    About the question of the arrow of time, I have always wondered why a result as beautiful as the one due to Elliott Lieb and Barry Simon has always been overlooked. This is a theorem implying that quantum mechanics manifests an instability as the number of particles goes to infinity, with the system losing quantum coherence. The situation devised by Lieb and Simon is similar to the one seen in thermodynamics, and so it really takes effect as quantum fluctuations become smaller and smaller with an increasing number of particles. This is not always the case, and indeed one sometimes observes large-scale coherence.

    Such an instability seems to support Lorenzo’s view, and what Zeh above claimed to be essential for understanding the arrow of time, namely quantum mechanics. Indeed, studies of the Loschmidt echo can also provide experimental support for the Lieb–Simon theorem, even if, in this particular case, there are other competing views worth pursuing.

  • boreds

    Sean, thanks for your response, and I will check out that chapter when I can.

    For what it’s worth, I think the connection between low entropy initial conditions for the universe, and the sense of being able to fix initial conditions for local experiments would make for an awesome blog post.

    I’ve no problem with the mystery of the low entropy early universe—it’s a mystery and something maybe we can figure out. But why do local experiments have the same arrow of time? I think it’d be a fun blog post to explore that and explain where there are gaps.

  • jr

    it is interesting that nature provides unstable particles, and we want
    to create before we annihilate – that recognizes an arrow of time.

  • r721

    There’s a new paper on the subject:
    Comment on `Quantum resolution to the arrow of time dilemma’

    Recently, a substantial amount of debate has grown up around a proposed quantum resolution to the `arrow of time dilemma’ that is based on the role of classical memory records of entropy-decreasing events. In this note we show that the argument is incomplete and furthermore, by providing a counter-example, argue that it is incorrect. Instead of quantum mechanics providing a resolution in the manner suggested, it allows enhanced classical memory records of entropy-decreasing events.



Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Here are some of his favorite blog posts, home page, and email: carroll [at] .

