From Eternity to Book Club: Chapter Eight

By Sean Carroll | March 2, 2010 8:54 am

Welcome to this week’s installment of the From Eternity to Here book club. Finally we dig into the guts of the matter, as we embark on Chapter Eight, “Entropy and Disorder.”

Excerpt:

Why is mixing easy and unmixing hard? When we mix two liquids, we see them swirl together and gradually blend into a uniform texture. By itself, that process doesn’t offer much clue into what is really going on. So instead let’s visualize what happens when we mix together two different kinds of colored sand. The important thing about sand is that it’s clearly made of discrete units, the individual grains. When we mix together, for example, blue sand and red sand, the mixture as a whole begins to look purple. But it’s not that the individual grains turn purple; they maintain their identities, while the blue grains and the red grains become jumbled together. It’s only when we look from afar (“macroscopically”) that it makes sense to think of the mixture as being purple; when we peer closely at the sand (“microscopically”) we see individual blue and red grains.

Okay cats and kittens, now we’re really cooking. We haven’t exactly been reluctant throughout the book to talk about entropy and the arrow of time, but now we get to be precise. Not only do we explain Boltzmann’s definition of entropy, but we give an example with numbers, and even use an equation. Scary, I know. (In fact I’d love to hear opinions about how worthwhile it was to get just a bit quantitative in this chapter. Does the book gain more by being more precise, or lose by intimidating people away just when it was getting good?)

In case you’re interested, here is a great simulation of the box-of-gas example discussed in the book. See entropy increase before your very eyes!
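If you’d rather tinker than just watch, here is a minimal sketch of the same toy model in Python (this is not the applet’s actual code, just the usual two-sided-box setup): at every step one randomly chosen molecule drifts through the hole, the macrostate is simply the number of molecules on the left, and the entropy is S = log W, where W counts the arrangements compatible with that number (Boltzmann’s constant set to 1).

```python
import math
import random

N = 2000       # number of gas molecules in the box
n_left = N     # start with every molecule on the left: a very low-entropy macrostate

def entropy(n):
    """Boltzmann entropy S = log W (with k_B = 1), where W = C(N, n) counts
    the microstates that have exactly n molecules in the left half."""
    return math.log(math.comb(N, n))

for step in range(1, 20001):
    # one randomly chosen molecule passes through the hole each step
    if random.random() < n_left / N:
        n_left -= 1     # it was on the left, now it's on the right
    else:
        n_left += 1     # it was on the right, now it's on the left
    if step % 2000 == 0:
        print(f"step {step:6d}   left {n_left:5d}   S = {entropy(n_left):8.2f}")
```

Starting from S = 0, the entropy climbs toward its maximum of log C(2000, 1000), roughly 1382, and then just jiggles around up there.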

Explaining Boltzmann’s definition of entropy is actually pretty quick work; the substantial majority of the chapter is devoted to digging into some of the conceptual issues raised by this definition. Who chooses the coarse graining? (It’s up to us, but Nature does provide a guide.) Is entropy objective, or does it depend on our subjective knowledge? (Depends, but it’s as objective as we want it to be.) Could entropy ever systematically decrease? (Not in a subsystem that interacts haphazardly with its environment.)

We also get into the philosophical issues that are absolutely inevitable in sensible discussions of this subject. No matter what anyone tells you, we cannot prove the Second Law of Thermodynamics using only Boltzmann’s definition of entropy and the underlying dynamics of atoms. We need additional hypotheses from outside the formalism. In particular, the Principle of Indifference, which states that we assign equal probability to every microstate within any given macrostate; and the Past Hypothesis, which states that the universe began in a state of very low entropy. There’s just no getting around the need for these extra ingredients. While the Principle of Indifference seems fairly natural, the Past Hypothesis cries out for some sort of explanation.

Not everyone agrees. Craig Callender, a philosopher who has thought a lot about these issues, reviewed my book for New Scientist and expresses skepticism that there is anything to be explained. (A minority view in the philosophy community, for what it’s worth.) He certainly understands the need to assume that the early universe had a low entropy — as he says in a longer article, “By positing the Past State the puzzle of the time asymmetry of thermodynamics is solved, for all intents and purposes,” with which I agree. Callender is simply drawing a distinction between positing the past state, which he’s for, and trying to explain the past state, which he thinks is a waste of time. We should just take it as a brute fact, rather than seeking some underlying explanation — “Sometimes it is best not to scratch explanatory itches,” as he puts it.

While it is absolutely possible that the low entropy of the early universe is simply a brute fact, never to be explained by any dynamics or underlying principles, it seems crazy to me not to try. If we picked a state of the universe randomly out of a hat, the chances we would end up with something like our early universe are unimaginably small. To most of us, that’s a crucial clue to something deep about the universe: its early state was not picked randomly out of a hat! Something should explain it. We can’t be completely certain that such an explanation exists, but cosmology is hard enough without choosing to ignore the most blatant clues that nature is sticking under our noses.

This chapter and the next two are the heart and soul of the book. I hope that the first part of the book is interesting enough that people are drawn in this far, because this is really the payoff. It’s all interesting and fun, but these three chapters are crucial. Putting it into the context of cosmology, as we’ll do later in the book, is indispensable to the program we’re outlining, but the truth is that we don’t yet know the final answers. We do know the questions, however, and here is where they are being asked.

CATEGORIZED UNDER: Time, Words
  • Philoponus

    Sean, suppose we did a log-normal graph of the values on your chart on page 152. The y axis is log W(k) and the x axis is the 2000 arrangements of the system. That would give us a nice symmetrical distribution with a y maximum at 600.3. The entropy of the system for every molecular arrangement (p,q) is a point on this S-distribution. We know the entropy of the system will fluctuate but almost always stay close to the maximum. Over some intervals of time entropy will almost certainly decline temporarily. That can’t be what the 2nd Law forbids. What it must forbid is any change in the shape (“moments”) of the S-distribution; in particular, the S maximum will never decrease. Is that right? So the entropy of a closed system refers to a distribution whose maximum (the 2nd Law says) can’t decline?


  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    It’s not that the 2nd Law prohibits fluctuations downward in entropy from equilibrium; Boltzmann’s picture predicts that such fluctuations will certainly happen, as will be discussed in Chapter Ten (p. 212). The 2nd Law is not any sort of statement about fluctuations around equilibrium; it’s the statement that if you start with entropy much lower than equilibrium, the entropy is overwhelmingly likely to increase, as illustrated in Fig. 43. Plus, of course, the extra ingredient that our universe actually has an entropy much lower than equilibrium, which follows from the Past Hypothesis.
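    To put rough numbers on the difference between the two kinds of statement, here is a back-of-the-envelope comparison in the same toy box-of-gas model (k_B = 1; the square-root-of-N size of a typical fluctuation is just the standard estimate):

```python
import math

N = 2000                                     # molecules in the toy box of gas

def S(n):                                    # entropy of the macrostate "n molecules on the left"
    return math.log(math.comb(N, n))

S_eq = S(N // 2)                             # equilibrium: evenly split
S_fluct = S(N // 2 + int(math.sqrt(N)))      # a typical thermal fluctuation (~sqrt(N) molecules)
S_start = S(N)                               # all molecules on one side (W = 1, so S = 0)

print(f"equilibrium entropy:          {S_eq:7.1f}")
print(f"typical fluctuation dips by:  {S_eq - S_fluct:7.1f}")   # of order one
print(f"all-on-one-side start:        {S_start:7.1f}")          # enormously far below equilibrium
```

    Downward wiggles of order one happen all the time; spontaneously finding yourself a thousand units of entropy below equilibrium essentially never does. The Second Law, in the sense relevant here, is a statement about the second situation, not the first.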

  • http://www.7duniverse.com Samuel A. (Sam) Cox

    The mixed sand analogy is really good. One thing becomes another when we change our frame of reference. The purple color is a product of observing mixed sand from a distance…a very simple but profound idea.

    I think it is worth noting that everything in this illustration is couched in space and time- entropy included. Mixing requires time and the mixture must be observed in a certain way for it to be “purple” (it really is purple, of course…from that frame and observed electromagnetically).

    This is really quite a book! By the way, I don’t agree that attempting rational (and workable) explanations for what we find in our reality is quite the equivalent of “scratching an itch”.

    What is, is (not an insignificant tautology). Mankind has been coping or attempting to cope with “what is” since mankind became mankind. All life adapts.

    Striving to understand the universe and building technology on an increasingly complete understanding of “what is” is a challenging and amazing journey…a journey even the most “primitive” humans instinctively engage in.

  • Clifford

    It is hard for me to accept that the fact that the universe started in a low entropy state has any implications for my memories. Your argument is a very good read, but with all due respect, intuitively something must be wrong. My brain seems far too tiny relative to the mass of the visible universe.

    Moreover, my mind supervenes on my brain, which supervenes on neurons, which supervene on chemistry, which supervenes on physics. There are at least a few levels there. Each time we transition to another level, there is a different useful definition of entropy. I guess I am saying that for each level removed from fundamental physics, I would think that a robust fundamental definition of entropy becomes less and less significant, to the point of irrelevance.

  • Aaron Sheldon

    Aside from the conceptual difficulties with entropy there are a few mathematical issues.

    1. It is statistically tautological to say entropy will increase. In terms of the statistical definition of entropy one is saying that the most likely observation is the most likely observation.

    2. The standard definition of entropy does not have a well-defined continuum limit; one has to use the Kullback-Leibler divergence, which compares two distributions and so leads to messy interpretations as to the meaning of the reference distribution (see the sketch at the end of this comment).

    3. In the standard example of an ensemble of particles with a fixed energy, the time evolution leads to all the particles having nearly the same energy, which maximizes the entropy when counting quanta of energy assigned to each particle, but minimizes entropy when counting particles assigned to each energy state.

    These difficulties puzzle me.
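    To illustrate the second point numerically (a rough sketch assuming NumPy; the Gaussian samples and the uniform reference are just for illustration): as the binning gets finer, the discrete entropy −Σ p log p grows without bound, while the Kullback-Leibler divergence against a reference distribution settles down to a finite value.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=200_000)          # draws from a unit Gaussian

for bins in (10, 100, 1000, 10_000):
    counts, edges = np.histogram(samples, bins=bins, range=(-5, 5))
    p = counts / counts.sum()
    nz = p > 0
    H = -np.sum(p[nz] * np.log(p[nz]))            # discrete entropy: diverges as bins are refined
    q = np.full(p.shape, 1.0 / bins)              # reference: uniform distribution over the same bins
    D = np.sum(p[nz] * np.log(p[nz] / q[nz]))     # KL divergence: approaches a finite limit
    print(f"{bins:6d} bins:   H = {H:6.3f}   D_KL(p || uniform) = {D:6.3f}")
```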

  • Charlie

    Coming from biology, this reminds me of Origin of Life studies.

    Maybe it was an extraordinarily unlikely event.

    Maybe it involves specific (not yet known) steps that make it a little less unlikely.

    If you insist on the former, then you are done. No need to make any further hypotheses or do any science. If you insist on the latter, then you have a job trying to figure out what those steps might have been. Either might be true, but only the latter will (or might) lead to productive research.

    (Of course, spontaneous life now seems positively probable to me after reading about spontaneous universes.)

  • http://www.shaky.com Timon of Athens

    *Surely* there must be more to Callender’s position than just “some itches don’t need to be scratched”? He may well be a respected philosopher, but this idea is just so ridiculous that I feel the need to search for a deeper explanation of his position. Or should I not try to scratch that itch, and simply accept the fact that philosophers sometimes just like to make fools of themselves?

  • Pingback: From Eternity to Book Club: Chapter Eight « Thoughts About Changes In Time

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Clifford– If the universe hadn’t started out in a low entropy state, with overwhelming probability the universe would be close to equilibrium and you would be a random fluctuation. From that, everything else basically follows. We’ll talk a bit more in the next chapter.

    Aaron– It’s not a tautology to say that entropy will increase, because it’s not even true. I suspect that you are using a different definition of entropy than the one I’m using in the book (S = k log W). No definition is right or wrong, but this is the best one to use if you want to talk about the arrow of time in the real world.

    Charlie– You could imagine that the beginning of the universe was simply an unlikely event. The problem is, it’s far more unlikely than it would need to be for any anthropic (or other known) criterion. So some extra explanation seems to be called for.

    Timon– You can read the longer paper, linked to in the post, and decide for yourself. I agree with Callender that there are likely to be some brute facts about the universe that don’t have a “deeper” explanation. I just disagree that the low entropy of the Big Bang is one of them, as does almost every other philosopher who has written on the subject.

  • Ray

    I think I agree with Clifford that intuitions about memory need a more detailed explanation than “you couldn’t have an asymmetric memory if entropy wasn’t low in the past,” which is how I read Sean’s response. I’d like to see more along the lines of explaining how entropy is sufficient, not just necessary, to explain all the salient features of memory that we observe.

    Some subtleties that I think need to be addressed include:
    1) What do we really mean when we say that we remember the past but not the future? How is remembering the past different from using science to predict the future (weather forecasts, predicting eclipses, etc.)?

    2) It would be nice to think of memories as a degraded remnant of structured information about events in the past, and then to conclude that since there is more structure in an event than a memory of that event, turning the event into a memory would be an increase in entropy, but turning the memory into an event would not. However, we know that the cooling and expansion of the universe does allow structure (nonrandom physical information) to increase even as entropy (random physical information) increases.

    The closest thing I, personally, can come up with to an explanation — in an expanding universe, entropy considerations do not forbid a computer running a Laplace demon type program, computing the history of the universe as a whole, but they do forbid the computer from calculating the state of the universe before that state is realized by the universe itself. And, I guess you can sort of think of this computer as a super-memory since it keeps a *complete* record of the past down to the smallest atom.

    Oh well. I do agree with Sean’s general point that the reason the past is different from the future ultimately boils down via entropy to the cosmological arrow of time. But filling in the details is harder than it looks. I think a full explanation will need to give a description of what our memory is really doing, explain how that process looks different in reverse, explain why the reverse is physically impossible or improbable, and explain how the forward direction is both possible and a probable consequence of Darwinian evolution.

  • Aaron Sheldon

    The definition is the standard generalization: -sum p log p, which is additive over dimensions and is maximized when the variance is maximized, for countable distributions that have a finite second moment. If you are microstate counting then the value of the bulk property with the most microstates is the most likely, especially when all the microstates are equally weighted.

    One has to be exceedingly careful when making claims about measures (distributions) on infinite dimensional spaces, especially when one is making a claim based on a limit of finite dimensional spaces. In an infinite dimensional setting things like norms (distances, topologies) will not agree with things like measures (volumes, probabilities) the way they do in finite dimensional spaces.

    Were the other two points dismissed outright?

  • Graham

    What is the entropy of antimatter? I seem to remember Feynman saying that antimatter was like regular matter reversed in time. So is its entropy “arrow” consistent with time going forward or backward? What does the 2nd law say about it?

  • http://www.astro.multivax.de:8000/helbig/helbig.html Phillip Helbig

    The question as to why the universe started in a low-entropy state is probably the most important question in science right now. Many folks have picked up on it in the last 10 years or so, but Penrose was pointing this out more than 30 years ago.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Ray– We should wait until the next chapter, when we talk about memory in a bit more detail. But not that much detail, I admit. However, the expansion of the universe doesn’t have anything to do with it; the space of allowed states doesn’t expand, and that’s what matters.

    Aaron– I’m not using that definition, since there’s no need to for any of what I’m discussing. We proceed by assuming that the universe is in some particular microstate, even if we don’t know what it is, and the entropy is a property of the macrostate to which it belongs. Also, none of the relevant spaces are infinite dimensional. (Even if we’re doing quantum mechanics, you can describe what happens within a comoving patch of space with a finite-dimensional Hilbert space.) So these issues just don’t apply.

    Graham– There’s not really any difference between antimatter and any particular species of matter, as far as entropy is concerned. Also, there’s very little antimatter compared to matter in the observable universe.

  • Jason A.

    Looking at your graph on page 177 with the low entropy spike, my immediate thought is that spike would correspond to the big bang being a statistical fluctuation. You write about how we can’t accept such a spike because our memories would be unreliable, but I don’t see how that’s a problem if our memories are only unreliable ‘before’ the big bang. We could still make sense of everything after the big bang.
    You allude to talking more about this in the next chapter, which I haven’t got to yet, so if the answer is ‘keep reading’ then that’s fine.

  • CF

    “If we picked a state of the universe randomly out of a hat, the chances we would end up with something like our early universe are unimaginably small.”

    This has always puzzled me, as I don’t see why, given the big bang, such a state is seen as so unlikely. Certainly, if we leave the big bang out of it, and just consider all possible states for the early universe, the chances of something like our early universe being selected are extraordinarily small. But would it be possible for the big bang to not have had low entropy? To my mind, if it didn’t, the big bang would no longer resemble anything like a big bang. That is, it seems to me that given the big bang, laws of physics and constants, such a low entropy state is to be expected. The more pertinent question then becomes: how did the big bang itself come about?

  • drm

    Re Aaron Sheldon’s third point, isn’t it just a matter of there being many more quanta than particles? Or am I not understanding the problem.

  • Metre

    Some thoughts (questions?) on the “principle of indifference”. In a gas, the forces between particles are negligible, so the principle seems intuitively OK. In a system of gravitating particles, however, the attractive force seems to give preference to states with lower net potential – i.e. states that are more collapsed toward the center of mass. Hence it seems the principle does not apply to gravitating systems.

    Even in a gas, a spread-out configuration seems more probable than a concentrated one (all particles in one corner of the box) if for no other reason than the mean free path is larger, so the number of collisions is reduced. When you try to concentrate the particles into one corner, the mfp becomes small and the number of collisions goes up, making such a configuration less likely than a more spread-out one?

  • Ray

    “the space of allowed states doesn’t expand, and that’s what matters.”

    I’m confused by this sentence. Did you mean to say the opposite, or are you accusing me of assuming some kind of non-unitarity?

    Anyway, I didn’t mean to imply that the entire space of states for the universe is expanding (this probably doesn’t even make sense, since the universe in the broadest sense is probably infinite.)

    I meant that in a given Hubble volume there are more and more allowed microstates whose macrostate is consistent with the general story: “the universe has been expanding since the big bang, and the initial fireball didn’t contain any large black holes.” (Large black holes in the early universe would foil the Laplace Demon program, since there’s no way for the demon to know what’s inside of them until they evaporate — which takes a long time.) The point is that this simplifying assumption restricts the space of allowed states more, the closer you get to the big bang.

    As far as the expansion of the universe being important, I suppose the Laplace demon argument would still work if the simplifying assumption was something else, but in our case it does seem that it ultimately derives from big-bang cosmology — so I don’t see why it isn’t relevant. Didn’t you have an entire chapter on it?

  • Aaron Sheldon

    I’ll see your finite Hilbert space, and raise you a trump card of: the classical observables of momentum and position do not exist on finite Hilbert spaces. Actually, technically their Lie commutator cannot be unital (take the trace of the commutator, zero on one side, a constant on the other).

    So if you are not working with the entropy and energy of momentum and position, then what are you working with?
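    The trace argument is easy to check directly (a quick sketch with NumPy, using random matrices as stand-ins for would-be position and momentum operators):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6                                    # any finite dimension will do
X = rng.standard_normal((d, d))          # stand-in for a would-be position operator
P = rng.standard_normal((d, d))          # stand-in for a would-be momentum operator

print(np.trace(X @ P - P @ X))           # ~0 up to rounding: Tr[X, P] = Tr(XP) - Tr(PX) = 0
# The canonical relation [x, p] = i*hbar*1 would instead require a trace of i*hbar*d,
# which is nonzero, so no pair of finite-dimensional matrices can satisfy it exactly.
```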

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Jason A.– Actually that issue is covered in great detail two chapters from now (Ch. 10). A very low spike could be the Big Bang, but the probability would be enormously greater that we would live in a much smaller spike.

    CF– You could very (very) easily have had a Big Bang with much higher entropy. It would have been extremely inhomogeneous, not at all smooth. More later on this, as well.

    drm– It’s hard to think of the physical system describing the universe as being “at fixed energy” when we take gravity into account. See previous post!

    Metre– The existence of gravity changes the way you would naively count states. When things are bunched together, there are actually more states of that form than if things were spread randomly. That’s not completely surprising; a similar thing happens in oil and water, where there are more states when the two liquids are separate than when they are fully mixed.

    Ray– Yes, I’m accusing you of non-unitarity. Of course “there are more and more allowed microstates whose macrostate is consistent with the general story,” if by “the general story” you mean the kind of evolution we actually observe — that’s just a restatement of the reality of the Second Law. But it’s not right to exclude highly inhomogeneous states by fiat, or just because information would be hidden behind horizons. This is Part Four kind of stuff, but the underlying assumption is that the full evolution is completely unitary, even when gravity is taken into account. So for every possible microstate in the current macrostate of the universe, there is exactly one microstate of a much denser (higher Hubble parameter) universe from which it could have evolved — that’s the content of “unitarity.” Most of them would have white holes and wild inhomogeneities. But even without such exotica, there are still a lot of very lumpy states that are inconsistent with the extreme smoothness of the early universe as we find it.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Aaron– At this point I’m just working in a classical approximation, so it doesn’t matter. Of course behind that is some quantum model. If I used that language, obviously we wouldn’t talk about positions and momenta, but about wave functions.

  • Metre

    Like Ray above, I too was a bit confused by the statement:

    “the space of allowed states doesn’t expand, and that’s what matters.”

    Suppose I squeeze a gas into a small volume in a cylinder with a piston, then let it come to equilibrium. Then I pull the piston out rapidly (rapidly expanding the volume). The gas is now concentrated at the bottom and no longer in equilibrium, because the space of states (maximum allowable entropy) has increased. The gas will expand into the volume until it comes to a new equilibrium at a much higher entropy. By pulling the piston out rapidly, I did not change the entropy of the gas, but I changed the maximum allowable entropy, so the actual entropy was now low wrt the new maximum.

    The universe was initially squeezed up into a singularity (or something close) like the gas in the piston at maximum entropy for that configuration. The big bang acted like a sudden pulling out of the piston, rapidly increasing the maximum allowable entropy. The actual entropy of the universe didn’t change, but it was no longer at maximum; it was low wrt the new maximum. Obviously, your statement disagrees with this view. I haven’t read the final chapter of the book yet, so I don’t know what your answer is.
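    To attach rough numbers to the piston version (a sketch for an ideal gas; the factor-of-ten expansion is an arbitrary choice):

```python
import math

R = 8.314            # gas constant, J/(mol K)
n = 1.0              # one mole of ideal gas in the cylinder
volume_ratio = 10.0  # piston pulled out to ten times the original volume

# Pulling the piston out suddenly leaves the gas entropy unchanged, but the maximum
# (equilibrium) entropy available to it goes up by n*R*ln(V2/V1); the gas then climbs
# that gap as it spreads out to fill the new volume.
entropy_gap = n * R * math.log(volume_ratio)
print(f"gas sits about {entropy_gap:.1f} J/K below its new maximum entropy")   # ~19 J/K
```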

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Metre– That’s right about the piston, but only because it’s an external influence, not part of the system itself. The same logic doesn’t apply to the Big Bang, because the expansion of the universe is governed by the metric, which is itself a dynamical degree of freedom. You have to take gravity into account when counting the states.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    But, I should add: see this recent paper by Brian Greene and collaborators for an attempt to take advantage of exactly the kind of logic you are using. (They need to introduce external parameters.)

  • lemuel pitkin

    Here’s something that’s been puzzling me. You write,

    “If we picked a state of the universe randomly out of a hat, the chances we would end up with something like our early universe are unimaginably small. To most of us, that’s a crucial clue to something deep about the universe: its early state was not picked randomly out of a hat!”

    And then you write,

    “A very low spike could be the Big Bang, but the probability would be enormously greater that we would live in a much smaller spike.”

    So it seems that the same logic — preferring higher probability to lower probability cases — that leads you to reject the idea of the Big Bang as a brute fact, should also lead you to believe that we are in fact living in a smaller spike: if not a Boltzmann brain, then a Boltzmann solar system or galaxy or Hubble volume, and some future observation will reveal thermal equilibrium outside. Obviously you don’t believe this; no one does. But that implies the preference for high-probability cases is not absolute; there’s some other principle that trumps it.

    So my question is: Why is the principle of preferring high-probability cases strong enough to make you confident that the Big Bang is not a brute fact, but not strong enough to make you believe that you are living in a Boltzmann bubble?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    That would be true, if the correct scenario of the universe were that we were fluctuating around an equilibrium state. My strategy is to reject that whole scenario, and look for one where environments like ours arise with high probability (compared to other anthropically allowed environments).

  • Ray

    “that’s just a restatement of the reality of the Second Law.”

    Well, if it’s a choice between denying unitarity or restating the Second Law, I’ll opt for the latter. I disagree that I’m “just” restating the Law though. There are plenty of low entropy initial conditions that would not allow for the evolution of dynamical processes like human memory — say a perfectly symmetric arrangement of millions of Windows installation CDs whose velocities are aligned so as to collide and form a black hole at some point in the future. You’d get the second law out of this, but not much else.

  • Aaron Sheldon

    But it is the quantum world that gets us in hot water, so to speak. With the canonical broken tea cup, in classical mechanics we can know all the positions and momenta of the electrons, and then run time backwards, or if the system is properly closed, allow time to run forwards long enough, and voilà, the cup reforms. But in the quantum world not only can we not know momentum and position precisely, we can only ever know scattering cross-section probabilities, which are time symmetric, so that if we allow time to run backwards the cup just continues to crumble.

    I used to think I didn’t know enough mathematics, or physics, or wasn’t as bright as the best minds, because I didn’t understand how time evolution and classical reality emerged from the limit of quantum mechanics. But as the recent faffing about in the theoretical community has demonstrated, I can quite confidently say that no one understands where classical reality and time evolution come from and how they emerge from quantum mechanics.

    Take, for example, the oft-repeated Schroedinger’s cat. There are three cheats, or sleights of hand, involved in it: First, it is phenomenally difficult to get a system that big to be that isolated; just getting photons to that degree of thermal isolation requires extraordinary lengths. Second, for a coherent state to emerge can take quite a long time, so for the cat to be both dead and alive might require waiting longer than the lifespan of the cat. Finally, and most important, even when you open the box you are not observing simply dead or alive, but rather alive, dead one minute ago, two minutes ago, three…There are many other observables available that can allow for an autopsy of the cat.

    All this points to important gaps in our understanding of quantum theory in the limit of both large numbers of observables and large eigenvalues.

  • Craig

    I have enjoyed reading your book. Thanks for writing it.

    I wonder if there is any tension between the theories of relativity and the theories of entropy. Some ways of measuring entropy involve counting states, and I wouldn’t think counting is affected by relativity. But classical definitions of entropy involve energy and temperature, and both of these involve kinetic energy. And since kinetic energy involves mass and velocity, I would think such measurements would depend on the reference frames of the measurers. People traveling at different speeds might measure different entropies for the same situation. Is this a problem?

    I’m also curious about why you don’t mention or explain the units (joules per degree Kelvin, according to my college chemistry textbook) that entropy is measured in. All the mentions of entropy in your book were unitless, if I’m remembering correctly.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Entropy has the same units as Boltzmann’s constant — energy per Kelvin, as you say. And I do talk about it a little bit. But it’s easy (and very common) to simply use units where Boltzmann’s constant equals unity, and the units go away. It’s just a conversion between energy and temperature.
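    Concretely (a two-line sketch; the microstate count is invented just to have a number to plug in):

```python
import math

k_B = 1.380649e-23          # Boltzmann's constant in J/K (its exact SI value)
W = 10**25                  # some made-up number of microstates

print(math.log(W))          # entropy with k_B = 1: dimensionless, about 57.6
print(k_B * math.log(W))    # the same entropy in SI units: about 7.9e-22 J/K
```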

  • http://canonicalscience.org Juan R.

    Some remarks are needed here.

    (i)
    S = k ln W is not “the definition” of entropy. The proper definition is

    S = -k Tr ρ ln ρ

    For an isolated system at equilibrium, ρ = ρ_eq is given by ρ_eq = 1/W, and then the above definition reduces to the equilibrium form S_eq = k ln W (a quick numerical check of this reduction appears at the end of this comment).

    The universe as a whole is not in a state of equilibrium, and the expression S_eq = k ln W does not apply to it.

    (ii)
    One should draw a sharp distinction between thermodynamic entropy S and informational entropy I. Thermodynamic entropy is a physical quantity that has little to do with subjective informational entropies.

    The irreversibility that we observe in Nature around us is independent of the level of coarse graining. Paper ages regardless of whether we observe it or not, and it ages exactly the same whether we describe the process macro-, meso-, or even nanoscopically. In fact, this is a known paradox of the informational approaches to entropy as a measure of ignorance.

    (iii)
    The principle of indifference is a postulate of equilibrium statistical mechanics, because this result cannot be obtained from mechanics alone; as a consequence, within equilibrium statistical mechanics you can only postulate it.

    There is no such principle in non-equilibrium statistical mechanics (NESM). And in fact, we cannot assign equal probability to every microstate within a given macrostate for nonequilibrium states.

    Within the framework of NESM, the principle of indifference ρ_i = ρ_j for all i and j is a theorem which can be proved for the equilibrium case. That is, NESM provides the foundation for the principle of indifference postulated in the equilibrium theory.

    (iv)
    The “Past Hypothesis”, which states that the universe began in a state of very low entropy, plays absolutely no role in any rigorous explanation of the second law of thermodynamics.

    E.g., when deriving Boltzmann’s original H-theorem or any other modern generalized H-theorem, we make absolutely no hypothesis about the initial state being a very low entropy state. In fact all the H-theorems apply to initial states with very high entropy as well. It is the irreversibility contained in the H-theorem which prevents a system starting with very high entropy from evolving to a final state with low entropy.
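    A quick numerical check of the reduction mentioned in point (i), assuming NumPy (with k = 1 and a small, made-up number of microstates):

```python
import numpy as np

W = 8
rho_eq = np.eye(W) / W              # equilibrium density matrix: all W microstates equally weighted
p = np.linalg.eigvalsh(rho_eq)      # its eigenvalues, all equal to 1/W
S = -np.sum(p * np.log(p))          # von Neumann entropy S = -Tr(rho ln rho), with k = 1

print(S, np.log(W))                 # both ~2.079: the general definition reduces to ln W here
```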
