Latest Declamations about the Arrow of Time

By Sean Carroll | June 11, 2007 7:56 pm

Here are the slides from the physics colloquium I gave at UC Santa Cruz last week, entitled “Why is the Past Different from the Future? The Origin of the Universe and the Arrow of Time.” (Also in pdf.)

Time Colloquium

The real reason I’m sharing this with you is because this talk provoked one of the best responses I’ve ever received, which the provokee felt moved to share with me:

Finally, the magnitude of the entropy of the universe as a function of time is a very interesting problem for cosmology, but to suggest that a law of physics depends on it is sheer nonsense. Carroll’s statement that the second law owes its existence to cosmology is one of the dummest [sic] remarks I heard in any of our physics colloquia, apart from [redacted]’s earlier remarks about consciousness in quantum mechanics. I am astounded that physicists in the audience always listen politely to such nonsense. Afterwards, I had dinner with some graduate students who readily understood my objections, but Carroll remained adamant.

My powers of persuasion are apparently not always fully efficacious.

Also, that marvelous illustration of entropy in the bottom right of the above slide? Alan Guth’s office.

Update: Originally added as a comment, but I’m moving it up here–

The point of the “objection” is extremely simple, as is the reason why it is irrelevant. Suppose we had a thermodynamic system, described by certain macroscopic variables, not quite in equilibrium. Suppose further that we chose a random microstate compatible with the macroscopic variables (as you do, for example, in a numerical simulation). Then, following the evolution of that microstate into the future, it is overwhelmingly likely that the entropy will increase. Voila, we have “derived” the Second Law.

However, it is also overwhelmingly likely that evolving that microstate into the past will lead to an increase in entropy. Which is not true of the universe in which we live. So the above exercise, while it gets the right answer for the future, is not actually “right,” if what we care about is describing the real world. Which I do. If we want to understand the distribution function on microstates that is actually true, we need to impose a low-entropy condition in the past; there is no way to get it from purely time-symmetric assumptions.
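
To make the exercise concrete, here is a minimal numerical sketch (an editorial toy model with invented parameters, not anything from the talk): draw a random microstate compatible with a low-entropy macrostate (all particles in the left half of a box), then evolve it under reversible dynamics both forward and backward in time, the latter by flipping the velocities. The coarse-grained entropy rises in both directions.

```python
# Toy model (all parameters invented): free particles in a 1-D box with
# reflecting walls; "entropy" is coarse-grained over spatial bins.
import numpy as np

rng = np.random.default_rng(0)
N, BOX, NBINS, STEPS, DT = 2000, 1.0, 20, 200, 0.002

def entropy(x):
    """Coarse-grained entropy, -sum p ln p over spatial bins."""
    counts, _ = np.histogram(x, bins=NBINS, range=(0.0, BOX))
    p = counts[counts > 0] / N
    return -np.sum(p * np.log(p))

def evolve(x, v, steps):
    """Free streaming with reflecting walls: reversible dynamics."""
    for _ in range(steps):
        x = x + v * DT
        hi, lo = x > BOX, x < 0.0
        x[hi], v[hi] = 2.0 * BOX - x[hi], -v[hi]   # bounce off right wall
        x[lo], v[lo] = -x[lo], -v[lo]              # bounce off left wall
    return entropy(x)

# Random microstate compatible with the macrostate "everything in the left
# half"; velocities drawn from a time-symmetric (Maxwellian) distribution.
x0 = rng.uniform(0.0, BOX / 2.0, N)
v0 = rng.normal(0.0, 1.0, N)

print(f"S(now)        = {entropy(x0):.3f}")
print(f"S(to future)  = {evolve(x0.copy(), v0.copy(), STEPS):.3f}")
print(f"S(to 'past')  = {evolve(x0.copy(), -v0.copy(), STEPS):.3f}")
# Both evolved entropies exceed S(now): a low-entropy past must be imposed
# by hand, since it does not follow from the time-symmetric dynamics.
```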

Boltzmann’s H-theorem, while interesting and important, is even worse. It makes an assumption that is not true (molecular chaos) to reach a conclusion that is not true (the entropy is certain, not just likely, to increase toward the future — and also to the past).

The nice thing about stat mech is that almost any distribution function will work to derive the Second Law, as long as you don’t impose any constraints on the future state. That’s why textbook stat mech does a perfectly good job without talking about the Big Bang. But if you want to describe why the Second Law actually works in the real world in which we actually live, cosmology inevitably comes into play.

CATEGORIZED UNDER: Science, Time
  • http://blogs.discovermagazine.com/cosmicvariance/mark/ Mark

    Ah, I know that office well. What is remarkable is that, although one would think, every time one enters the office, that it must be in the state of maximum entropy, the next time one is in there, one finds that the second law has indeed led to some evolution.

  • http://www.sunclipse.org Blake Stacey

    How I wish I could get that kind of audience response! (-:

    I attended a talk a few weeks ago by Jack Cowan, who was presenting to the MIT neuroscience crowd some research he and his students had done on a “toy model” of the human cortex. (Announcement and abstract of talk are available here.) Some of his calculational techniques derived from quantum field theory, including something about “Reggeonic fields” I didn’t understand — apparently those have something to do with the states you get from quantizing the bosonic string, which have angular momentum proportional to the square of the energy.

    At the beginning of his talk, Prof. Cowan explicitly distanced himself from Penrose, saying that the brain is just too warm for quantum effects to apply. Philosophically, I suppose, his work is analogous to inventing a new and better technique of long division while working on quantum-physics problems, and then applying that long-division technique to other situations.

    [To the Esteemed Blog Host: feel free to redact any names mentioned in this post, should discretion so mandate.]

  • Scott O

    Do tell … what WERE the objections of your provokee to your “dummest remarks”?

  • http://tyrannogenius.blogspot.com Neil B.

    Of course, if you think that the quantum wave function is somehow “real”, then time is unidirectional: the wave function from an emitter is an expanding spherical shell (or some such; the variations are not relevant). Then, when it “collapses”, the bubble pops in effect and the particle is re-localized. Running that whole picture backwards does not look the same, no matter how reversible the emission and reception events themselves are. Well, any implications to that? Even if you don’t accept much literalism to the WF itself, it is supposed to represent what could really happen there…

    BTW, about thermodynamics: Why the heck should unrelated (?) laws like those of optics be forced by nature to serve the interest of preserving a law (Thermo 2) based on jiggling atoms? I mean, for example, that you can’t (?) design a mirror or lens system that would focus an image to such a short f-ratio that the surface brightness would exceed that of the source, meaning you could heat the target to a higher temperature than the source. I had fun arguing about this on Usenet, but no one really answered that question – it was just more of the same unimaginative circular argument that what’s true is true and that’s that.

    (PS – With all the strident arguments about whether God is “real” and such lately – of which I am proud to have disseminated many of stupefying grandeur – I hate to break it to you folks just how tricky and fuzzy a “predicate” or whatever that adjective (?!) is anyway… Please look up “modal realism” in Wikipedia and read it while having some stiff drinks or etc.)

    tyrannogenius

  • http://kea-monad.blogspot.com Kea

    Good to see you doing something interesting.

  • Pingback: Quest » Blog Archive » Latest Declamations about the Arrow of Time

  • http://theeternaluniverse.blogspot.com Joseph Smidt

    That’s astonishing that someone actually made such a blunt remark. Were you completely taken aback?

    By the way, thanks for the link to your slides. *I* find these ideas fascinating. I hope to do some work myself on these crazy ideas that cosmologists claim.

  • http://www.densitymatrix.com Carl Brannen

    “Afterwards, I had dinner with some graduate students who readily understood my objections, but Carroll remained adamant.”

    LOL! Sane graduate students know enough about political science to avoid telling the king that he is nekkid. That said, I find Sean’s argument weak. I wonder what the quoted objector wrote before “Finally …”

  • http://CapitalistImperialistPig.blogspot.com CapitalistImperialistPig

    I can think of a couple of reasons that Sean’s critic might not be impressed: for example, the fact that Sean’s origins for low-entropy universes seem rather speculative, and perhaps also the fact that Sean isn’t really deriving the second law from cosmology – he’s just trying to explain why entropy might have been lower in the past.

    What I wonder is whether the fact that the fine-grained entropy doesn’t change is relevant or interesting.

  • http://groupaction.blogspot.com Nick Ernst

    Sean, thank you for your talk! You gave structure to an important question that’s been on everyone’s back shelf. As for that particular professor, well, he does that to nearly everyone – I think everyone else enjoyed the talk.

  • http://www.thechocolatefish.blogspot.com Yvette

    I’ll second Scott O’s comment: what were the objections? Just curious.

    Personally though, I think the best explanation for entropy was one I got from my chemistry professor freshman year, which went something like this: “Think of how whenever you throw a party you make sure everything is nice and neat, but when your friends come over the place gets trashed. It’s not really anyone’s fault, just someone drops a few chips, someone else steps on them, someone else spills a drink, and that’s the way of things.

    “Now imagine that instead for a party you started with your apartment completely trashed, and everyone instead decided to come party by cleaning it up, leaving it nicer than when they came. This just doesn’t happen, it’s against the laws of nature!”

    Ok, not very scientific, but I promise you there is no better way to explain it to college students! ;)

  • Jeff L. Jones

    “Afterwards, I had dinner with some graduate students who readily understood my objections, but Carroll remained adamant.”

    lol, I wonder which graduate student(s) he could be referring to. As a graduate student who was at that table, I can speak for myself and at least 3 others who were there when I say that our reaction to said provoker’s “objections” was quite the opposite. An obvious misconception on his part, which Sean very patiently took the time to explain. Incidentally, we often refer to said provoker as the “Santa Cruz heckler”.

    If there *were* any graduate students there who agreed with said provoker, they certainly weren’t cosmology or high energy theory students. I find it telling that he doesn’t mention how many professors there agreed with his objections (zero, by my recollection). :-)

  • Jeff L. Jones

    er… provokee, that is. I suppose my own perception of who was doing most of the provoking is backwards from the convention established in this post.

  • lackey

    This was one of the more enjoyable colloquium talks I’ve heard recently.

    I stood around for quite a while after Sean’s talk, listening to his and his Critic’s back-and-forth until Sean finally had to leave for dinner, and then I stood around some more. My impression is that Critic’s overall objections are closest to CapitalistImperialistPig‘s suggestion that

    Sean isn’t really deriving the second law from cosmology – he’s just trying to explain why entropy might have been lower in the past.

    One particular objection (and I hope Sean will redact this if he’d rather it not be posted): Critic argued repeatedly that cosmology is irrelevant because in cosmological N-body simulations, if one runs the simulation under conditions of time reversal — that is, change t for -t — one still obtains the same results, always. Changing t for -t does not have an effect on the evolution seen in the simulation. (This of course assumes that the major physical prescriptions used in simulations are time reversal invariant.)

    [This sounds superficially true, except that

    1. Our numerical N-body simulations are not equivalent to universes-in-boxes, although they try. They necessarily suffer from information loss relative to the best cosmological “simulation” we have, which is the real universe. They don’t have perfect resolution; and instead of doing (and because we can’t do) discrete particle-by-particle assessment of Everything, some gross analytical recipes meant to reproduce observed phenomena, with their own unconsidered assumptions and consequences, may be used. All of this is a result of computational limitations, human limitations on current understanding (including but not limited to lack of a theory of quantum gravity), and practical limitations (viz., you can’t run a cosmological simulation in less than a Hubble time unless you are willing to sacrifice some information).

    2. Even supposing you were able to set up a very large quantity of test universes that did not suffer from some of these limitations, ignoring for a moment the definition that the future is the direction in which entropy increases, and supposing you started your simulations under conditions of a uniform distribution over all microstates compatible with the initial macrostate, naively you would in fact expect some of these universes, startlingly, to experience a decrease in entropy as t increases.]

    By way of analogy, Critic argued that if you happened upon a glass of cold water in which ice had melted, and attempted to simulate it under time reversal, you would never derive the prior condition of ice cubes in the glass. [“Rarely” instead of “never” is perhaps a better way to put it, if you have uniform probability over compatible present microstates.] Instead you would derive a glass of progressively warmer water up to equilibrium with the surrounding air. A Santa Cruz cosmologist (who I hope will correct my record, if he’s reading) pointed out that this derivation would nevertheless be wrong, because ice cubes had indeed been in the glass. You must insist somehow in your simulations that the starting microstates are compatible with the current water in the glass and with some entropy that was lower in the past.

    Taking this insistence to an extreme conclusion requires that the universe in the far past also had significantly lower entropy, which I assume is the crux of Sean’s argument that “the growth of entropy is a fundamentally cosmological fact.” (Whether or not it requires the speculative ideas that Sean discussed is a separate argument, obviously.)
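
    [Editorial aside: the “change t for -t” claim is easy to check in miniature. Here is a hedged sketch (my own, with invented parameters) of a softened leapfrog N-body integrator: flip the velocities and integrate again, and the system retraces its history up to floating-point roundoff. That microscopic reversibility is exactly why the simulation by itself carries no information about which way entropy grows; that must come from a condition on the state, as argued above.]

```python
# Minimal softened N-body leapfrog (all values invented for this sketch).
import numpy as np

G, SOFT, DT, STEPS = 1.0, 0.1, 1e-3, 2000
rng = np.random.default_rng(1)
pos = rng.normal(0.0, 1.0, (8, 3))   # 8 unit-mass bodies
vel = rng.normal(0.0, 0.1, (8, 3))

def accel(pos):
    d = pos[None, :, :] - pos[:, None, :]     # d[i, j] = pos[j] - pos[i]
    r2 = (d ** 2).sum(-1) + SOFT ** 2         # softened squared distances
    np.fill_diagonal(r2, np.inf)              # no self-force
    return G * (d / r2[:, :, None] ** 1.5).sum(axis=1)

def leapfrog(pos, vel, steps):
    a = accel(pos)
    for _ in range(steps):
        vel = vel + 0.5 * DT * a              # kick
        pos = pos + DT * vel                  # drift
        a = accel(pos)
        vel = vel + 0.5 * DT * a              # kick
    return pos, vel

p1, v1 = leapfrog(pos.copy(), vel.copy(), STEPS)  # run "forward"
p2, _ = leapfrog(p1, -v1, STEPS)                  # flip velocities: t -> -t

print("max deviation from initial positions:", np.abs(p2 - pos).max())
# Roundoff-level deviation: the dynamics is reversible, and by itself
# says nothing about an arrow of time.
```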

    I would have liked to have been at that dinner, incidentally, if it was anything like the post-colloquium kerfuffle.

  • http://vacua.blgospot.com Jim Harrison

    I have never figured out how to convincingly explain to anybody why, a priori, you’d expect entropy to increase in both temporal directions, and why our usual understanding of thermo depends on the cosmological fact that entropy happened to be a lot lower in the past. Your slide show did a very nice job of explaining things. I’ll try it out on some of the nonbelievers.

  • Marty Tysanner

    I guess I was one of the grad students who “Provokee” (who from now on I’ll just call P) referred to as having understood his objections, given that we talked about them for a little while during the dinner. (Jeff, you were at the wrong end of the table!) The crux of his objection as I understood it was that the Second Law makes no reference to cosmology, was deduced outside cosmology, and is true independent of cosmology. This is a fairly restricted objection. P was not objecting to Sean’s interest in understanding the arrow of time, and wasn’t expressing an opinion about that; he saw the arrow of time as a related but nonetheless separate issue which, in his view, should not be conflated with the Second Law itself. P’s interpretation of some of what Sean said was that Sean was claiming that cosmology is necessary to understand the Second Law (as opposed to the arrow of time), and he felt that if such a claim were true then it would have significant ramifications for the foundations of statistical mechanics and thermodynamics.

    I felt I understood P’s point of view and agreed with it as far as it went, but only in the very restricted sense that I think he meant. In a sense, it seemed that P and Sean were not fundamentally disagreeing so much as talking about subtly different issues. But then, I could be wrong. (In defense of P, he is a bright physicist who has made significant contributions that probably most people would be pleased to have made. And he isn’t shy about stating his opinion as others have noted…)

    Anyway, Sean, it was a very interesting talk. Thanks!

  • Pingback: Blogosphere: This Week in Astronomy at Orbiting Frog

  • Alex Nichols

    I liked the definition of Time that Dr Who gave in the latest episode.

    “People think of time as a linear progression of cause and effect, but in fact it’s knotted, wibbly-wobbly and well, …..timey-wimey”

    I also liked the concept of the “quantum-locked” Weeping Angels, that had “been around since the dawn of the universe”.
    They only move when not being observed, but turn into statues when you look at them.
    “The only sociopaths to kill you nicely, by sending you into the past and feeding off the potential energy of your future.”

    Great stuff!

    “Blink”:
    http://www.bbc.co.uk/doctorwho/

  • http://quantumfieldtheory.org nigel

    Well done, that’s a fairly good presentation! The 2nd law of thermodynamics is linked to cosmology, but you miss out just a few tiny considerations. In a static universe (such as Einstein’s model in 1917 or Hoyle’s in 1950), entropy (disorder) was supposed to always increase because the temperature becomes more and more uniform.

    There are several issues with this idea. Firstly, the lab experiments on chemical reactions which showed that entropy rises (and the theoretical calculations backing them up) applied to instances where the gravitational attraction between the reacting molecules was small, and where the molecules weren’t receding from one another at immense speeds.

    So the 2nd law is hardly a good model for the macroscopic universe! Three flaws:

    1) Entropy increases are limited due to redshift in an expanding universe: thermal equilibrium (heat death through maximum entropy) between receding bits of well-separated matter would require the uniform exchange of thermal radiation, but in an expanding universe such radiation is always received in a redshifted (lower-energy) state than that in which it was emitted. Hence, all matter emits into outer space more energy than it receives back. This redshift effect prevents thermal equilibrium from being attained while any energy remains, so outer space remains an effective heat sink. (This redshift of incoming radiation is also the solution to Olbers’ paradox, i.e. why the night sky looks dark and why we aren’t scorched to a cinder by the 3000 K infrared cosmic background radiation flash still reaching us from 13,700 million light years away.)

    2) Entropy (as temperature disorder) actually falls with time in the universe due to gravitation. When the CBR was emitted at 400,000 years after the BB, the temperature was uniform to within one part in 10,000 or whatever. Now, the temperature is grossly non-uniform, hardly an advert for rising entropy. Space is at 2.7 K and the middle of the sun is at 15,000,000 K. So goodbye 2nd ‘law’. The reason is that the ‘theory’ (that temperatures should become more uniform by diffusion of heat from hot to cool areas) behind the 2nd law of thermodynamics neglects gravitation, which is trivial for molecules in a test-tube in the lab, but is big on cosmic scales.

    The universe is 74% hydrogen (by mass), so gravity causes stars to form by inducing nuclear fusion when it pushes this matter together into compressed lumps. The fusion creates the non-uniformity of temperatures because it’s exothermic. This debunks Rudolf Clausius’s definition of the 2nd law: ‘The entropy of an isolated system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium.’ The universe is an ‘isolated system’ (there’s no evidence against it being an isolated system), and the universe is ‘not in equilibrium’ because of the redshift phenomena [see point 1) above]. Hence, however well the 2nd law works in chemistry, it fails spectacularly in cosmology unless redefined.

    3) Eddington pontificated in The Nature of the Physical World (1927): ‘The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.’

    This kind of scientifically-vacuous but authoritative pontification (which was made before Hubble discovered the redshift relationship) should make any genuine scientist deeply skeptical of the ‘law’. Arthur C. Clarke points out that, historically: ‘When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.’

  • Andy McLennan

    Please tell me why this doesn’t work…

    As the presentation makes very clear, all known proofs of the second law of thermodynamics are model-specific, and consequently “local” in at least some sense. Could the problem be that this “law” isn’t actually globally true?

    E.g., suppose the universe is cyclic. As I understand the history of our little piece of the megaverse, one or a few hundred thousand years after the big bang things were in equilibrium, a homogeneous cosmic soup. In a local sense, entropy had already been maximized, and nothin was happenin baby. But the universe expands a little bit, becomes transparent to the dominant wavelengths of the day, and matter starts to CLUMP, opening up vast new reaches of entropy increases that were hitherto unattainable, resulting in the formation of stars, galaxies, planets, life, GWB, you name it. Before the discovery of dark energy it was easy to imagine the whole thing collapsing on itself, rebigbanging in some (almost certainly) totally disorganized state, and repeating the whole process (hopefully without GWB). This picture seems devoid of mystery except to the extent that physicists cannot reconcile themselves to the SECOND LAW not being a universal immutable global principle.

    But if I shuffle a deck of cards repeatedly, is there really some meaningful sense in which each shuffle is “more random” than its predecessor?

  • http://americansector.wordpress.com Josh

    In defense of the audience at my alma mater, questioning authority is a grand tradition at Santa Cruz. This naturally extends to a high degree of skepticism regarding any and all guest lecturers. I’m actually surprised this didn’t lead to a sit-in of some sort at the Chancellor’s office. Or maybe some sort of shanty town built in protest on Science Hill. You’re lucky to get out of there alive.

  • Matt

    Sean,

    Great slides. A question though: You say on p. 22 that you need an infinite-dimensional Hilbert space, but then later you say that you need empty space with a positive cosmological constant so that the de Sitter thermality can drive the production of baby universes. But what of the notion that any de Sitter space has a finite entropy and therefore corresponds to a finite-dimensional Hilbert space?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Josh, it was a near thing. But I was aware of the notorious Banana Slug reputation, and when I felt them ready to shackle me near the end of dinner, I knocked over a bottle of wine and sneaked off in the confusion. Entropy can be your friend!

    (Chatter about the objection promoted to an update to the post itself.)

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Matt, that’s of course a good question. But note that “finite entropy” doesn’t automatically imply “finite-dimensional Hilbert space”; that would only follow if you knew you were in equilibrium, which I’m suggesting that de Sitter is not.

  • Matt

    But shouldn’t the question about the dimensionality of the Hilbert space corresponding to de Sitter somehow be an independent statement about de Sitter space in general? That is, if we assume instead a de Sitter space that IS in equilibrium—as would seemingly be the case if we DID assume something that looked approximately like pure de Sitter—we get a finite entropy, and doesn’t that imply in general that the Hilbert space of de Sitter is finite-dimensional?

  • Chris W.

    Speaking of model-specific derivations of the Second Law, and odd conspiracies of apparently unrelated physical laws to prevent violations of it, I’m surprised no one alluded to the so-called laws of black hole mechanics and their relationship to the laws of thermodynamics. The peculiarly intimate relationship between general relativity and thermodynamics is probably the strongest indication that the laws of thermodynamics are “wired in” to the laws of physics at a very deep level, in a way that transcends both classical and quantum statistical mechanics as presently understood.

    (If this sounds like semi-mystical bullshit I’m sorry…)

  • http://guidetoreality.blogspot.com Steve Esser

    Thanks for posting your slide presentations. They’re great. Best regards,
    – Steve Esser

  • Damien

    The best thing about the picture of Alan’s office is that there actually IS a Mexican hat in the clutter!

  • Chris W.

    PS: My comment 26 is related to the second paragraph in comment 4 (from Neil B).

  • http://scipp.ucsc.edu/~aguirre Anthony A.

    Sean,

    After lengthy discussion, the Critic apparently agrees that your basic argument is correct, but only in classical mechanics.

    And what was that drink you claim you spilled? I somehow remember it was something with peaches and cream…

    And how hard can it be to escape a Banana Slug anyway?

    Good to see you.

  • http://www.sunclipse.org Blake Stacey, OM

    Boltzmann’s H-theorem, while interesting and important, is even worse. It makes an assumption that is not true (molecular chaos) to reach a conclusion that is not true (the entropy is certain, not just likely, to increase toward the future — and also to the past).

    Memories of kinetic theory come flooding back. . . .

    From one perspective, going from the assumption of molecular chaos to a temporally increasing entropy sounds like an exercise in triviality: you throw away information, you get entropy. As John Baez said back in 1996 (week 80 of This Week’s Finds):

    Why is the future different from the past? This has been vexing people for a long time, and the stakes went up considerably when Boltzmann proved his “H-theorem”, which seems at first to show that the entropy of a gas always increases, despite the time-reversibility of the laws of classical mechanics. However, to prove the H-theorem he needed an assumption, the “assumption of molecular chaos”. It says roughly that the positions and velocities of the molecules in a gas are uncorrelated before they collide. This seems so plausible that one can easily overlook that it has a time-asymmetry built into it — visible in the word “before”. In fact, we aren’t getting something for nothing in the H-theorem; we are making a time-asymmetric assumption in order to conclude that entropy increases with time!

  • http://CapitalistImperialistPig.blogspot.com CapitalistImperialistPig

    Nice description Sean, but one thing still bothers me. Presumably the current macrostate includes all our books, artifacts, memory traces, and the correlations encoded in our theories and observations. If so, and it’s still true that the vast majority of microstates compatible with that macrostate would backward evolve (devolve?) into states of higher entropy, what’s our justification for assuming that we aren’t just a fluctuation?

    I don’t see how you get out of that trap without saying something like “that way madness lies.” If the past wasn’t low entropy, we can’t really do physics – oops – sounds anthropic.

  • http://JACOBRUSSELLSBARKINGDOG.BLOGSPOT.COM Jacob Russell

    Thank you for posting the slides.

    My mind kept drifting to literary theory… how the interpretations of any text expand… analogical entropy.

    Thinking of religious texts–they don’t expand to a vacuum state, to infinite interpretations, and therefore mean nothing… but continuously give birth to “baby universes”… in essence, interpretations which themselves constitute new canons of meaning for future generations to interpret.

    Wonderful stuff… thank you for this blog and your valuable time away from your real work.

  • Aaron Bergman

    what’s our justification for assuming that we aren’t just a fluctuation?

    Principles of mediocrity yet again. There should be vastly more brains without all that crap, so the existence of all this extraneous stuff means that we’re special somehow.

  • http://name99.org/blog99 Maynard Handley

    Sean, you make the same damn mistake that almost everyone discussing this subject makes, which renders your discussion incomprehensible.

    The H-Theorem is not about the time evolution of a particular system. It is about the time evolution of a probability distribution function (PDF).
    The time evolution of a PDF is not (directly) subject to Newton’s laws; rather it is a new type of physical entity whose behavior is (like all physics) something we can try to model mathematically, and see if the results work.

    As such, all the standard complaints about the “problems” of the H-theorem, e.g. recurrence of states or Liouville’s theorem, are simply irrelevant. Those complaints refer to something different, just like your complaint
    “to reach a conclusion that is not true (the entropy is certain, not just likely, to increase toward the future — and also to the past).”
    You are complaining here about the behavior of a particular process drawn from a universe of random processes. The PDF describing that universe does indeed have monotonic behavior; the fact that you might occasionally get 100 heads in a row doesn’t change the fact that the probability of a coin toss for heads is one half.

    The fact that both Boltzmann and Ehrenfest were confused on this issue and thus made no sense on the subject doesn’t change the fact that it is freaking shameful that, 100 years later, most physicists remain just as confused. Talking about PDFs is both more rigorous and a whole lot more comprehensible than this vague nonsense of microstates vs macrostates.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    CIP, Aaron gives the most important answer. If fluctuations appeared with a truly thermal distribution, our whole universe would fluctuate into existence much less frequently than other anthropically-allowed (or whatever-allowed) configurations.

    David Albert has a potentially stronger objection. If we are a fluctuation, it’s overwhelmingly likely that we came from a higher-entropy (near) past. All of our records etc. are just statistical flukes, not reliable traces of past historical events. But that includes all of the data we’ve ever used to invent the laws of physics, including the laws of statistical mechanics we’ve used to derive this conclusion. So the idea that we’re just a fluctuation, while not rule-outable, is cognitively unstable — there can never be any self-consistent justification for it.

  • http://www.allysonbeatrice.com/blog Allyson

    HOLY BUTT. CosmicVariance MUST participate in this.

  • Paul Valletta

    Another thread that compels one to inquire: What comes first, particles to create entropy, or entropy that governs particles?

    The really interesting slides are 3, 4, and 5.

    In a low entropy early Universe, is the particle number in phase space born from black hole production, i.e. the maximum number of particles contained within a minimum volume of space? When a black hole is “full”, then and ONLY then do particles rebound away as particle creation?

    Interestingly, this early state scenario has a counterpart at the other end of the Universe’s entropic cycle, i.e. a minimum particle number contained within a maximum volume of space?

    In some inflation models, Andrei Linde’s for instance, there is a prediction/need for a “slow-roll” out of the inflation period; this needs a small particle number (fewer particles means fewer impacts, thus less entropy), so the big bang was followed by a period of little thuds!

    The end-of-Universe phase really compels one to ask of E=mc2: is the ENERGY a particle energy or a vacuum ENERGY?

    Sean clearly makes a good case for the varying laws of physics, varying of course at either scale, the end and the beginning. A general case can be made that the physical laws took a vast amount of time to reach equilibrium (the present-time constant), and will gradually move to an increased rate of change; as the expansion increases, the laws of physics would have to change at a specific increasing rate in order to maintain “order”.

    We note that as the total particle number in the current Universe reaches its pinnacle, gravity rules as far as the eyes or telescopes can see, but this won’t last forever!

    As far as the Universe goes, change is always what lies ahead; the past does not and CANNOT change, and the Arrow of (present) Time ensures the arrow points “one-way”, towards the future.

    This is not to say that at the Universe’s end there would not be moments that become intertwined, or entangled; to the future non-existing observers it would not really matter, what really matters is that the process continues.

    The only conceivable way that the laws of physics have not changed is for there to actually be no present-time change occurring; for there to be no change occurring now means no entropy, and consequently no particle collisions anywhere in the Universe.

  • Thomas Larsson

    Three questions:

    1. What test, experimental or observational, could distinguish this hypothesis (arrow of time has cosmological origin) from its opposite?

    2. Does cosmology also have anything to say about the other problems of time (time direction singled out non-covariantly, Hamiltonian is a constraint, etc.)?

    3. According to Pauli’s theorem, time is a c-number rather than an observable in QM (although I expect the opposite to be true in the UV completion of QFT). Do you make any assumption which contradicts this, e.g. that time is measured by clocks?

  • http://math.ucr.edu/home/baez/ John Baez

    So, Sean wrote:

    Boltzmann’s H-theorem, while interesting and important, is even worse. It makes an assumption that is not true (molecular chaos) to reach a conclusion that is not true (the entropy is certain, not just likely, to increase toward the future — and also to the past).

    whereas when I last thought about this theorem, I claimed otherwise (see Blake Stacey’s comment).

    So, is the assumption behind Boltzmann’s H-theorem time-symmetric or not? And what about the theorem’s conclusion: time-symmetric or not? Is the conclusion that entropy increases both in the future and the past… or just towards the future?

    I’m pretty darn sure the assumptions and conclusions are time-asymmetric.

    Boltzmann actually called his assumption the “Stosszahlansatz”, or “collision number assumption”.

    So: what’s the Stosszahlansatz?

    It goes like this.

    Suppose we have a homogeneous gas of particles and the density of them with momentum p is f(p). Consider only 2-particle interactions and let w(p1, p2; p1′, p2′) be the transition rate at which pairs of particles with momenta p1, p2 bounce off each other and become pairs with momenta p1′, p2′. To keep things simple let me assume symmetry of the basic laws under time and space reversal, which gives:

    w(p1, p2; p1′, p2′) = w(p1′, p2′; p1, p2).

    In this case the Stosszahlansatz says:

    df(p1)/dt = integral w(p1, p2; p1′, p2′) [f(p1′)f(p2′) – f(p1)f(p2)] dp2 dp1′ dp2′

    This is very sensible-looking if you think about it. Using this, Boltzmann proves the H-theorem: namely, that the derivative of the following function is less than or equal to zero:

    H(t) = integral f(p) ln f(p) dp

    This is basically minus the entropy. So, entropy increases, given the Stosszahlansatz!

    The proof is an easy calculation, and you can find it in section 3.1 of Zeh’s The Physical Basis of the Direction of Time (a good book).

    Now: since the output of the H-theorem is time-asymmetric, and all the inputs are time-symmetric except the Stosszahlansatz, we should immediately suspect that the Stosszahlansatz is time-asymmetric.

    And it is!
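
    [A small editorial illustration, not part of John’s comment: a discrete toy version of the H-theorem with invented numbers. Take three velocities v = -1, 0, +1 with the single momentum-conserving collision channel (-1, +1) <-> (0, 0), and evolve f with the Stosszahlansatz rate; H = integral f ln f (here a sum) then decreases monotonically, as claimed above.]

```python
# Toy discrete-velocity H-theorem (all numbers invented for this sketch).
import numpy as np

w, dt = 1.0, 0.01                 # symmetric transition rate, time step
f = np.array([0.7, 0.05, 0.25])   # occupations of v = -1, 0, +1 (sum to 1)

def H(f):
    nz = f[f > 0]
    return np.sum(nz * np.log(nz))   # discretized H = integral f ln f

for step in range(501):
    if step % 100 == 0:
        print(f"step {step:3d}: H = {H(f):+.4f}")
    # Stosszahlansatz: collision rate ~ product of single-particle densities
    r = w * (f[0] * f[2] - f[1] ** 2)          # net (-1,+1) -> (0,0) rate
    f = f + dt * np.array([-r, 2.0 * r, -r])   # gain-loss update
# H decreases monotonically toward the equilibrium where
# f(-1) f(+1) = f(0)^2 (detailed balance).
```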

  • Jim Graber

    “In this case the Stosszahlansatz says:

    df(p1)/dt = integral w(p1, p2; p1′, p2′) [f(p1′)f(p2′) – f(p1)f(p2)] dp2 dp1′ dp2′”

    “Now: since the output of the H-theorem is time-asymmetric, and all the inputs are time-symmetric except the Stosszahlansatz, we should immediately suspect that the Stosszahlansatz is time-asymmetric.

    And it is!”

    The above Stosszahlansatz doesn’t look obviously time-asymmetric to me. In fact, changing t to -t and interchanging primed and unprimed p variables seems to give a symmetrical result. What am I missing?

  • http://evolutionarydesign.blogspot.com/ island

    Just like I tell neodarwinians who try to use the multiverse to lose the implications of the observed universe:

    Prove that your multiverse is necessary to the ToE or a complete proven theory of quantum gravity. Otherwise (and the same goes for inflationary theory)… you’re rambling bla-bla-bla woulda’ coulda’ shoulda’ whatif maybe inferred implied assumed…. crap, that avoids the most obvious solution to the problem.

    If the universe has a past boundary (as in the conventional Big Bang), conditions there are finely tuned for reasons that remain utterly mysterious.

    This one always cracks me up, because the people who say this are the same ones who refuse to look for the most apparent resolution to the problem, because, in spite of the evidence, they still don’t believe that a strong anthropic constraint on the forces is possible. My favorite analogy being the refusal to believe that the guy standing over the dead body while holding a smoking gun could have done it; the damning evidence is superseded by willfully ignorant belief, until proven otherwise, per the scientific method… children.

    The problems, entropy, fine-tuning… (you name it!) are very simply resolved without multiverses or inflationary bandaids in this old post to the moderated research group, and go figure… you don’t need to know anything more than general relativity to understand it, although you can easily see the seeds of a valid theory of quantum gravity contained within the physics.

    But the strong anthropic constraint means that I might as well be giving physics to the brick wall across the street. I’ll bet that even hard-core loopers, like Baez, would accept the multiverse before they would admit that the strong anthropic solution already exists.

    The Second Law of Thermodynamics says “god” doesn’t throw dice…

    Our Darwinian Universe

    Continue to willfully ignore the strong anthropic constraint at your own peril…

    …conditions there are finely tuned for reasons that remain utterly mysterious

    L. O. L.

  • Aaron Sheldon

    Kind of missing the point, trying to connect the Second Law of Thermodynamics to the arrow of time? After all, the Second Law of Thermodynamics is just a restatement of the central limit theorem, with the words “compatible macrostate observable” having the precise mathematical meaning of “a random variable with well defined first and second moments”.

    The real question is why there is a well-ordering on observations: why can’t we make observations in any order we want?

    Maybe translations through space-time can’t be represented by an algebra of unitary transformations on a Hilbert space, but rather by operators with either a non-trivial kernel or a non-trivial image subspace.

  • Paul Stankus

    Dear Sean —

    Thanks for sharing with us your work on this interesting topic. I would like to dive into the discussion of the main question, but being only a Bear of Little Brain I have to ask some lower-level questions first.

    You start from the basic observation that if entropy is always increasing then it must have been lower in the past; you then infer that some early/initial state must therefore have been a very special, low-entropy state. (Sir Roger goes through this same logic in The Emperor’s New Mind, and again in his latest road show.) The question is, do you have to picture that early low-entropy state as being out of equilibrium?

    The standard story of the thermal early Universe is that (1) essentially all the entropy is in relativistic gases, and (2) these gases are in thermal equilibrium, and so are already at maximum entropy. So in what sense is the early thermal phase a “special” state? I don’t see that it can be.

    The state of the Universe once it crosses to being matter-dominated _is_ a special state since it is very smooth, and once gravity is allowed to act you can pick up a lot of entropy when matter begins to clump together. Eventually we reach maximum entropy, and hence equilibrium, again when all baryons and dark matter are squozen into black holes*, which is clearly a great increase in entropy over the smoothly distributed state.

    The question then becomes, how are we to understand the great jump in “specialness” at the radiation-matter transition? The Universe goes from maximum possible entropy, i.e. not special at all, to being well below maximum entropy, i.e. very special. The entropy per co-moving volume didn’t change at this transition; it’s more like the ceiling on entropy suddenly moved greatly upward. To put it differently, it’s like walking along and suddenly having the floor fall out from under you: your position in the potential is the same, but somehow you suddenly have a lot more low-entropy potential energy at your disposal (if you can figure out, very quickly, how to take advantage of it).

    So if what you want to track is “specialness” rather than entropy per se, you have to somehow decide whether/why the radiation-matter transition — which seems pretty non-remarkable microscopically — is such a landmark event.

    Thanks for whatever insight you can offer. Regards,

    Paul Stankus

    * Even once all matter and dark matter are in black holes we still haven’t reached equilibrium, since we can always pick up more entropy by letting the black holes merge, and merge again, and so on until they can’t collide any more. So in a Universe which is expanding arbitrarily slowly there seems to be no upper limit on entropy per volume and so no ultimate equilibrium.

  • http://www.pipeline.com/~lenornst/index.html Len Ornstein

    Historically, the second law concerns the behavior of particles in a ‘gas’.
    Information-theoretic formulations now focus on probabilities. In the latter context, closed systems tend to move from less probable to more probable configurations.

    This contrasts with the classic formulation, which usually talks about moving from ‘ordered’ states towards ‘less ordered’ states.

    A phase change, on cooling, involves an ‘apparent’ increase in order, but is a transition to a more probable state. Likewise for evaporation of black holes, or sublimation of solids, crystalline or amorphous. And in a universe without dark energy, a contracting, heating crunch represents the more likely direction.

    So this semantic ‘trick’ solves the problem of the direction of time.

    HTH

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    John’s comment above about the Stosszahlansatz is basically right, but there is a little bit more behind the story. (And it is all there in Zeh’s book, if you bother to unpack it.)

    Boltzmann actually did two innocuous-sounding things that enabled him to miraculously derive a time-asymmetric conclusion from time-symmetric microscopic laws. First, he worked with distribution functions defined in the single-particle phase space. That is, if you have two particles, you can either keep track of them as one point in a two-particle phase space (separate dimensions for the position and momentum of each particle) or as two points in the single-particle phase space. Both are completely equivalent. But if you go to a distribution function in that single-particle phase space, f(q,p), then you have thrown away information — implicitly, you’ve coarse-grained. In particular, you can’t keep track of the correlations between different particle momenta. There’s no way, for example, to encode “the momentum of particle two is opposite to that of particle one” in such a distribution function. (You can keep track of it with a distribution function on the full multi-particle phase space, which is what Gibbs ultimately used. But there entropy is strictly conserved, unless you coarse-grain by hand.)

    Because you’ve thrown away information, there is no autonomous dynamics for the distribution function. That is, given f(p,q), you can’t just derive an equation for its time derivative from the Hamiltonian equations of motion. You need to make another assumption, which for Boltzmann was the Stosszahlansatz referred to above. You can justify it by saying that “at the initial moment, there truly are no momentum correlations” (plus some truly innocent technical assumptions, like neglecting collisions with more than two particles). But of course the real Hamiltonian dynamics then instantly creates momentum correlations. So that innocent-sounding assumption is equivalent to “there are no momentum correlations before the collisions (even though there will be afterwards).” Which begins to sound a bit explicitly time-asymmetric.

    The way I presented the story in my talk was to strictly impose molecular chaos (no momentum correlations) at one moment in time. That’s really breaking time-translation invariance, not time-reversal. From that you could straightforwardly derive that entropy should increase to the past and the future, given the real Hamiltonian dynamics. What the real Boltzmann equation does is effectively to assume molecular chaos, chug forward one timestep, and then re-assume molecular chaos. It’s equivalent to a dynamical coarse-graining, because the distribution function on the single-particle phase space can’t carry along all the fine-grained information.
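
    A small sketch may help make the discarded information concrete (an editorial illustration with made-up numbers, not Sean’s): build a two-particle distribution over discretized momenta concentrated near p2 = -p1, then compare its entropy with that of the product of its one-particle marginals. The difference is the mutual information, precisely the correlation data that a single-particle f(q, p) cannot carry.

```python
# Joint two-particle momentum distribution vs. product of marginals
# (an invented toy distribution, for illustration only).
import numpy as np

rng = np.random.default_rng(2)
K = 8                                  # momentum bins per particle

# Joint distribution concentrated near p2 = -p1 (back-to-back momenta),
# plus a little uncorrelated background.
joint = np.zeros((K, K))
for i in range(K):
    joint[i, K - 1 - i] = 1.0
joint += 0.05 * rng.random((K, K))
joint /= joint.sum()

def S(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

f1 = joint.sum(axis=1)                 # one-particle marginal, particle 1
f2 = joint.sum(axis=0)                 # one-particle marginal, particle 2
product = np.outer(f1, f2)             # the "molecular chaos" factorization

print(f"S(joint)   = {S(joint.ravel()):.3f}")
print(f"S(f1 x f2) = {S(product.ravel()):.3f}")
# S(f1 x f2) - S(joint) is the mutual information: exactly the momentum
# correlations that the single-particle description throws away.
```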

  • http://www.sunclipse.org Blake Stacey, OM

    The way I was taught (see lectures 8 and 9 in Mehran Kardar’s notes) the irreversibility enters when you approximate the two-particle distribution function f(q1, q2) with the product of the one-particle distribution functions, f(q1)f(q2). Making this approximation to terminate the BBGKY hierarchy yields the Boltzmann equation, which as Kardar proves in lecture 9 is not time-reversal symmetric.

    I came across a 1974 paper by Blatt and Opie in which the Boltzmann equation is derived from Liouville’s Theorem by mandating that the approximations involved preserve all one-particle expectation values, while neglecting two-particle and higher. It’s the same idea, I believe: throwing away information (about correlations) means that your distribution function will increase in entropy.

  • http://CapitalistImperialistPig.blogspot.com CapitalistImperialistPig

    John Baez’s comment (#40) and the responses by Sean and Blake (#s 46 & 47) seem to show that the entropy increase is inextricably linked to information loss. Could it be that the entropy increase is simply caused by information leaking away, perhaps to points beyond a horizon, or lost to interactions with objects outside the domain in question?

    To get non-conserved entropy, we must coarse grain, but coarse graining is an intellectual operation, not a physical one. Is information leakage the physical counterpart?

  • http://math.ucr.edu/home/baez/ John Baez

    Thanks, Sean! You explained something that I should have remembered from ancient conversations on sci.physics.research: the “assumption of molecular chaos” and the “Stosszahlansatz” are quite different! The former is time-reversal symmetric and (in most situations) unrealistic, since it predicts that entropy increases both in the future and the past, starting from a given moment of low entropy. The latter breaks time-reversal symmetry and is (seemingly often) more realistic, since it predicts that entropy increases only in the future.

    Got it.

    I’m really fascinated by the arrow of time and its relation to gravity and cosmology. In the future I plan to study this more. In the past, too!

  • http://math.ucr.edu/home/baez/ John Baez

    Paul Stankus writes:

    The standard story of the thermal early Universe is that (1) essentially all the entropy is in relativistic gases, and (2) these gases are in thermal equilibrium, and so are already at maximum entropy. So in what sense is the early thermal phase a “special” state? I don’t see that it can be.

    Roger Penrose discusses this issue at length in The Emperor’s New Mind. I don’t agree with everything in this book – indeed, some parts are famously controversial, and probably wrong. But I think his treatment of this issue is quite good.

    Briefly: while most of the entropy of the early universe is in radiation and hot gases, and this stuff is close to thermal equilibrium if one neglects gravity, the early universe is very far from equilibrium if one takes gravity into account! Gravitating systems increase their entropy by clumping up and getting hotter! As the universe ages, this is what happens.

    So, the early state of the universe is very special – because it’s smeared out and homogeneous, not lumpy the way gravitating systems become as their entropy starts increasing.

    If this seems paradoxical, well, it’s because gravity is funny!

    Mind you, I don’t think we understand this stuff as well as we should. The whole concept of “thermal equilibrium” becomes rather shaky when one tries to take gravity into account. This is true already at the Newtonian level, but even more so when general relativity is thrown into the mix, and the concept of energy, and thus free energy, becomes much trickier.

  • http://math.ucr.edu/home/baez/ John Baez

    Jim Graber wrote:

    The above Stosszahlansatz doesn’t look obviously time-asymmetric to me. In fact, changing t to -t and interchanging primed and unprimed p variables seems to give a symmetrical result. What am I missing?

    Hey, it’s a fun puzzle! I don’t want to spoil it.

  • http://arunsmusings.blogspot.com Arun

    I’m confused: isn’t any particular microstate at fixed (zero) entropy? The only way to get entropy to increase when evolving over time is to coarse-grain. I know the point you’re trying to explain, but I don’t agree with the explanation.

  • http://egregium.wordpress.com/ Christine

    I’m really fascinated by the arrow of time and its relation to gravity and cosmology. In the future I plan to study this more. In the past, too!

    So do I… :)

    Gravity dominates on large scales. What is the correct statistical mechanics of gravitating systems? A tricky and fascinating issue. For those interested, see section 4.2 of my paper, where a brief review is offered.

    http://arxiv.org/abs/astro-ph/0604544

    Best,
    Christine

  • Paul Stankus

    Dear John Baez —

    Thanks very much for your attentive reply (#50) to my posted question (#44) about entropy in the early thermal universe. However, I can’t see how what you’ve written can be correct, and so I must still be missing something. You write:

    “Briefly: while most of the entropy of the early universe is in radiation and hot gases, and this stuff is close to thermal equilibrium if one neglects gravity, the early universe is very far from equilibrium if one takes gravity into account! Gravitating systems increase their entropy by clumping up and getting hotter! As the universe ages, this is what happens.”

    Of course, gravity was operating during the hot early universe; so if entropy can be increased in the presence of gravity by clumping, then why didn’t the hot (ie ultrarelativistic) early universe clump spontaneously? You might say that the thermal phase was so short that not much clumping could have happened, but I don’t think that gets at the essence of the question.

    Can radiation clump? I am but a Bear of Little Brain, but my intuition certainly says no. Without the complication of an expanding Universe, suppose we just look at a photon gas in a box and let it evolve for an arbitrarily long time. Will it ever clump gravitationally? Photons can have their trajectories bent gravitationally, but they are _never_ bound by any gravitational concentration (short of a black hole), and so they can’t “collect” in an over-density the way massive particles can. There are thermal fluctuations in density in the photon gas, and conceivably these could be amplified by gravity, but only — I would guess — to a very small degree. So I would expect that the long-term/final/equilibrium/maximum-entropy state of a photon gas is always quite close to being spread uniformly. What am I missing here?

    We can ask the question the other way: if we started with a clumpy photon gas would it, in the presence of gravity, spontaneously clump further or un-clump? If we just imagine the border between regions of different density, then since the photons are all unbound I would expect more to cross from high to low than vice versa, and so I would expect the spontaneous trend would be toward unclumping. And since anything that happens spontaneously should correspond to an increase in entropy, it seems to me that the photon gas should be at maximal entropy with minimal clumping. If I’m off the beam here, can you tell me how?

    Thanks; regards,

    Paul Stankus

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Paul– You’re largely on the right track. The clumping/non-clumping business does not actually have a cut-and-dried relationship to entropy. Some things (like matter) will clump in the right circumstances; others (like radiation) will not. It’s true that entropy increases as inhomogeneities grow in a matter-dominated universe, but the full story is richer than that. (And we don’t understand it as yet.)

    The secret is that, in addition to clumpy vs. uniform, general relativity also allows for the overall density of particles to change as space expands or contracts. (Something that can’t happen with a non-gravitational box of gas.) The very early universe is extremely low entropy when gravity is taken into account. You know that even if you don’t have a rigorous definition of the entropy, because you know that the early universe quickly evolves into a very different-looking state; truly high-entropy states are static. The entropy goes up as the universe expands, and the universe becomes clumpier. But the entropy continues to go up as the universe continues to expand, and eventually the tendency towards clumpiness reverses. Black holes, for example, eventually evaporate into the increasingly empty regions around them. The truly high-entropy configuration is just empty space, which is pretty stable.

  • Alex Nichols

    57: ‘The truly high-entropy configuration is just empty space, which is pretty stable.’

    Which sort of begs the question: what is the mechanism by which quantum fluctuations in empty space can translate to the classical level and lead to a new space-time topology, pinched off from the proposed future of our universe?

  • http://evolutionarydesign.blogspot.com/ island

    But if the energy density becomes low enough, and the universe falls into a “non-gravitational” state of uniform energy distribution, thermal equilibrium, and maximal entropy, then uncertainty is supposed to intervene, enabling the universe to regain low entropy due to the spontaneous formation of ordered matter that will inevitably occur as quantum mechanics lowers the random element in the behavior of matter.

    As the age of the universe approaches infinity, the probability increases that a cherry 1968 Pontiac Firebird will pop spontaneously into existence. And make mine a convertible, please… ;)

    Allegedly, all non-paradoxical states will be attained as the age of the universe approaches infinity… yeah, right, as another flawed theory gets extended to reveal its inherent absurdity.

  • Paul Stankus

    Hi Sean —
    Thanks very much for your keen reply (#57) to my questions (#44, #56). I can tell you that I am very much in sympathy with your statement

    “You know that even if you don’t have a rigorous definition of the entropy, because you know that the early universe quickly evolves into a very different-looking state; truly high-entropy states are static.”

    This is a principle I try to take advantage of all the time: maximum entropy is achieved at equilibrium, and equilibrium is when things look macroscopically static. The quick implication is that any kind of spontaneous macroscopic evolution probably indicates that entropy is increasing. But this is not strictly the case: entropy is increasing only when the evolution is both _spontaneous_ and _irreversible_.

    The evolution of the ultrarelativistic thermal phase of the early universe, however, certainly _is_ reversible. You can see this just by imagining an old-fashioned closed Friedmann universe which contains only a photon gas: it expands, and then recollapses, and the two mirror each other. Entropy per co-moving volume is conserved, and so is the total entropy in the closed universe. It’s like a massive, ideal piston falling into a cylinder filled with zero-viscosity gas, or onto a perfect spring: it goes down and then comes up; thermal equilibrium is always maintained, and the motion then reverses itself.

    I don’t have to demonstrate this reversibility though, since you know we already assume it. The cornerstone assumption in describing the early thermal universe (cf. Kolb and Turner) is that entropy per co-moving volume is conserved, and this alone means that the evolution is reversible. So even though I welcome your viewpoint generally, I stand by my original claim that the evolution of the ultrarelativistic thermal phase, including the presence of gravity controlling the overall expansion, is _not_ associated with an increase in entropy even though it is a spontaneous evolution. It’s not until matter domination that the universe evolves into a “different-looking state” and entropy then starts to increase.
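
    A minimal numerical check of that conservation claim (my own sketch, assuming only the standard radiation scalings T ∝ 1/a and, for a photon gas, s ∝ T^3):

        # Sketch: entropy per comoving volume of a photon gas stays constant
        # as the scale factor changes, assuming T ~ 1/a and s ~ T^3.
        import numpy as np

        a = np.linspace(0.1, 10.0, 50)       # scale factor (arbitrary units)
        T = 1.0 / a                          # radiation temperature, T ~ 1/a
        s = (4.0 * np.pi**2 / 45.0) * T**3   # photon entropy density (natural units)
        S_comoving = s * a**3                # entropy per comoving volume

        assert np.allclose(S_comoving, S_comoving[0])  # constant: the expansion is adiabatic
        print(S_comoving[0])

    Under these assumptions the smooth expansion really is isentropic; nothing in the exercise picks out a direction of time.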

    OK, over to you; regards,

    Paul

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Sure, I agree with all that. Of course it’s only approximate; more carefully, there are matter perturbations in the growing mode but not in the shrinking mode, violating reversibility. But it’s a pretty good approximation.

  • Anthony A.

    Sean & Paul,

    The way I look at this is that the expansion opens up the phase space, so that what might have been equilibrium (in terms of the matter/radiation degrees of freedom) departs from equilibrium in the new, larger volume — even while the entropy increases. Thus we get non-equilibrium chemistry, for example.

    I think that if we were discussing *all* degrees of freedom — matter and gravitation — this would somehow correspond to ‘exciting’ new gravitational degrees of freedom that lay ‘dormant’ when the universe was smaller. Now consider the time-reverse. Certainly we can re-shrink the universe, but we can’t put the gravitational degree of freedom ‘back in the bottle’; rather, we find a very inhomogeneous matter and curvature distribution corresponding to higher entropy.

    Now all of these are just words because, as Sean noted, we don’t have any general understanding of the thermodynamics of gravity (nor, I suspect, is it possible to neatly partition the degrees of freedom between matter and gravity). But that’s the way I, at least, think about it.

  • gstuatGMU

    I keep seeing these bullhorn diagrams, like pages 6 and 7 of Sean’s presentation, and they give me a headache, so I seek clarification. Since the horns curve, we must be thinking of a dynamic situation. Here’s one:

    Consider a hunk of gas at time Tz recently subject to a disturbance, such as sudden removal of a membrane between two boxes of different gases at STP. Knowing classical physics, we compute the molecules’ spatial and momentum distributions at Tz. We run a computer simulation of the motions of N randomly selected molecules over time T, obtaining some resulting microdistribution. We can expect this to well represent the real-life macrostate at Tz + T, and we can therefore expect the entropy computed on the basis of our simulation results to fairly approximate actual entropy at Tz + T.

    Now, however, look at the computer run as a potential postdiction of the distribution at Tz – T. Given time symmetry of the laws of motion, this is a valid postdiction of a microdistribution at Tz – T that could have resulted in the Tz distribution. But, given that things are changing, it is unlikely to represent (be consistent with) the macrostate that really existed at Tz – T, and therefore it won’t give a correct entropy result.

    So why the heck would anybody take seriously such a flawed estimate of past entropy?

    Prediction POV: particles are as likely to follow simulation paths as they are to do anything else.

    Postdiction POV: particles can’t follow simulation paths from Tz because they can’t go backward in time. They are unlikely to follow the time-reversed paths from Tz – T because they are unlikely to start with the simulation result; we gave them a bad initial distribution.

    Maybe our assumptions can’t be time-symmetric because real particles (e.g. molecules) go only one direction in time.
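
    A toy version of this forward/backward exercise makes the worry concrete (my own sketch, assuming free-streaming particles in a box and a coarse-grained cell entropy, not anyone’s actual simulation):

        # Sketch: pick a random microstate compatible with a non-equilibrium
        # macrostate (all particles in the left half of a box), then evolve it
        # with time-symmetric dynamics. Coarse-grained entropy rises whether
        # we run "forward" or with all velocities reversed.
        import numpy as np

        rng = np.random.default_rng(0)
        N, ncells, steps, dt = 5000, 20, 400, 0.005

        def coarse_entropy(x):
            counts, _ = np.histogram(x, bins=ncells, range=(0.0, 1.0))
            p = counts[counts > 0] / N
            return -np.sum(p * np.log(p))

        def evolve(x, v, nsteps):
            for _ in range(nsteps):
                x = x + v * dt
                x = np.abs(x)                      # bounce off the wall at 0
                x = np.where(x > 1.0, 2.0 - x, x)  # bounce off the wall at 1
            return x

        x0 = rng.uniform(0.0, 0.5, N)  # low-entropy macrostate: left half only
        v0 = rng.normal(0.0, 1.0, N)   # randomly chosen microstate velocities

        print("initial: ", coarse_entropy(x0))
        print("forward: ", coarse_entropy(evolve(x0, v0, steps)))
        print("reversed:", coarse_entropy(evolve(x0, -v0, steps)))

    The “reversed” run is just another forward run with v replaced by −v; the dynamics never picks out a direction, which is exactly why a low-entropy condition in the past has to be put in by hand.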

  • KundryVolare

    I’m having a serious problem grasping how we can dispense with nonlocality at will, like it is a sort of chimera you may append to the map as you wish. Having said that, the rest of you nerds have gone calculatingly over my head! I do know that gravity is the residue of time though… and all time IS ONE. I’ve experienced that, so take it from the horse’s mouth: duration is an illusion!

  • http://tyrannogenius.blogspot.com Neil B.

    I’d like to hear some commentary on the role of QM in reversibility, especially the non-reversible evolution of the wave function between “emission” and “detection.”

  • LambchopofGod

    Sean’s experience is a good example of why 99% of seminars are a waste of time. First, 98% are just boring. Sean’s talk belongs to the next 1% — interesting, and well-presented, but still there was the obligatory idiot in the audience who is not afraid to get up and expose his mind-boggling ignorance for all to see. I don’t think that Sean got anything at all from giving this seminar — the feedback consisted of sheer stupidity. The idiot in the audience is clearly beyond teaching. So what’s the point?

  • Paul Stankus

    Hi Sean —
    Picking up from #60 and #61, it seems that we can agree that for a radiation-only universe the smoothed-out state appears to be macroscopically static, and all its further evolution is basically reversible. So it appears, at first glance, to be an equilibrium/maximum-entropy state.

    The puzzle, then, is why the radiation-only universe has an entropy so enormously lower than a matter-dominated universe of the same overall energy density. For radiation-only, the maximum entropy seems to be just that of the classical photon gas; for matter-dominated, the maximum entropy state is when all the matter is stuffed into black holes (plus a cool residual atmosphere). It’s as though the radiation-only universe would _like_ to cross over into being full of black holes, but there’s no “gateway” for doing so [short of some kind of mondo-high density fluctuation; but I’m not sure that this can even happen with photons, since EM fields don’t like to be compressed].

    You can see this “gateway” idea even in simple examples. Take half a solar mass of iron atoms in a box one light-year on a side; what’s its entropy? If we think ahead to the final state of its evolution, it will be a warm white dwarf in absorption/evaporation equilibrium with an atmosphere. With ten solar masses in the same box, though, the final state is very different: a large white dwarf collapses into a neutron star (a supernova, essentially; type Ic?), which accretes until it shrinks into a black hole; the black hole eventually swallows all the matter and comes to equilibrium with a cool gas of radiation and maybe the odd electron or two. These two final states are of _enormously_ different entropy, though their initial conditions are quite similar. The 0.5 M_Sun case is not “through the gate”, but the 10 M_Sun case is; so S(E,V) is highly discontinuous over E.
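
    Just how enormous the gap is can be seen in a back-of-envelope sketch (mine, using the standard Bekenstein-Hawking formula S = 4πGk_B M²/ħc and, very roughly, ~k_B of entropy per particle for ordinary matter):

        # Sketch: compare black-hole entropy with ordinary thermal entropy
        # for the same ten solar masses.
        import math

        k_B, hbar = 1.380649e-23, 1.054571817e-34  # J/K, J*s
        G, c = 6.67430e-11, 2.99792458e8           # SI units
        M = 10 * 1.989e30                          # ten solar masses, kg

        S_bh = 4.0 * math.pi * G * k_B * M**2 / (hbar * c)  # Bekenstein-Hawking
        S_gas = k_B * (M / 1.67e-27)  # ~k_B per nucleon, order of magnitude

        print(f"S_BH  ~ {S_bh:.1e} J/K")   # ~1e56 J/K
        print(f"S_gas ~ {S_gas:.1e} J/K")  # ~2e35 J/K

    Twenty-odd orders of magnitude separate the two sides of the “gate”.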

    So this is my picture of our expanding universe: the initial radiation-dominated phase fills out to the maximum entropy that it can find, namely being spread very evenly. If there were no matter (or, more exactly, no finite chemical potential for a massive species) then this would be the equilibrium state for all time. But once the universe passes to being matter-dominated and matter can start to clump, the “gateway” is opened and entropy can start to increase, irreversibly, up to its “proper” black-hole-dominated value.

    With this in mind, the question that you and Sir Roger start with, namely “Why was the initial/early universe so low in entropy?” can be re-cast to “Why was the initial/early universe not ‘through the gate’?” or, why was it a radiation-dominated phase?

    Does this make sense? Let me know what you think.

    Best regards,

    Paul

    PS Related, but something of a side question: non-crazy people talk about the possibility of real, live micro-black holes being created — soon! — in LHC collisions if the universe really has large extra dimensions. If this were true, then wouldn’t the early universe have been thick with black holes whenever the temperature was above the TeV scale? Does this have any interesting implications for, say, baryogenesis in the thermal phase?

  • Aaron S.

    just a thought here…

    im a layman and an idiot but this thought makes some sense to me.

    perhaps entropy increases toward the future due to the number of possibilities.

    there is only one possibility in the past. we know what that is; the choices are small and randomness has no effect. yet as we look to the future, it is so chaotic and unpredictable, due to the number of random decisions of individual entities and random motions of matter, that the entropy would surely increase.

    am i making any sense?

  • KundryVolare

    ps: just so you know Aaron, “idiot” comes from the Greek, meaning “private person.”
    Me t00!
    the problem with quantum dynamics/physics is that there aren’t ENOUGH idiots…or maybe it’s just Feynman’s death, but i digress.

  • Pingback: The Lopsided Universe | Cosmic Variance

  • mo

    Sean wrote:

    The nice thing about stat mech is that almost any distribution function will work to derive the Second Law, as long as you don’t put some constraints on the future state. That’s why textbook stat mech does a perfectly good job without talking about the Big Bang. But if you want to describe why the Second Law actually works in the real world in which we actually live, cosmology inevitably comes into play.

    The explanation of entropy and the Second Law generally accepted by physicists and chemists relies on initial, not boundary conditions, and makes no assumptions about constraints on the future state, contrary to what you say–it is actually the other way around. It may not be self-contained, but it seems to be the most comprehensive one we have. It runs like this:

    1. The dynamical laws are indeed time symmetric, but they are to be supplemented by initial conditions. In real life, the initial conditions never correspond to pure states (e.g. accurately described by a wave function); they always correspond to clusters of states and are described by well-behaved probabilistic measures and density matrices. Statistically, all ‘impure’ states that can be prepared result in the needed time arrow. It would be an improbable statistical fit to prepare a fine-tuned state resulting in the reduction of entropy for a closed system (reverse time arrow: “evolving that microstate into the past will lead to an increase in entropy”).

    So there is no need “to impose a low-entropy condition in the past”; what is imposed is any reasonable initial condition without any reference to its entropy.

    2. By its very nature, entropy can be well defined only locally and can encapsulate only short-range interactions (like intermolecular forces). Gravity and all other long-range and global (T symmetry violating) interactions cannot be made to fit completely into the entropy framework and should be treated at least partially dynamically, as external forces, space-time curvature, or something like that.

    Why partially and not fully? This applies primarily to electromagnetic forces because they can induce considerable matter fields of statistical nature that can and should be treated thermodynamically. I can think of no comparable situation (perhaps black holes?) for gravity; however, I can recollect that Zel’dovich and Novikov in one of their books derive internal pressure caused by fluctuations of gravity in self-gravitating systems like dust clouds or large star clusters.

    P.S. I know that Hawking thinks that gravitational entropy is a global quantity and should not be localized, but I am not convinced.

    3. Equilibrium thermodynamics is not a good framework at all for the universe. Think in terms of irreversible thermodynamics (IT): entropy density rather than entropy, local entropy production, processes, fluxes, etc. IT is a thoroughly local theory that does not need a deus ex machina in the form of cosmology, and it applies perfectly well to the universe.

    I don’t see any reasons why we have to revise these fundamentals. What I sense is a confusion caused by an illegitimate attempt to extend thermodynamics to the entire universe.

  • Michael Cleveland

    An alternative interpretation of the Arrow:

    The arrow of time is implicit in the Lorentz-Fitzgerald-Einstein (LFE) contractions, and it’s curious no one seems to have noticed. Time and motion are complements. Any change of position in space corresponds to a change of position in time; the temporal change always in one direction only, from present to subsequent present, regardless of the direction of change in space, its variables subject only to the degree of motion (i.e., speed). The relationship between motion and time is given by the complementary ratios

    (v/c)^2 + (t_0/t)^2 = 1

    derived from the LFE time dilation equation

    t = t_0 / √(1 - (v/c)^2)

    Definitions

    A relative inertial rest frame is implicit in any reference herein to speed, velocity or motion.

    If a traveler takes a long trip, departing and returning at high average relativistic speed, he returns at a time x years in the future relative to his starting frame of reference at a cost of less than x years subjective time. This variable subjective interval can be reasonably described as travel forward in time or accelerated movement through time. Because this subjective cost is variable with spatial velocity, we will use the term “speed through time” for the sake of descriptive simplicity. Also, for simplicity, we will use the term “orientation in time” to describe the source of this variable temporal speed.

    All motion can be described in terms of v/c. Every value for v/c has a corresponding specific orientation in time, and a calculable speed through time (in terms of subjective interval).
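
    For concreteness, here is a tiny sketch of that calculation (mine), using only the relation above:

        # Sketch: "speed through time" t0/t for a given beta = v/c,
        # from (v/c)^2 + (t0/t)^2 = 1.
        import math

        def speed_through_time(beta):
            if not 0.0 <= beta < 1.0:
                raise ValueError("need 0 <= v/c < 1")
            return math.sqrt(1.0 - beta**2)

        for beta in (0.0, 0.5, 0.9, 0.99):
            print(f"v/c = {beta}: t0/t = {speed_through_time(beta):.4f}")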

    The range of possible values for v/c, from an asymptotic approach toward hypothetical absolute zero motion to the asymptotic approach to c, encompasses all possible motion. The ratio v/c cannot have a negative value, so there is no possible negative value for t–no possible negative interval (so long as v/c cannot be greater than 1). The arrow of time, therefore, is implicit, and founded in the fundamental geometry of space-time. This is easier to see intuitively if we note that the same is true for length and mass: in the corresponding LFE transformations, the requisite positive value for v/c limits the dimensions for mass or length to positive values (which we already accept intuitively). Hence one could say that the arrow of time is implicit in the length of the telephone pole outside your window or the heft of the keys in your pocket.

    Duration

    Because motion and time form an ontological whole, and because all frames of reference are in motion in relation to some other frames, and all motion has a calculable positive value for t, it is simpler and more reasonable to view duration as natural motion through time, as a complement to spatial motion, than to view past, present, and future as co-existent from some higher dimensional frame (which requires subsequently higher frames to provide for duration of structure–ad infinitum). Three-dimensional objects are just that, but they “endure” because they move with the observer through space and time. They don’t “have” a fourth dimension; they move through the fourth dimension.

    This has interesting implications. There is no past, and there is no pre-existing future, though the future can be treated as a destination, whereas the past, as a corporeal structure at least, does not exist at all. The Universe exists in a constantly moving local “present,” an infinitesimally thin 3-dimensional surface moving with a 4th dimensional vector. Memory, the physical state of the world, and all other record of the past exist as the effect of prior causative motion and interaction, which have both occurred at and led to the edge of the Universe, which is always “now.” Entropy is a marker, not a cause. And good-bye and good riddance to block time.

  • Michael Cleveland

    I shouldn’t try to do these things in the wee hours… Disregard the reference to negative v/c. The idea is correct; the verbiage is wrong. The only way to generate a negative interval is to achieve v/c > 1.

  • http://magicdragon.com Jonathan Vos Post

    I’m having a painful time trying to edit (with more Sean Carroll citations) a refereed conference paper which is more like a meandering blog thread or bull session about “arrow of time” theories by Bell, Gribbin, Laflamme, Hawking, Gold, Gell-Mann, Hartle, Hoyle, Bondi than a real paper. If I put it as is on arXiv, I’ll get some feedback, but at the expense of trashing my own dubious reputation and dragging down that of my full professor co-author. The blog thread in which this comment is embedded is fascinating. I especially like what John Baez and Blake Stacey bring to the table.

    J. Loschmidt, Sitzungsber. Kais. Akad. Wiss. Wien, Math. Naturwiss. Classe 73, 128–142 (1876)

    Wikipedia (on Loschmidt’s paradox) mentions: “One approach to handling Loschmidt’s paradox is the fluctuation theorem, proved by Denis Evans and Debra Searles, which gives a numerical estimate of the probability that a system away from equilibrium will have a certain change in entropy over a certain amount of time. The theorem is proved with the exact time reversible dynamical equations of motion and the Axiom of Causality. The fluctuation theorem is proved utilizing the fact that dynamics is time reversible. Quantitative predictions of this theorem have been confirmed in laboratory experiments at the Australian National University conducted by Edith M. Sevick et al. using optical tweezers apparatus.”

    Time and Classical and Quantum Mechanics and the Arrow of Time

    Philip Vos Fellman, Southern New Hampshire University

    Jonathan Vos Post, [at the time] Woodbury University

    About four years ago, Jonathan Vos Post and I began working on an economics project regarding competitive intelligence and, more broadly, information theory. This led us in what we think are interesting and occasionally novel areas of research. One aspect of this research led us in the direction of game theory, particularly the evolving research on the Nash Equilibrium, polytope computation and non-linear and quantum computing architectures.1 Another area where we found a rich body of emerging theory was evolutionary economics, particularly those aspects of the discipline which make use of the NK Boolean rugged fitness landscape as developed by Stuart Kauffman and the application of statistical mechanics to problems of economic theory, particularly clustered volatility as developed by J. Doyne Farmer and his colleagues at the Santa Fe Institute.2 Some of you may have heard us speak on these subjects at the recent International Conference on Complex Systems held in Boston.3 Other dimensions of this work come out of Jonathan’s experience as a professor of physics, astronomy and mathematics and his long association with Richard Feynman.

    In thinking about information theory at the quantum mechanical level, our discussion, largely confined to Jonathan’s back yard, often centers about intriguing but rather abstract conjectures. My personal favorite, an oddball twist on some of the experiments connected to Bell’s theorem, is the question, “is the information contained by a pair of entangled particles conserved if one or both of the particles crosses the event horizon of a black hole?”

    It is in that last context, and in our related speculation about some of the characteristics of what might eventually become part of a quantum mechanical explanation of information theory, that we first encountered the extraordinary work of Peter Lynds.4 This work has been reviewed elsewhere, and like all novel ideas, there are people who love it and people who hate it. One of the main purposes in having Peter here is to let this audience get acquainted with his theory first-hand rather than through an interpretation or argument made by someone else. In this regard, I’m not going to be either summarizing his arguments or providing a treatment based upon the close reading of his text. Rather, I will mention some areas of physics where, to borrow a phrase from Conan Doyle, it may be an error to theorize in advance of the facts. In particular, I should like to bring the discussion to bear upon various arguments concerning “the arrow of time.” In so doing, I will play the skeptic, if not the downright “Devil’s Advocate” (perhaps Maxwell’s Demon’s advocate would be more precise) and simply question why we might not be convinced that there is an “arrow” of time at all.

    Before I do this, however, I am going to cheat a bit and give you Peter’s abstract, in order to differentiate between some of the conventional notions of time as consisting of instants, and Peter’s explanation of time as intervals:5

    Time enters mechanics as a measure of interval, relative to the clock completing the measurement. Conversely, although it is generally not realized, in all cases a time value indicates an interval of time, rather than a precise static instant in time at which the relative position of a body in relative motion or a specific physical magnitude would theoretically be precisely determined. For example, if two separate events are measured to take place at either 1 hour or 10.00 seconds, these two values indicate the events occurred during the time intervals of 1 and 1.99999… hours and 10.00 and 10.0099999… seconds, respectively. If a time measurement is made smaller and more accurate, the value comes closer to an accurate measure of an interval in time and the corresponding parameter and boundary of a specific physical magnitude’s potential measurement during that interval, whether it be relative position, momentum, energy or other. Regardless of how small and accurate the value is made however, it cannot indicate a precise static instant in time at which a value would theoretically be precisely determined, because there is not a precise static instant in time underlying a dynamical physical process. If there were, all physical continuity, including motion and variation in all physical magnitudes would not be possible, as they would be frozen static at that precise instant, remaining that way. Subsequently, at no time is the relative position of a body in relative motion or a physical magnitude precisely determined, whether during a measured time interval, however small, or at a precise static instant in time, as at no time is it not constantly changing and undetermined. Thus, it is exactly due to there not being a precise static instant in time underlying a dynamical physical process, and the relative position of a body in relative motion or a physical magnitude not being precisely determined at any time, that motion and variation in physical magnitudes is possible: there is a necessary trade off of all precisely determined physical values at a time, for their continuity through time.
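
    (The claim in the middle of that abstract is simple enough to mechanize; a tiny sketch of my own, treating a reported value of finite precision as the half-open interval it labels:)

        # Sketch: a measured time value denotes an interval, not an instant.
        from decimal import Decimal

        def as_interval(reading: str):
            v = Decimal(reading)
            ulp = Decimal(1).scaleb(v.as_tuple().exponent)  # one unit in the last place
            return (v, v + ulp)

        print(as_interval("10.00"))  # (Decimal('10.00'), Decimal('10.01'))
        print(as_interval("1"))      # (Decimal('1'), Decimal('2'))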

    Having said this, let us now turn to some familiar, and perhaps some not so familiar arguments about “the arrow of time”. The first idea which I’d like to review comes from an article by John Gribbin on time travel, “Quantum time waits for no cosmos”. In his opening statement, Gribbin cites Laflamme, a student of Stephen Hawking:

    The intriguing notion that time might run backwards when the Universe collapses has run into difficulties. Raymond Laflamme, of the Los Alamos National Laboratory in New Mexico, has carried out a new calculation which suggests that the Universe cannot start out uniform, go through a cycle of expansion and collapse, and end up in a uniform state. It could start out disordered, expand, and then collapse back into disorder. But, since the COBE data show that our Universe was born in a smooth and uniform state, this symmetric possibility cannot be applied to the real Universe.

    Gribbin summarizes the arrow of time concept by noting:

    Physicists have long puzzled over the fact that two distinct “arrows of time” both point in the same direction. In the everyday world, things wear out—cups fall from tables and break, but broken cups never re-assemble themselves spontaneously. In the expanding Universe at large, the future is the direction of time in which galaxies are further apart.

    Many years ago, Thomas Gold suggested that these two arrows might be linked. That would mean that if and when the expansion of the Universe were to reverse, then the everyday arrow of time would also reverse, with broken cups re-assembling themselves.

    He then goes on to briefly summarize the “big crunch” theory of universal expansion and contraction, citing a version presented by Murray Gell-Mann and James Hartle. It is here that his account gets into trouble on a theoretical basis, because if Peter Lynds is correct in asserting that time does not flow, then the “arrow of time” is a purely subjective quantity flowing from neurobiological activity (as he indeed argues in “Subjective Perception of Time and a Progressive Present Moment: The Neurobiological Key to Unlocking Consciousness”). Empirically, while recent evidence appears to indicate that local inhomogeneities would prevent the temporal symmetry suggested by Gell-Mann and Hartle, there are a number of deeper issues whose empirical resolution is simply far beyond our present grasp.

    One area which raises some fairly troubling questions for any theory of universal temporal symmetry is the question of whether in the universe we inhabit, presently known physical constants have always had the values which we recognize today. A particularly intriguing argument has been recently advanced in this regard, claiming that the speed of light must have been greater during the very early history of the universe. Unfortunately, the experimental findings associated with that claim have not been duplicated by any other investigators, which casts rather serious doubts upon the validity of the claim.6 With better data on the fine structure of the universe, it may be possible to say something more meaningful in this regard.

    Central to the entire stream of reasoning about temporal symmetry and an arrow of time is the question, “what is the early history of the universe?” Specifically, this refers to the first 300,000 years or so after the big bang, which, as an empirical matter, we will not be able to address until we have the technology necessary to build at least the next generation of gravity wave detectors. No EM detectors can tell us anything useful about the first 300,000 years of the universe’s history, because baryons and photons had not yet decoupled and the universe was opaque to electromagnetic radiation. Ancillary questions, like “do stars precede galaxies or galaxies precede stars?” also have a significant bearing on the thermodynamic evolution of the universe.

    A better state of empirical knowledge about the time evolution of the system at the end of the universe would also be required to make any deeply meaningful arguments about an arrow of time and its reversal. For example, there’s what Jonathan refers to as the “Maxwell’s Demon Family Problem.” By this, we mean that at the end of the lifetime of the universe, there may be some unexpected emergent phenomena which would cause the time evolution of the system to behave in unexpected ways. Specifically, there might be emergent mechanisms for the transmission of information over increasing (rescaled) ranges of space-time. In this case, the entropy production at the event horizon of the universe (as a whole) becomes a significant contributor to the thermodynamic evolution of the system. As Freeman Dyson argues, even as the radius of the universe approaches infinity and the density approaches zero, its temperature does not approach zero, and therefore the nature of the “struggle” between entropy and order in a “big crunch” may be characterized by entirely different time evolutions of the system than those with which we are familiar.

    At the micro-level, if one looks at a small number of particles in a closed system, then there are other complications which challenge the validity of the entire “arrow of time” concept. As an analogy, we could think about randomly shuffling a deck of cards. The time evolution of the system is such that in far less time than one needs to exhaust even local sources of energy, any configuration of the cards can be duplicated with a probabilistic certainty of one. Over longer ranges, asymptotic limits and dimensionality become important. For example, with a random walk in one dimension, the state of the system returns to the origin with infinite frequency as t → ∞. There are no reflecting barriers; this is just a function of probability. In a random walk on a two-dimensional lattice, the randomly walking point will return to the origin with a probability of 1, though with an infinite expected return time. In three dimensions, the probability is roughly 1/3 that the random walk returns to the origin, and as the dimensionality increases, the probability of returning to the origin even over a random walk of infinite length converges on zero as a limit. What this kind of exercise tells us is that the nature of the statistical approach taken to model various dynamical processes will constrain the kinds of solutions which will appear.7
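
    These recurrence probabilities are easy to estimate empirically; a quick Monte Carlo sketch (mine, truncated at a finite number of steps, so the 1D and especially the slow-converging 2D estimates undershoot their true value of 1):

        # Sketch: chance that a simple random walk returns to the origin
        # within max_steps steps, in d = 1, 2, 3 dimensions.
        import random

        def returns(d, max_steps):
            pos = [0] * d
            for _ in range(max_steps):
                pos[random.randrange(d)] += random.choice((-1, 1))
                if all(x == 0 for x in pos):
                    return True
            return False

        trials, max_steps = 2000, 10000
        for d in (1, 2, 3):
            p = sum(returns(d, max_steps) for _ in range(trials)) / trials
            print(f"d = {d}: P(return within {max_steps} steps) ~ {p:.3f}")

    The d = 3 estimate lands near Pólya’s constant of about 0.34, the “roughly 1/3” quoted above.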

    Another problem with attempting to pin down a definitive arrow of time arises from the intrinsic characteristics of certain fairly common time series distributions. In economics we see this problem in clustered volatility. In a more general sense, any over-arching statements about an arrow of time would have to be able to satisfactorily explain heteroskedastic behavior, particularly at those troublesome beginning and end-points of the universe as an evolving dynamical system. In order to encompass this kind of behavior, one must be able to incorporate non-extensive statistical mechanics, which may or may not allow recovery of the standard Boltzmann expression. At a deeper level, it is likely that one has to deal with a series of non-commuting quantum operators. In this context not only do we lack the data necessary and sufficient for the drawing of conclusions about the arrow of time, but it is unclear whether our present methodology would support the “arrow of time” concept even if we had the data, should that data prove to be heteroskedastic in its distribution.

    To better understand the implications of heteroskedasticity, we can look again at the very early history of the universe (although the actual problem is that, at the moment, we cannot look there, but for the sake of argument, we will posit a hypothetical early evolution of the system). If any universal constants have changed since that time (something which we cannot know at the present time), those changes may have been both non-linear and heteroskedastic. In such a case, we might wish to say that the arrow of time then becomes ill-defined. Systems may progress through time evolutions which no longer represent extensive thermodynamics, and the directionality of any so-called arrow of time is no longer clear. This situation is then complicated by the fact that the universe has undergone several phase transitions by symmetry breaking. As a result, additional forces have emerged in each of these transitions. First, gravity separated out of the other forces, and it is for that reason that gravity wave detectors will be able to probe farther back in time than any other technology. Subsequently, electromagnetic, weak, and strong nuclear forces separated. Not only does the early history of the universe matter a great deal as to whether there is, in fact, an “arrow of time” with attendant temporal symmetry, but in the context of emergent properties, we cannot say for certain that there is no additional (i.e., a fifth) force which might separate out during the future evolution of the universe.

    In the same way, heteroskedastic behavior at the end of the lifetime of the universe might lead the system through what are presently considered to be extremely low probability states, where the concentration of “anti-entropic” behavior might be exceptionally high. While we have no a priori statistical basis for expecting such mechanics, the very existence of heteroskedasticity cautions us that we cannot rule such behavior out. With respect to phase transitions, the situation is even more complicated because with phase transitions, there are well known behavioral regularities associated with the state prior to the transition, and an entirely different set of behavioral regularities associated with the post-transition state. In between, of course, are the critical phenomena. However, phase transitions are hardly ever mentioned in the context of an “arrow of time”, because once again, from the “arrow of time” perspective, these distributions are extraordinarily ill-behaved and the so-called “arrow” itself becomes ill-defined.

    In closing, refinement of the big bang theory in recent decades (e.g., by Hoyle, Bondi and Gold) poses a number of deep challenges to the “arrow of time” metaphor. Present theory posits an initial state where the universe was very small and the constants very large, with an expansion of several orders of magnitude taking place over a relatively brief period of time (less than a second). Over the next ten billion years, the universe expanded but with a slight deceleration due to gravity. Then, some one to two billion years ago, the expansion of the universe began to accelerate again, and we do not know why. Is this heteroskedasticity? Is it a function of some kind of “arrow of time”? Nobody knows. Given our present state of cosmological ignorance, it would be, to say the least, premature to accept any generalized arguments about “the arrow of time”.

  • Michael Cleveland

    I’ve restated this more correctly and succinctly since I find that I’m better at translating ideas into words in the light of day than in the small hours of the night. My apologies for dragging this out, but it’s an idea that seems to have been overlooked in the discussions of time and the arrow of time, and it should at least be stated correctly so it can be evaluated on whatever merit or lack thereof it may have.

    The arrow of time is implicit in the Lorentz-Fitzgerald-Einstein (LFE) contractions. Time and motion are complements. Any change of position in space corresponds to a change of position in time; the temporal change always in one direction only, from present to subsequent present, regardless of the direction of motion in space, its variables subject only to the degree of motion (i.e., speed). The relationship between motion and time is given by the complementary ratios

    (v/c)^2 + (t_0/t)^2 = 1

    from the LFE time dilation equation

    t = t_0 / √(1 - (v/c)^2)

    * * * * *

    0 < v/c < 1

    While it might sometimes be awkward, all motion can be expressed in terms of v/c and the range of possible values for v/c encompasses all possible motion.

    Every value for v/c is associated with a unique positive (present toward future) temporal interval t (in relation to a designated rest frame interval t_0). There is no possible t that is not defined by some value of v/c . Since v/c cannot be greater than 1, there is no possible negative interval, so no possible negative time. Hence the arrow of time and its asymmetry are defined by the fundamental relationship between motion and time, and the science fiction concept of backward travel in time is dead (alas–a fate far worse for some of us than the consequences of murdering one’s own Grandfather).

    Duration

    Because motion and time are inseparably connected, and because all frames of reference are in motion in relation to some other frames, and all motion creates a calculable positive value for the interval t, it is simpler and more reasonable to view duration as natural motion through time, as a complement to spatial motion, than to view past, present, and future as co-existent from some higher dimensional frame which requires still higher frames to provide for duration of structure–ad infinitum. Three-dimensional objects are just that, but they “endure” because they move with the observer through space and time. They don’t “have” a fourth dimension; they move through (along?) the fourth dimension.

    This has interesting implications. There is no past, and there is no pre-existing future, though the future can be treated as a destination, whereas the past, as a corporeal structure, does not exist at all. The Universe exists in a constantly moving local “present,” an infinitesimally thin 3-dimensional surface with a 4th dimensional vector. All record of the past exists as the effect of prior causative motion and interaction which cumulatively form the present (and fleeting) state of the Universe. It might be valid to describe “now” as the edge of the universe, but it’s probably more accurate to state that “now” is the Universe (and there is nothing in that statement that is negated by issues of relativity and simultaneity–in fact the “now” Universe fits perfectly into those problems). If this is true, then Entropy is a motion-related marker, not a cause, and Block Time is a myth.

    When I wrote the original post the other night, I confess that I was deeply fascinated and distracted by the impossibility of negative values for length, mass, and motion (speed, to avoid confusion over negative vectors). I’m afraid that fascination, combined with a considerable degree of fatigue, translated into a rather illogical misstatement. I hope this puts it more clearly.
