Arrow of Time FAQ

By Sean Carroll | December 3, 2007 9:13 am

The arrow of time is hot, baby. I talk about it incessantly, of course, but the buzz is growing. There was a conference in New York, and subtle pulses are chasing around the lower levels of the science-media establishment, preparatory to a full-blown explosion into popular consciousness. I’ve been ahead of my time, as usual.

So, notwithstanding the fact that I’ve disquisitioned about this at great length and with considerable frequency, I thought it would be useful to collect the salient points into a single FAQ. My interest is less in pushing my own favorite answers to these questions than in setting out the problem that physicists and cosmologists are going to have to somehow address if they want to say they understand how the universe works. (I will stick to more or less conventional physics throughout, even if not everything I say is accepted by everyone. That’s just because they haven’t thought things through.)

Without further ado:

What is the arrow of time?

The past is different from the future. One of the most obvious features of the macroscopic world is irreversibility: heat doesn’t flow spontaneously from cold objects to hot ones, we can turn eggs into omelets but not omelets into eggs, ice cubes melt in warm water but glasses of water don’t spontaneously give rise to ice cubes. These irreversibilities are summarized by the Second Law of Thermodynamics: the entropy of a closed system will (practically) never decrease into the future.

But entropy decreases all the time; we can freeze water to make ice cubes, after all.

Not all systems are closed. The Second Law doesn’t forbid decreases in entropy in open systems, nor is it in any way incompatible with evolution or complexity or any such thing.

So what’s the big deal?

In contrast to the macroscopic universe, the microscopic laws of physics that purportedly underlie its behavior are perfectly reversible. (More rigorously, for every allowed process there exists a time-reversed process that is also allowed, obtained by switching parity and exchanging particles for antiparticles — the CPT Theorem.) The puzzle is to reconcile microscopic reversibility with macroscopic irreversibility.

And how do we reconcile them?

The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it’s a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.

Wasn’t this all figured out over a century ago?

Not exactly. In the late 19th century, Boltzmann and Gibbs figured out what entropy really is: it’s a measure of the number of individual microscopic states that are macroscopically indistinguishable. An omelet is higher entropy than an egg because there are more ways to re-arrange its atoms while keeping it indisputably an omelet, than there are for the egg. That provides half of the explanation for the Second Law: entropy tends to increase because there are more ways to be high entropy than low entropy. The other half of the question still remains: why was the entropy ever low in the first place?
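
Boltzmann’s counting is easy to see in a toy model. Here is a minimal sketch (plain Python; a set of two-state “coins” stands in for the egg and the omelet):

```python
from math import comb, log

# Toy system: N two-state "atoms" (coins). A macrostate is the number of
# heads; a microstate is the exact head/tail sequence. Boltzmann entropy
# (with k_B = 1) is the log of the number of microstates per macrostate.
N = 100

def entropy(heads):
    return log(comb(N, heads))

# "Egg-like" ordered macrostate: zero heads. "Omelet-like" mixed macrostate:
# half heads. Vastly more microstates look mixed than look ordered, so
# typical evolution carries the system toward the mixed macrostate.
print(entropy(0))   # ln(1) = 0: exactly one such microstate
print(entropy(50))  # about 66.8: roughly e^67 indistinguishable microstates
```

The asymmetry in the counting is the whole point: there is nothing dynamically special about the mixed macrostate, there are just overwhelmingly more ways to be in it.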

Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

Trust me, it is. Of course you don’t need to appeal to cosmology to use the Second Law, or even to “derive” it under some reasonable-sounding assumptions. However, those reasonable-sounding assumptions are typically not true of the real world. Using only time-symmetric laws of physics, you can’t derive time-asymmetric macroscopic behavior (as pointed out in the “reversibility objections” of Loschmidt and Zermelo back in the time of Boltzmann and Gibbs); every trajectory is precisely as likely as its time-reverse, so there can’t be any overall preference for one direction of time over the other. The usual “derivations” of the Second Law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past, and to understand that empirical feature of the universe we have to think about cosmology.

Does inflation explain the low entropy of the early universe?

Not by itself, no. To get inflation to start requires even lower-entropy initial conditions than those implied by the conventional Big Bang model. Inflation just makes the problem harder.

Does that mean that inflation is wrong?

Not necessarily. Inflation is an attractive mechanism for generating primordial cosmological perturbations, and provides a way to dynamically create a huge number of particles from a small region of space. The question is simply, why did inflation ever start? Rather than removing the need for a sensible theory of initial conditions, inflation makes the need even more urgent.

My theory of (brane gases/loop quantum cosmology/ekpyrosis/Euclidean quantum gravity) provides a very natural and attractive initial condition for the universe. The arrow of time just pops out as a bonus.

I doubt it. We human beings are terrible temporal chauvinists — it’s very hard for us not to treat “initial” conditions differently than “final” conditions. But if the laws of physics are truly reversible, these should be on exactly the same footing — a requirement that philosopher Huw Price has dubbed the Double Standard Principle. If a set of initial conditions is purportedly “natural,” the final conditions should be equally natural. Any theory in which the far past is dramatically different from the far future is violating this principle in one way or another. In “bouncing” cosmologies, the past and future can be similar, but there tends to be a special point in the middle where the entropy is inexplicably low.

What is the entropy of the universe?

We’re not precisely sure. We do not understand quantum gravity well enough to write down a general formula for the entropy of a self-gravitating state. On the other hand, we can do well enough. In the early universe, when it was just a homogeneous plasma, the entropy was essentially the number of particles — within our current cosmological horizon, that’s about 10^88. Once black holes form, they tend to dominate; a single supermassive black hole, such as the one at the center of our galaxy, has an entropy of order 10^90, according to Hawking’s famous formula. If you took all of the matter in our observable universe and made one big black hole, the entropy would be about 10^120. The entropy of the universe might seem big, but it’s nowhere near as big as it could be.
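
These numbers can be checked to order of magnitude with the Bekenstein–Hawking formula, S/k_B = 4πGM²/(ħc) for a non-rotating black hole (a back-of-the-envelope sketch; the ~10^22 solar-mass figure for the matter in the observable universe is a rough assumption for illustration):

```python
from math import pi, log10

# Bekenstein-Hawking entropy of a non-rotating black hole, in units of
# k_B: S = 4*pi*G*M^2 / (hbar*c). SI units throughout.
G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
M_sun = 1.989e30  # kg

def bh_entropy(mass_kg):
    return 4 * pi * G * mass_kg**2 / (hbar * c)

# The supermassive black hole at the galactic center, roughly 4 million
# solar masses: entropy of order 10^90.
print(log10(bh_entropy(4e6 * M_sun)))   # about 90
# All the matter in the observable universe (taking ~10^22 solar masses,
# a rough assumed figure) collapsed into one black hole: of order 10^120.
print(log10(bh_entropy(1e22 * M_sun)))  # about 121
```

Because the entropy scales as M², merging matter into ever-bigger black holes is an enormously effective way to raise it, which is why the black-hole numbers dwarf the plasma’s 10^88.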

If you don’t understand entropy that well, how can you even talk about the arrow of time?

We don’t need a rigorous formula to understand that there is a problem, and possibly even to solve it. One thing is for sure about entropy: low-entropy states tend to evolve into higher-entropy ones, not the other way around. So if state A naturally evolves into state B nearly all of the time, but almost never the other way around, it’s safe to say that the entropy of B is higher than the entropy of A.

Are black holes the highest-entropy states that exist?

No. Remember that black holes give off Hawking radiation, and thus evaporate; according to the principle just elucidated, the thin gruel of radiation into which the black hole evolves must have a higher entropy. This is, in fact, borne out by explicit calculation.

So what does a high-entropy state look like?

Empty space. In a theory like general relativity, where energy and particle number and volume are not conserved, we can always expand space to give rise to more phase space for matter particles, thus allowing the entropy to increase. Note that our actual universe is evolving (under the influence of the cosmological constant) to an increasingly cold, empty state — exactly as we should expect if such a state were high entropy. The real cosmological puzzle, then, is why our universe ever found itself with so many particles packed into such a tiny volume.

Could the universe just be a statistical fluctuation?

No. This was a suggestion of Boltzmann’s and Schuetz’s, but it doesn’t work in the real world. The idea is that, since the tendency of entropy to increase is statistical rather than absolute, starting from a state of maximal entropy we would (given world enough and time) witness downward fluctuations into lower-entropy states. That’s true, but large fluctuations are much less frequent than small fluctuations, and our universe would have to be an enormously large fluctuation. There is no reason, anthropic or otherwise, for the entropy to be as low as it is; we should be much closer to thermal equilibrium if this model were correct. The reductio ad absurdum of this argument leads us to Boltzmann Brains — random brain-sized fluctuations that stick around just long enough to perceive their own existence before dissolving back into the chaos.
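
The exponential suppression of large fluctuations is easy to quantify in a toy equilibrium system (a sketch with N coins; for this system the relative probability of a fluctuation is exactly exp(−ΔS)):

```python
from math import comb, log, exp

# N fair coins in "equilibrium": the most likely macrostate is N/2 heads.
# The probability of instead finding k heads, relative to the peak, is
# W(k)/W(N/2) = exp(-(S_max - S(k))), with S = ln(multiplicity).
N = 1000

def S(k):
    return log(comb(N, k))

S_max = S(N // 2)

for k in (480, 450, 400):   # progressively larger downward fluctuations
    dS = S_max - S(k)       # entropy deficit of the fluctuation
    print(k, dS, exp(-dS))  # relative probability drops exponentially in dS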

Don’t the weak interactions violate time-reversal invariance?

Not exactly; more precisely, it depends on definitions, and the relevant fact is that the weak interactions have nothing to do with the arrow of time. They are not invariant under the T (time reversal) operation of quantum field theory, as has been experimentally verified in the decay of the neutral kaon. (The experiments found CP violation, which by the CPT theorem implies T violation.) But as far as thermodynamics is concerned, it’s CPT invariance that matters, not T invariance. For every solution to the equations of motion, there is exactly one time-reversed solution — it just happens to also involve a parity inversion and an exchange of particles with antiparticles. CP violation cannot explain the Second Law of Thermodynamics.

Doesn’t the collapse of the wavefunction in quantum mechanics violate time-reversal invariance?

It certainly appears to, but whether it “really” does depends (sadly) on one’s interpretation of quantum mechanics. If you believe something like the Copenhagen interpretation, then yes, there really is a stochastic and irreversible process of wavefunction collapse. Once again, however, it is unclear how this could help explain the arrow of time — whether or not wavefunctions collapse, we are left without an explanation of why the early universe had such a small entropy. If you believe in something like the Many-Worlds interpretation, then the evolution of the wavefunction is completely unitary and reversible; it just appears to be irreversible, since we don’t have access to the entire wavefunction. Rather, we belong in some particular semiclassical history, separated out from other histories by the process of decoherence. In that case, the fact that wavefunctions appear to collapse in one direction of time but not the other is not an explanation for the arrow of time, but in fact a consequence of it. The low-entropy early universe was in something close to a pure state, which enabled countless “branchings” as it evolved into the future.

This sounds like a hard problem. Is there any way the arrow of time can be explained dynamically?

I can think of two ways. One is to impose a boundary condition that enforces one end of time to be low-entropy, whether by fiat or via some higher principle; this is the strategy of Roger Penrose’s Weyl Curvature Hypothesis, and arguably that of most flavors of quantum cosmology. The other is to show that reversibility is violated spontaneously — even if the laws of physics are time-reversal invariant, the relevant solutions to those laws might not be. However, if there exists a maximal entropy (thermal equilibrium) state, and the universe is eternal, it’s hard to see why we aren’t in such an equilibrium state — and that would be static, not constantly evolving. This is why I personally believe that there is no such equilibrium state, and that the universe evolves because it can always evolve. The trick, of course, is to implement such a strategy in a well-founded theoretical framework, one in which the particular way in which the universe evolves is by creating regions of post-Big-Bang spacetime such as the one in which we find ourselves.

Why do we remember the past, but not the future?

Because of the arrow of time.

Why do we conceptualize the world in terms of cause and effect?

Because of the arrow of time.

Why is the universe hospitable to information-gathering-and-processing complex systems such as ourselves, capable of evolution and self-awareness and the ability to fall in love?

Because of the arrow of time.

Why do you work on this crazy stuff with no practical application?

I think it’s important to figure out a consistent story of how the universe works. Or, if not actually important, at least fun.

CATEGORIZED UNDER: Science, Time
  • andy.s

    It’s 9:06 am and Internet Explorer says you published this at 9:13 am, violating the Arrow of Time.

    Before causality re-asserts itself, I’ve got a question: does a wave function of one particle have an entropy? i.e., if a particle is in a superposition of states is there an entropy associated with it?

    And if it does, and the entropy gets eliminated when the particle is measured, where does it go?

  • rooshi

    Thank you for that… I have been pondering over Entropy and the Second Law in a very layman role (my engineering degree notwithstanding) for a very long time now.

    You helped clear up several niggling doubts I had about the fundamental meaning and the macroscopic implications of the second law.

  • Anonymous

    I still don’t understand why CP violation (and hence T violation) can’t play a role in this. You present the problem as a problem with time reversal, then you mysteriously say that CPT (and not just T) is what matters. Why?

  • http://www.pieter-kok.staff.shef.ac.uk PK

    Very nice post, indeed!

    I also have the same question as Anonymous: why does thermodynamics care about CPT, rather than T alone?

    To andy.s:

    “does a wave function of one particle have an entropy? i.e., if a particle is in a superposition of states is there an entropy associated with it?”

    Yes it does. For pure states (which include the coherent superpositions) the entropy is zero, while for mixed states “rho”, the (von Neumann) entropy S is given by:

    S(rho) = -tr[rho ln(rho)]

    The von Neumann entropy is a measure of our ignorance of the state rho. For pure states there exists a basis in which a measurement always gives the same outcome, which is why in this case S=0.
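
PK’s formula can be checked numerically (a quick sketch, assuming numpy is available; not part of the original comment):

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -tr[rho ln(rho)], computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # convention: 0*ln(0) = 0
    return float(-np.sum(evals * np.log(evals)))

# Pure superposition (|0> + |1>)/sqrt(2): zero entropy, as PK says
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
pure = np.outer(psi, psi)
print(von_neumann_entropy(pure))   # zero (up to rounding)

# Maximally mixed qubit (a classical 50/50 coin): S = ln(2)
print(von_neumann_entropy(np.eye(2) / 2.0))  # ln 2, about 0.693
```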

  • Khurram

    Hi. Good post Sean!
    You write:

    1. The usual “derivations” of the second law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past.

    I can understand why entropy was actually lower in the past.
    But I don’t get how the second law “if taken at face value” implies the opposite: that entropy should be higher in the past?

    2. Also, what role does gravity play in the arrow of time?
    Does gravity decrease entropy into the future due to gravitational attraction/clumping of matter?

    Thank you

  • Pingback: Arrow of time and origin of universe « Entertaining Research

  • http://countiblis.blogspot.com Count Iblis

    T can be an exact unbroken symmetry of Nature despite CP violation. This is explained in this article.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    About T-violation and the arrow of time: the simple answer is that the weak interactions are perfectly unitary, even if they are not T-invariant. They don’t affect the entropy in any way, so they don’t help with the arrow of time.

    A bit more carefully: if you did want to explain the arrow of time using microscopic dynamics, you would have to argue that there exist more solutions to the equations of motion in which entropy grows than solutions in which entropy decreases. But CPT invariance is enough to guarantee that that’s not true. For any trajectory (or ensemble of trajectories, or evolution of a distribution function) in which the entropy changes in one way, there is another trajectory (or set…) in which the entropy changes in precisely the opposite way: the CPT conjugate. Such laws of physics do not in and of themselves pick out what we think of as the arrow of time.

    People talk about the “arrow of time of the weak interactions,” but ask yourself: in which direction does it point? There just isn’t any direct relationship to entropy.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Khurram, it’s not the Second Law that predicts the entropy was higher in the past — it’s the logic underlying attempts to derive the Second Law without explicit reference to a low-entropy boundary condition in the past.

  • Jasper vH

    I was wondering if you might know if there is a quantum version of the Fluctuation Theorem.
    I know that in classical systems it can quantify (under certain assumptions) the probability of the entropy flowing opposite to the direction that is stated by the second law.

    Most Fluctuation Theorem articles are written by D.J. Evans if you want to look it up.

  • http://www.pieter-kok.staff.shef.ac.uk PK

    “the weak interactions are perfectly unitary, even if they are not T-invariant”

    Of course! Thanks Sean.

  • Thomas

    Re: andy.s, PK:

    S(rho) = -tr[rho ln(rho)]

    (give or take a factor of k_B…)

    To andy.s – this is surprising if you think of a wavefunction represented in a basis – |psi> has lots of coefficients, which can take on many different values: so why isn’t there entropy in the coefficients? The resolution is that the basis representation is not meaningful by itself – you can easily change it without changing the system, by rotating the basis by a unitary U. In the absence of any preferred basis, there is no meaning to rotating the state, or equivalently rotating the observer; the thing stays the same no matter how you look at it.

    (In the Copenhagen picture) measurement breaks this symmetry; a measuring process involves a preferred basis, the eigenbasis of the observable. (Say you’re measuring a spin component – you *choose* which way is spin-up, and this breaks the symmetry of there being no preferred direction.) So in that context, the coefficients *relative to this one basis* suddenly become meaningful, informationful. If there is a superposition, then the measurement is non-deterministic, so we’ve now got *probabilities* to work with. In the density matrix formalism PK brought up, this is the mixed state ρ (rho) which goes in the von Neumann formula. It’s really nothing more than the Shannon or Gibbs entropies applied to possible measurement outcomes.

    So yes, the coefficients do contain entropy – when you measure them!

    Thomas S.

  • Aaron Sheldon

    The problem is with the CPT theorem, or rather its assumption that translation along timelike geodesics can be represented by a real parameterization of a group of complex unitary operators of the form exp(iHt). Only in a limited set of manifolds can time translation be represented by this unitary group.

    For more general sets of manifolds (curved spacetime) the representation of time translation is not a simple parameterized unitary group; in fact it neither has a simple parameterization nor does it contain unitary operators at all. However, each of these groups is dense on a characteristic group of unitary operators, corresponding to the fundamental Hamiltonian operator of the manifold, which can then be mapped to the stress-energy of the manifold.

    In more general manifolds the bijective nature of translations along timelike geodesics is lost, so that the operators are no longer invertible, but still have unit Banach norm. This is only possible for linear operators on infinite-dimensional Hilbert spaces.

  • http://backreaction.blogspot.com/ B
  • andy.s

    …if you think of a wavefunction represented in a basis – |psi> has lots of coefficients, which can take on many different values: so why isn’t there entropy in the coefficients?

    Yeah, that’s what I was wondering.

    The resolution is that the basis representation is not meaningful by itself – you can easily change it without changing the system, by rotating the basis by U. In

    If I measure a spin in the z-basis, it can be up or down, but in the x-basis it’s
    (up + down)/sqrt(2)

    In the density matrix formalism PK brought up, this is the mixed state ρ (rho) which goes in the von Neumann formula. It’s really nothing more than Shannon or Gibbs entropies applied to possible measurement outcomes.

    So yes, the coefficients do contain entropy – when you measure them!

    OK, I need to read up more on density matrices to sort that out. My question did actually relate to the topic of this thread, but I need to learn a bit more about the subject to ask it properly.

  • John Merryman

    This still doesn’t answer a question I keep raising about time; Does time cause change/motion, or does change/motion cause time? If it is the former, then we are traveling along this dimension of time from past events to future ones, but if it is the later, then as change/motion adjusts circumstances, former ones are replaced by current ones, so it is the illusion of dimension going from future potential to past circumstance. Much as tomorrow becomes yesterday, as the earth rotates relative to the sun.

  • TimG

    Hi Sean. I find this arrow of time stuff really fascinating — kudos on the excellent FAQ. A few questions:

    (1) Weren’t you the one who talked in a previous post about how not everything has to “happen for a reason” when we’re talking about the universe as a whole (even though within the universe we have a notion of cause and effect)? So why would we think there would be a reason for the universe to start in a low entropy state, as opposed to that being “just the way it is”?

    (2) If you come up with an idea for why the universe starts in a low entropy state (and I know you’ve suggested some ideas in this vein), is there any hope for an empirical test? What could such a test possibly look like? Or are we just hoping to find a theory that’s so “elegant” that the scientific community accepts it without experimental proof?

    (3) One thing that’s always confused me about entropy: As you say, it’s the number of microstates that constitute a macrostate (or really the log of that number). But isn’t the definition of a macrostate somewhat dependent on us? I mean, if we developed a new kind of experiment that could distinguish two previously indistinguishable microstates, they wouldn’t be the same macrostate anymore, right? In that case, does the second law mean entropy will increase regardless of how we define the macrostates (as long as we keep those definitions consistent), or only for some preferred assignment of microstates to macrostates?

  • tyler

    A superb and very useful post, much appreciated. I particularly applaud your nuanced explanation of questions for which different interpretations offer different answers. That’s always a good thing.

    Penrose…no matter where I turn in learning about physics, that guy is there, with some brilliant, difficult idea that’s radically different from the way other experts see it. An interesting character, to say the least.

  • tyler

    TimG, #3 is a great question, expresses well something I’ve never been able to verbalize.

  • randall

    posts like these save the internet from us all. very informative, many thanks!

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    TimG, let’s take a stab at your questions:

    1) It’s certainly possible that a low-entropy initial condition is just the way the universe is, and I’m careful to emphasize that possibility in talks and papers. But it’s also possible that it’s a clue to a deeper explanation. The fact that some things “just are” doesn’t necessarily mean that it’s always clear which things those are.

    2) At the moment I don’t know of any experimental tests. But my aim is a bit lower: I just would like to have at least one model that is consistent both with what we know about the laws of physics and with what we observe about the universe. If we had more than one such model, experiment would be necessary to decide between them.

    3) How we do the coarse-graining to define which microstates are macroscopically equivalent is a classic question. My personal belief is that the choices we make to divide the space of states up into “equivalent” subspaces are not arbitrary, but are actually determined by features of the laws of physics. (For example, the fact that interactions are local in space.) The project of actually turning that belief into a set of rigorous results is far from complete, as far as I know.

  • http://tyrannogenius.blogspot.com Neil B.

    As I argued before, “time” as we experience it is not mathematically modelable anyway. Sure, there are diagrams plotting things as a function of time, world lines, etc., but let’s get real (heh): that’s like a tinkertoy construction sitting on a table, not like something we “move” along, with a “present” that we live in. (Pls. don’t blithely reach for “illusion” talk, OK? Yeah, maybe, but don’t make it so easy for yourself…)

    One weird thing about time-reversibility: Suppose I could intervene in a time-reversed world W’. I could deflect a bullet that (to me) had popped out of a tree it “hit”, and then, instead of reentering the gun barrel, it would smack into maybe some other tree that it shouldn’t be “coming out of” from the point of view of W’. That would be weird, and it would ruin the whole “past” of W’. Well, we think our own past has already happened, so what if (if time flow really is relative) some Being did that to us — how could it possibly alter our past? Food for thought. I figure, worlds either can’t be intervened in from the outside, or time flow is absolute.

    Also, remember that if you believe the wave function is “real”, then time flow is preferred: a WF expands out from an emission point and then “vanishes” when absorbed – that would look wrong if emission and absorption were interchanged.

    “tyrannogenius”

  • Low Math, Meekly Interacting

    Cool post! Thanks Dr. Carroll!

    I’ve read in several places that the “wavefunction of the universe”, one that satisfies the Wheeler–DeWitt equation, anyway, is essentially atemporal. The universe just “is”, and talk about “initial” and “final” and everything in between needn’t apply. I’ve also read it somehow follows that time is an “emergent” property of this wavefunction.

    To put it very crudely, is it reasonable to conclude that we can only shed our “temporal chauvinism” by ditching time altogether? I’m completely dumbfounded by the notion of an “emergent” anything in the absence of a temporal measure by which I can determine something has “emerged” from something else. But there is this “emergent time” idea out there that I’ve encountered, and I wonder if you can comment on it!

  • http://pantheon.yale.edu/~pwm22/ Peter Morgan

    Why should Physics want to explain a contingent fact about a particular initial condition? Given an initial condition on a spacelike hypersurface as a mathematical model for the universe, we can determine in which direction a dynamics (supposed here to be deterministic, whether applied to a quantum or to a classical state) causes the entropy to increase or to decrease. We can determine the answer to the same question on different time-scales and perhaps obtain different answers, but still we would get a graph of the evolution of entropy over time. We could also determine the answer to the same question for other dynamics and perhaps again obtain different answers.
    Put differently, the arrow of time is determined by initial conditions, together with other contingent facts; “explain” the initial conditions and you’ve explained the arrow of time, but “explain” the arrow of time and you have explained one bit of the total initial conditions.

    You may not get this second comment, but I worry about foundational arguments that invoke entropy extensively, partly because entropy is not a Lorentz invariant quantity. The Lorentz invariant quantum fluctuations of the vacuum make no contribution, for example. Entropy is the thermodynamic dual to thermal fluctuations, what is the thermodynamic dual to quantum fluctuations? Since an accelerating observer sees thermal fluctuations where an inertial observer sees only quantum fluctuations (the Unruh effect), presumably entropy is different also for relatively accelerating observers (and presumably also for observers in different gravitational environments).
    All the best with your metaphysics, nonetheless.

  • Low Math, Meekly Interacting

    I guess I could pose my question more succinctly: If one has the explanation for the emergence of time per se, is it reasonable to expect one might get the arrow of time “for free”?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Folks, please don’t repost long comments from other threads.

    Bee, I’ll fix the venue of the conference.

    Low Math, the emergence of time from quantum gravity is certainly an interesting problem. It’s not clear whether it has important implications for the evolution of entropy — maybe, maybe not, one would have to make some explicit construction. It’s hard to see how, as it remains true that the “early” universe is in a very special state, nowhere near equilibrium, and rapidly evolves into something else.

  • Pingback: Seed's Daily Zeitgeist: 12/4/2007 - General Science

  • http://wonka.physics.ncsu.edu/~tmschaef/ thomas

    I have no doubt that the question “Why is the entropy of the universe what it is?” is interesting. But I don’t think it is correct to suggest that there is a deep mystery behind the second law.

    The Zermelo/Loschmidt type objections (“How can you get the T-violating Boltzmann equation from T-reversal invariant dynamics?”) were already correctly answered by Boltzmann himself. The Boltzmann equation involves a suitable limiting process (N → ∞, etc.), and is a statistical statement, correct for “almost all” initial conditions.

    You can do a computer experiment: pick initial conditions for N billiard balls and evolve forward in time. You will find that the entropy increases with time. Then you stop the computer, reverse all momenta, and evolve forward. Now you find that entropy decreases. Why? The T-reversed initial conditions are very special: they involve subtle correlations that “remember” the low-entropy initial state.
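
The experiment thomas describes can be reproduced in stripped-down form (a sketch: non-interacting particles in a 1-D box stand in for colliding billiard balls, with a coarse-grained entropy over spatial bins):

```python
import random
from math import log

# Free particles in a 1-D box with reflecting walls. The dynamics is
# exactly time-reversible, yet a coarse-grained entropy rises from a
# special (clustered) initial condition -- and falls right back down
# if you stop, reverse all the momenta, and run forward again.
random.seed(1)
N, L, DT, STEPS, BINS = 2000, 1.0, 0.01, 300, 20
x = [random.uniform(0.0, 0.05) for _ in range(N)]  # all in one corner
v = [random.uniform(-1.0, 1.0) for _ in range(N)]

def coarse_entropy():
    counts = [0] * BINS
    for xi in x:
        counts[min(int(xi / L * BINS), BINS - 1)] += 1
    return -sum(c / N * log(c / N) for c in counts if c)

def step():
    for i in range(N):
        x[i] += v[i] * DT
        if x[i] > L: x[i], v[i] = 2 * L - x[i], -v[i]
        if x[i] < 0: x[i], v[i] = -x[i], -v[i]

s_start = coarse_entropy()           # low: particles clustered
for _ in range(STEPS): step()
s_mixed = coarse_entropy()           # high: near ln(BINS), about 3
v = [-vi for vi in v]                # stop and reverse all momenta
for _ in range(STEPS): step()
s_reversed = coarse_entropy()        # low again: the reversed state
print(s_start, s_mixed, s_reversed)  # "remembers" its special origin
```

As Sean’s reply below the comment emphasizes, the reversed momenta are not generic: they are exactly as finely tuned as the clustered starting state they reproduce.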

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    thomas, I’m afraid that’s just not right, or at least dramatically misleading. For “almost all” initial conditions, you are in thermal equilibrium, and the entropy doesn’t change at all. The number of initial conditions for which the entropy increases is exactly the same as the number for which it decreases. In fact they are in one-to-one correspondence, given by CPT conjugation.

    As you say, the “initial” conditions you get by starting with a low-entropy state, evolving it to high entropy, and taking the T-inverse are indeed very special. In fact, they are precisely as special as the low-entropy conditions you started with in the first place.

    The way you can get the T-violating Second Law from T-invariant dynamics is to have T-violating boundary conditions, in particular a low-entropy state near the Big Bang. We still don’t know why the universe is like that.

  • WhatMeWorry

    I don’t understand this: “our universe would have to be an enormously large fluctuation”. I thought you guys were working to simplify everything to an equation or two, or a concept or so, something truly elemental. So why must the appearance of that fundamental thing require an enormously large fluctuation? Why wouldn’t it require just an everyday (so to speak) burp? I know what I want to ask, but maybe didn’t succeed. Pardon me in advance.

  • Aaron F.

    Nice post! One question: if causality were assumed to be a fundamental law of nature, would the arrow of time still be a problem? I’m thinking of things like Erik Zeeman’s paper “Causality Implies the Lorentz Group” and Ambjørn, Jurkiewicz, and Loll’s causal dynamical triangulation approach, both of which seem to ride pretty far on little more than the assumption of causality.

  • Chris W.

    Aaron F.,

    Your mention of Zeeman’s paper led me to a more recent and quite interesting review paper (albeit fairly dense technically) in which his result* is discussed:

    Algebraic and geometric structures of Special Relativity (math-ph/0602018)

    (* arrived at independently by A.D.Alexandrov)

  • Chris W.

    PS: The connection to causal sets (Sorkin and collaborators) is also worth mentioning. I believe the essential content of Zeeman’s result plays a central role in causal set theory.

  • Not Required

    “Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

    Trust me, it is…”

    I do trust you. In fact, I’m completely amazed that there are people who doubt this. But I think that this point is a major reason for the widespread failure to see how important all this is. Perhaps you could expand on this part of the FAQ? What is the reason behind this disastrous misunderstanding of the Second Law?

  • http://www.jessemazer.com Jesse M.

    thomas wrote:
    You can do a computer experiment: pick initial conditions
    for N billiard balls and evolve forward in time. You will
    find that the entropy increases with time. Then you stop
    the computer, reverse all momenta, and evolve forward.
    Now you find that entropy decreases. Why? The T-reversed
    initial conditions are very special, they involve subtle
    correlations that “remember” the low entropy initial
    state.

    If your simulation is based on reversible classical laws which satisfy Liouville’s Theorem (which basically says that the dynamics conserve volume in phase space over time), then the T-reversed initial conditions are no more or less special than the original initial conditions which caused the entropy to increase. To put it another way, if you picked your initial conditions randomly using a uniform probability distribution on the entire phase space, then the probability that your initial condition would have some lower entropy S and then evolve to a state with a higher entropy S’ would be precisely equal to the probability that your initial condition would have the higher entropy S’ and evolve to a lower entropy S in the same amount of time.

  • Aaron F.

    Thanks, Chris W.! The review paper you linked looks really interesting; I’m definitely saving it for future reference!

  • Jason Dick

    Sean,

    I’ve been wondering about this for a little while, and I’m really confused as to why you state that it makes no sense for the universe to be a quantum fluctuation out of equilibrium. Now, granted, I certainly have not thought about this as much as you have, but I have yet to understand why. Here is my really basic picture:

    Consider two different systems. One is composed of many particles, the other few. The system composed of many particles will necessarily experience only minuscule departures from equilibrium, while the system of few particles will experience much larger departures. It’s not really unexpected at all to find a tiny region of the universe where the entropy is very small at any given time. So if a random fluctuation out of equilibrium is to produce a region of the universe like our own, then it makes the most sense that such a random fluctuation will be a small-scale fluctuation: it cannot require a large fluctuation out of equilibrium over a large volume. But from this small volume, a massive volume must be generated.

    This seems, at least on the surface, to perfectly describe inflation: inflation can be started when a particle field with the right properties obtains a nearly uniform value over a minuscule region of space, and from this minuscule region of space, a massively large region can be generated, with massively higher entropy than could have been in the original patch if it were in equilibrium before inflation began.

    But, unfortunately, I don’t see that this picture says anything at all about the arrow of time.

  • Gavin Polhemus

    Sean,

    Inflationary models say that our observable universe was in thermal equilibrium before inflation, which is why the universe is so isotropic and homogeneous. Thermal equilibrium is the highest entropy state given the constraints on the system. In this case the constraints include the universe’s size, which was very small before inflation. Is the question of why the universe started out in a low entropy state equivalent to the question of why the universe started out so small? If we could explain the initial smallness, would we be done?

  • BlackGriffen

    There is a preferred basis for calculating thermodynamic entropy, I suspect, and it’s the only one I’ve ever heard used – the eigenbasis of the Hamiltonian, or states of definite energy. What makes this basis special? Well, I can think of a couple of hand-wavy arguments for what would make this basis special. In no particular order: the study of thermodynamics centers on systems in some kind of equilibrium, and in quantum mechanics that means the eigenstates of the Hamiltonian (for example, the condition for zero entropy change during a process is that the system always be infinitesimally close to equilibrium, so they are obviously related concepts); the other observables of which the entropy is a function (like volume) are usually not considered as quantum observables but as classical ones, even if the system is exchanging them with a bath (like when a weight sits atop a movable piston); and because systems that are in “thermal contact” are normally considered to be exchanging energy/entropy. Defining thermodynamic entropy this way also has the advantage that, at least for bound states, you’re working with a discrete basis, so you don’t have oddities like negative information, even if it pops up only in theory.

    That is, as far as I can tell, the only thing that distinguishes thermodynamic entropy from Shannon style information entropy.
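The basis dependence is easy to exhibit in a toy calculation (an illustrative sketch, not a claim about any particular physical system): a pure state has exactly zero von Neumann entropy, but the Shannon entropy of its populations in the eigenbasis of a randomly chosen Hermitian “Hamiltonian” is generically nonzero.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8

# A random pure state: its von Neumann entropy is zero.
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
evals = np.linalg.eigvalsh(rho)
evals = evals[evals > 1e-12]          # drop numerical zeros
s_vn = -np.sum(evals * np.log(evals))

# "Diagonal" entropy of the same state in the eigenbasis of a
# random Hermitian matrix (our stand-in Hamiltonian): nonzero.
h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (h + h.conj().T) / 2
_, U = np.linalg.eigh(H)
pops = np.abs(U.conj().T @ psi) ** 2  # populations in the energy basis
pops = pops[pops > 1e-12]
s_diag = -np.sum(pops * np.log(pops))

print(s_vn, s_diag)
```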

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Jason, what you describe is something like what Jennifer Chen and I proposed. A fluctuation leading to inflation is a promising way to get something like our universe. However, it can’t be in equilibrium. If it were, every process would happen just as frequently as its time-reversal, and low-entropy fluctuations are vastly preferred.

    Gavin, the universe was certainly not in thermal equilibrium before inflation. If it were, it wouldn’t evolve into something else. At best, the matter degrees of freedom were close to equilibrium, but that’s not very relevant when gravity is so important.

    Note also that the small size of the universe is not an a priori constraint, it’s part of what needs to be explained. Why was the universe so small?

  • http://www.mycupoftea.se Magnus Borgh

    Jasper vH asked:

    I was wondering if you might know if there is a quantum version of the Fluctuation Theorem.
    I know that in classical systems it can quantify (under certain assumptions) the probability of the entropy flowing opposite to the direction that is stated by the second law.

    I have skimmed the thread, and didn’t see any answer to this question. My apologies if I missed something.

    The answer (to the best of my knowledge anyway) is that this is an open question in current research in quantum thermodynamics and statistical physics. I know of at least one research group working on finding the quantum-mechanical corrections to the fluctuation theorem.

  • http://quthoughts.blogspot.com Joe

    Hi Sean,

    I really enjoyed the FAQ, so thanks for that. I was hoping that you might be able to answer a quick question for me:

    I’m a little concerned about how entropy is defined here. While the entropy of a pure state is 0, and for a mixed state non-zero, I’m not entirely convinced that’s a good measure for what we observe. In order to calculate the entropy of a state, we need information about the full state. If our measurements of entropy are in some sense local, then entanglement in the state of the universe will lead to a non-zero entropy being measured (despite the fact that the actual entropy is 0). Over sufficiently long time scales you could still see periodic behaviour, but you would certainly see extended periods when the entropy grows from 0.

    So I was wondering, how do you overcome the difference between some kind of local observation of ‘entropy’ and the actual entropy of the universe in this work?

    Thanks!
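The gap Joe describes between global and local entropy shows up already in the simplest possible case (a two-qubit toy sketch): a Bell state is globally pure, with entropy zero, yet either qubit on its own is maximally mixed, with entropy log 2.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): globally pure, locally mixed.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

rho = np.outer(psi, psi)                  # global density matrix, pure
# Partial trace over the second qubit: rho_A[i,j] = sum_k rho[(i,k),(j,k)]
rho_a = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

def entropy(r):
    """von Neumann entropy -Tr(r log r)."""
    p = np.linalg.eigvalsh(r)
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

print(entropy(rho), entropy(rho_a))
```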

  • Jason Dick

    Jason, what you describe is something like what Jennifer Chen and I proposed. A fluctuation leading to inflation is a promising way to get something like our universe. However, it can’t be in equilibrium. If it were, every process would happen just as frequently as its time-reversal, and low-entropy fluctuations are vastly preferred.

    Okay, I went and found the two papers you two co-authored that are referenced on the arxiv and skimmed them. So it sounds like you are saying something very similar to my vague idea. But I’m still not understanding something. In gr-qc/0505037 you state:

    The entropy of the proto-inflationary patch, then, is fantastically smaller than the entropy of our current Hubble volume, or even than that of our comoving volume at early times before there were any black holes. This is in perfect accord with the Second Law of Thermodynamics, since the entropy is increasing. But it is hard to reconcile with the idea that we should find an appropriate proto-inflationary patch within the randomly fluctuating early universe. If we are randomly choosing conditions, it is much easier to choose high-entropy conditions than low-entropy ones; hence, it would be much more likely to simply find a patch that looks like our universe today than to find one that was about to begin inflating.

    This point is somewhat counterintuitive, and worth emphasizing. Despite their vast differences in size, energy, and number of particles, the proto-inflationary patch and our current universe are two configurations of the same system, since one can evolve into the other. There are many more ways for that system to look like our current universe than to be in a proto-inflationary configuration.

    Later you invoke fluctuation out of de Sitter space to fix the problem, so that the low entropy density of de Sitter space means that the small, low-entropy fluctuation is favored over the large, high-entropy fluctuation.

    What I don’t understand is why you need to resort to the properties of what this region is fluctuating out of to make it depend upon the size of the eventual region? Intuitively I would expect that low-volume fluctuations would be pretty strongly preferred no matter the previous state.

  • SLC

    I am probably well out of date here but it was my understanding that it was found in the 1960s that CP was violated in weak interactions. If this is the case, then T must also be violated in order for CPT to be conserved.

  • John Merryman

    Sean,

    Folks, please don’t repost long comments from other threads.

    Sorry about that. I did it to clarify the question I asked in #16. Whether time is caused by motion, or motion is caused by time. I realize the standard assumption is that motion is an effect of the dimension of time, but the only explanation I can get from anyone is Jason saying that’s how the equations are written.

    It seems the choice is between time as dimension being real and change being an illusion, or change being real and the dimension of time being an illusion. I realize I don’t have much in the way of complex mathematics to support my position, but out here in the reality I live in, change is real and the dimension of time is a chain of narrative to be distilled out of the general chaos, so it seems to me that change causes time. Instead of physical reality traveling along this dimension from past to future, it creates it and events go from future potential to past circumstance.

  • ObsessiveMathsFreak

    Are these really the kinds of questions physicists should concern themselves with? The arrow of time sounds like a distinctly metaphysical argument. Shouldn’t science concern itself with observables?

  • http://cvjugo.blogspot.com cvjugo

    My apologies in advance for this layman’s question. Does probability itself exist because of the Arrow of Time?

  • Pingback: Less than a Week Left « blueollie

  • http://countiblis.blogspot.com Count Iblis

    Is it possible to have a more or less T-symmetrical situation about a minimum of the entropy? I.e. could that piece of the universe that underwent inflation if you run it forward also undergo inflation if you run it backward?

    In the backward running universe the observers will, of course, experience time evolution in the opposite global direction as we do. So, you just have two sectors glued together by the low entropy state. Observers in both sectors will point to the same low entropy patch as the origin of their universe.

  • http://physicsmuse.wordpress.com/ Sandy

    You can turn an omelet into an egg if you feed it to a chicken. Isn’t the concept of a closed system artificial? Unless the universe is a closed system. A cup falling off the counter is not a closed system. In open systems there are both increases and decreases in entropy. When asking why the underlying laws of physics can be run forward and backward in time, but not macroscopic behavior, I am not sure what you are referring to. Some actions at the macroscopic level can be computed forward and backward in time without difficulty, though we don’t observe them that way. But, at the quantum level, we probably do observe them going both ways? Are you comparing observables to observables, or computations to computations?

    That interactions at the quantum scale can be run forward and backward in time without any problem, indicates that relationships between quantum entities are outside time as time is experienced at the macroscopic level. That conclusion also applies to other activities at the quantum level, such as entanglement. So, to me that is the question, why quantum relationships escape the arrow of time constraints the rest of us have. Using cosmology and initial conditions isn’t enough of an explanation because that was also the initial conditions for the quantum entities.

  • http://magicdragon.com Jonathan Vos Post

    Excellent thread! See also:

    Philip Vos Fellman, Jonathan Vos Post, “Time and Classical and Quantum Mechanics and the Arrow of Time” WP# 01-2005-01, Ongoing Research Papers and Conference Proceedings of the International Business Department, Southern New Hampshire University,
    IBML Working Papers Series.

    Paper presented at the annual meeting of the North American Association for Computation in the Social and Organizational Sciences, Carnegie Mellon University, June 27-29, 2004.

    Abstract: In thinking about information theory at the quantum mechanical level, our [the authors'] discussion, largely confined to Jonathan’s back yard, often centers about intriguing but rather abstract conjectures. My personal favorite, an oddball twist on some of the experiments connected to Bell’s theorem, is the question, “Is the information contained by a pair of entangled particles conserved if one or both of the particles crosses the event horizon of a black hole?” It is in this context, and in our related speculation about some of the characteristics of what might eventually become part of a quantum mechanical explanation of information theory, that we first encountered the extraordinary work of Peter Lynds. This work has been reviewed elsewhere, and like all novel ideas, there are people who love it and people who hate it. One of the main purposes in having Peter here is to let this audience get acquainted with his theory first-hand rather than through an interpretation or argument made by someone else. In this regard, I’m not going to be either summarizing his arguments or providing a treatment based upon the close reading of his text. Rather, I will mention some areas of physics where, to borrow a phrase from Conan Doyle, it may be an error to theorize in advance of the facts. In particular, I should like to bring the discussion to bear upon various arguments concerning “the arrow of time.” In so doing, I will play the skeptic, if not the downright “Devil’s Advocate” (perhaps Maxwell’s Demon’s advocate would be more precise) and simply question why we might not be convinced that there is an “arrow” of time at all.

  • http://magicdragon.com Jonathan Vos Post

    As to Time being an illusion, albeit a persistent one, before Einstein we had McTaggart.

    John McTaggart Ellis McTaggart [1866-1925] was a Fellow of Trinity College, Lecturer in Moral Sciences, and a Nonreductionist. He was the author of “Studies in Hegelian Cosmology. The Philosophy of Hegel” [Dissertation, 1898; 1901; Garland, 1984]. This work explored application of a priori conclusions derived from the investigation of pure thought to empirically-known subject matter; human immortality; the absolute; the supreme good and the moral criticism; punishment; sin; and the conception of society as an organism. McTaggart was controversial for claiming that time was unreal: “The Nature of Existence” [Cambridge University Press, 1921]; “The Unreality of Time” [Mind, vol. XVII].

  • CarlN

    “Why was the entropy of the early universe so small?” Because it started from nothing (zero entropy). “Why was the size of the early universe so small?” Because it started from nothing.
    :-)

  • efp

    Something I’ve been wondering: is there a relativistic definition of entropy, and of the second law (i.e., something that can be expressed in terms of invariants)? I’m having trouble even deciding what form it would take. It doesn’t even seem to me like the state of an extended system can be a frame-independent concept. References would be welcome.

  • Pingback: A Waste-Book · My del.icio.us bookmarks for December 4th

  • http://www.cthisspace.com Claire

    “The arrow of time is hot, baby”

    Too right!

    Hya Sean and others,

    Talking of time, well a while ago when I first posted here, I came out with something like, “There’s nothing like the real world” (my first post here) and you know what? That’s all I said!

    I am re-introducing myself to say: I have followed this blog for a while and I like what I see, if you know what I mean, then that’s all right with me mate! I am not going to post an awful lot, just read.

    I am one of those arm chair idiots who, having studied it at an elementary level a few years ago, ends up reading about physics and science as a hobby (but I am actually in love with it really).

    One good thing to look at is the physics of the brain with regard to the arrow of time. Are there any time arrows in the brain? (I could use more complex wording, but…) To what extent could SR be temporally oriented?

    So, now I am just wondering, where has the arrow of time gone while posting here…

    …ah, got it, it’s just here!

    Yours

    Claire

  • Jason Dick

    You can turn an omelet into an egg if you feed it to a chicken.

    Nope, doesn’t work: the egg that the chicken makes won’t be made from the molecules that made up the original egg. Rather, some components of the omelet will be made use of for making the egg, some will be used for other metabolic purposes. Many components of the omelet will pass through the chicken undigested (since a chicken’s digestive system didn’t evolve to digest eggs). Some components of the new egg will come from other food sources.

  • http://wonka.physics.ncsu.edu/~tmschaef/ thomas

    Regarding Loschmidt (“How can you get the T-violating Boltzmann
    equation from T-reversal invariant dynamics?”), see comments
    28, 29 (Sean), 35 (Jesse):

    The difference between low entropy initial conditions and
    time-reversed initial conditions (evolve low S forward, then
    time reverse) is that the former are robust (stable against
    small perturbations: noise, error, loss of information), while
    the latter are extremely sensitive to small perturbations. In
    terms of Liouville’s theorem, the former occupy a smooth volume
    in phase space (stable under coarse graining) while the latter
    live in a very highly filamented part of phase space (not stable
    under coarse graining).
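This fragility of the time-reversed state can be seen directly in a toy simulation (free-streaming particles in a periodic box, an illustration only): reverse the momenta but add a small velocity error, and the coarse-grained entropy never comes back down.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, dt, steps = 2000, 1.0, 0.01, 1000

def coarse_entropy(x, bins=20):
    """Shannon entropy of the coarse-grained cell occupancy."""
    counts, _, _ = np.histogram2d(x[:, 0], x[:, 1], bins=bins,
                                  range=[[0, L], [0, L]])
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log(p))

x0 = rng.uniform(0.0, 0.1, size=(N, 2))   # low-entropy corner start
v = rng.normal(0.0, 1.0, size=(N, 2))

x = x0.copy()
for _ in range(steps):                    # evolve to high entropy
    x = np.mod(x + v * dt, L)

# Time-reverse, but add a small velocity error (5% of the thermal
# speed): the delicate correlations that "remember" the low-entropy
# past are destroyed, and the entropy stays high.
v_back = -v + rng.normal(0.0, 0.05, size=(N, 2))
for _ in range(steps):
    x = np.mod(x + v_back * dt, L)

s_start, s_end = coarse_entropy(x0), coarse_entropy(x)
print(s_start, s_end)
```

With the noise term removed, the same run returns to the corner and the starting entropy; the low-entropy initial condition, by contrast, reaches equilibrium with or without the perturbation.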

    In his post, Sean allows for the fact that you can “derive it [the
    2nd law] under some reasonable-sounding assumptions” but goes on
    to say that these “reasonable-sounding assumptions are typically
    not true of the real world”. I fail to understand what this means;
    in my kitchen these assumptions appear to be satisfied, and I think
    my kitchen is pretty typical of the real world.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    thomas, of course there is a difference between the two sets of states: after all, one is low-entropy, and one is high-entropy! But they are equally likely; they occupy precisely equal volumes in phase space. That’s just Liouville’s theorem. A randomly-chosen microstate is equally likely to be in either set.

    The thing that is untrue in your kitchen is the set of assumptions used to prove the H-theorem, not the Second Law itself. In particular, there certainly are correlations between momenta of the molecules in your kitchen — precisely those correlations that reflect the system’s lower-entropy past, as you yourself just explained. The reason why the Second Law works is not because molecular chaos is a valid description of the molecules in your kitchen, it’s because there is a low-entropy boundary condition in the past. It’s easy to “derive” the Second Law by making assumptions that aren’t true, even if the law itself is.

    Sorry to harp on you, but you are emphasizing exactly the mistakes that many people have been making for many decades, and we should have moved past them by now.

  • Tony

    Sean,

    Do you know of any good review papers on this topic?

    -Tony

  • http://www.fqxi.org/community Anthony A.

    Count Iblis:

    Yes, I think a model in which the entropy is a minimum at some ‘time’, then increases in both (coordinate) time directions away from this — so that observers see the AOT pointing away in both ‘directions’ — is a very interesting one. This is in fact part of the core of what Sean thinks (as I understand it; the other part being that the maximum possible entropy of the universe is infinite, so that it can and does increase indefinitely without reaching equilibrium). For an extensive discussion of this idea and models that employ it, you may want to look at this review article that I just posted.

  • http://pantheon.yale.edu/~pwm22/ Peter Morgan

    efp:
    No, entropy is not Lorentz invariant, nor is temperature, to which it is the thermodynamic dual. What is Lorentz invariant, in quantum field theory, is the quantum vacuum; the question of whether there really are quantum fluctuations is problematic, but if we take it that there are quantum fluctuations as well as thermal fluctuations, we should be able to introduce a Lorentz invariant “quantum entropy” as a thermodynamic dual to Planck’s constant — which on the view I take in my (journal published, see my web-site) papers is a measure of quantum fluctuations, just as temperature is a measure of thermal fluctuations.
    If we introduce independent measures of Lorentz invariant and Lorentz non-invariant entropy, the ways in which they affect measurement of physical processes when both measures of entropy are non-zero are non-trivial.
    See also the second comment in my post 24 above, which did not lead to further discussion at the time.
    If you find references to a non-trivial Lorentz invariant definition of entropy, please let me know. I don’t know of any, and referees have not yet pointed out any either (though my papers have probably not yet talked about entropy explicitly enough to excite referee comments on the existing literature that I ought to have read).

  • Brett

    Sean-

    If Thomas’s argument is fallacious, then so is yours. Here’s why: Consider the specific state of our universe; what is its entropy? The answer is zero, because the entropy of any completely characterized state is zero. In this regard, our universe is exactly as likely as what you would call a “high entropy universe.” So our universe is no less likely than any other. You are identifying our particular universe as unlikely because it is part of a macroscopic ensemble that, when measurements are coarse-grained, has low entropy. But once you introduce coarse-graining, Thomas’ stability argument is absolutely correct. The coarse graining contains information about what basis you are using to characterize the entropy. It’s true that Thomas’ coarse graining presupposes that there is lower entropy in the past, but so does yours. The “correct” coarse graining to use is really determined by what uncertainties there are in our measurement procedures, and the existence of such procedures is crucially tied to the low entropy of the past.

    There is a deep question here about the difference between statistical/informational entropy and thermodynamic entropy. The subtle distinction between them has come to the fore recently in discussions of whether there is a fundamental upper bound on the entropy in terms of the viscosity for any substance. The answer to the question is, for the Boltzmann entropy, “no.” A system can have its entropy made arbitrarily high by adding new uncertainties in its composition. However, the statistical entropy derived from this is not the same as the thermodynamic entropy that Clausius would have used to characterize the system.

    I don’t think either Sean or Thomas is completely right or completely wrong, but both are prating too much about the elephant.

  • TimG

    Sean, thanks for your answers (post 21) to my questions (post 17). With regard to (1) and (2), I guess I would think that so long as “That’s just the way it is” is a possible answer, we’d need an experimental test to distinguish any model from the possibility that there is no underlying explanation.

    That is, it’d be nice if we could show some evidence that that model was a better explanation than just saying that’s the way it is. Otherwise, why believe in the model? As you point out in your posts on religion, saying “If A is true it would explain B” is only a good argument for believing A if we have a reason to believe B should have an explanation.

    Of course, I suppose you have to find the model before you can figure out how to test it — or at least find some general features such a model should have that are testable.

  • John Merryman

    Claire,

    One good thing to look at is the physics of the brain with regard to the arrow of time. Are there any time arrows in the brain?

    From one neophyte to another, the arrow of time for the brain is from past events to future ones, while the arrow of time for the mind, since it records these events, is from future potential to past circumstance. Think in terms of how fast what we write recedes into the past….

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Brett and thomas– I think I must not be making myself clear, because the claim I am making is so obviously true that I can’t believe anyone can both understand it and disagree with it. I am not claiming that the Second Law isn’t true, or that entropy doesn’t increase in our kitchens. I am not claiming that Boltzmann’s assumption of molecular chaos (no correlations between particle momenta) doesn’t allow you to derive the H-theorem. I am not claiming that the Boltzmann equation, or the entire apparatus of kinetic theory, doesn’t do a good job at explaining real physical phenomena.

    What I am claiming is that Boltzmann’s assumption of molecular chaos, which is used in the proof of the H-theorem, is not true in the real world. (Or similar statements concerning other attempts to “derive” the second law.) There certainly are correlations in particle momenta, and everyone agrees that there are — if there weren’t, the entropy would have been higher in the past. Which it wasn’t.

    Boltzmann’s arguments “work” (in the sense that you derive equations that seem to correctly predict the behavior of real gases) because there is no special boundary condition in the future. But that doesn’t mean that his arguments are “right,” in the sense of providing the actual reason why entropy increases.

    Entropy increases because it was very low in the early universe. Molecular chaos is completely beside the point.

  • Brett

    Sean-

    I specifically said that you weren’t completely wrong. It’s absolutely true that entropy is increasing because it was low to start with. But your argument that boundary conditions with entropy decreasing are just as natural as those with entropy increasing doesn’t parse. Entropy is entirely a product of coarse graining, and there is not just one possible coarse graining. Rather, how we coarse grain is a product of what measurements we can make–what information we can extract from a system.

    If we were Boltzmann brains, the very fact that we were extracting information would mean we would always coarse grain in a fashion to make it appear that entropy is increasing. Since we are not such ephemeral fluctuations, there really is a question of why entropy was so low to begin with–or why some coarse grainings are strongly preferred. But you can’t sweep the issue Thomas raises under the rug by claiming that your preferred coarse graining is natural while his preferred boundary initial condition is not.

  • http://skepticsplay.blogspot.com/ miller

    If all physical laws are time-reversible, then how can black holes have an event horizon, a point of no return?

  • Pingback: Sean’s experimental science in a space he can’t access « Society with Jimmy Crankn

  • http://www.gregegan.net/ Greg Egan

    Miller (#68):

    In the spacetime geometry of a black hole, at the event horizon the only timelike vectors that point radially outwards also point backwards in time (in the sense defined as “backwards” for the external universe). So to escape from a black hole, you either have to travel along a spacelike vector (i.e. travel faster than light), or you have to travel backwards in time — which doesn’t violate any physical laws, but is essentially impossible for thermodynamic reasons. (By “travel backwards in time”, I don’t mean jump in some magic machine and emerge in the past, I mean experience everything along your world line backwards, remembering what other people consider to be the future. This is physically possible in principle, but the environmental boundary conditions make it impossible in practice.)

    The equations of general relativity also permit a complete time-reversal of the black hole spacetime, known as a white hole, in which you would have no choice at the event horizon but to travel outwards. The reasons there are (very probably) only black holes rather than white holes in our universe are ultimately thermodynamic in nature, related to all the other aspects of the arrow of time discussed on this thread.

  • http://www.jessemazer.com Jesse M.

    Greg Egan wrote:
    The reasons there are (very probably) only black holes rather than white holes in our universe are ultimately thermodynamic in nature, related to all the other aspects of the arrow of time discussed on this thread.

    Is it guaranteed to be true that a “white hole” would have to look like the reverse of a black hole in all respects, including quantum phenomena like Hawking radiation? Obviously it must be possible to have such a completely time-reversed black hole just by T-symmetry, but I wonder if there are clear arguments in “white hole thermodynamics” that would rule out a different kind of white hole that was increasing the entropy of the region it was sitting in rather than decreasing it. If it was possible to have an entropy-increasing white hole then you’d need additional arguments to explain why we don’t see any, but if a white hole would require photons from its surroundings to converge on its event horizon as time-reversed Hawking radiation, then I suppose the absence of white holes could then be explained on thermodynamic grounds alone.

    Thinking along these lines, it’s interesting to consider the argument made by Neil B. in post #22 about “intervening” in a time-reversed world (to make this slightly less fantastical, consider a giant supercomputer simulation of a given universe in which we run it for a while, take some later state and then reverse the momenta of every particle, then evolve the simulation forward and see the arrow of time running backwards–what happens if you then perturb the simulation at some point during its evolution?) If we made such an intervention in the neighborhood of a white hole, then presumably the perturbation would cause the arrow of time outside the white hole to flip to the “normal” direction again, but what would happen to the white hole itself? Our perturbation couldn’t possibly have any effect on anything inside its event horizon, since nothing can enter a white hole event horizon, so everything inside the horizon would presumably carry on in its usual time-reversed fashion, and the white hole would continue to spit out matter rather than pull it in, yet outside the white hole we wouldn’t see time-reversed Hawking radiation. This would seem to be an argument in favor of the notion that you could have an entropy-increasing white hole, but obviously it’s not too rigorous so I’m not sure.
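    A toy numerical version of this reversal experiment can be run with any exactly reversible map. In the sketch below (an editor-added illustration, not anything from the thread), Arnold’s cat map on an integer torus stands in for the reversible microscopic dynamics, and the log of the number of occupied coarse-graining cells stands in for entropy: coarse entropy rises under forward evolution and drops back to zero when the exact inverse map is applied. Because the points here do not interact, the sketch cannot show the perturbation cascade central to the thought experiment; perturbing one point spoils only that point’s history.

```python
import math

N = 120     # lattice size: positions live on an N x N integer torus
CELL = 12   # coarse-graining cell size (so there are 10 x 10 coarse cells)

def cat(p):
    """Arnold's cat map: an exactly reversible, area-preserving map."""
    x, y = p
    return ((2 * x + y) % N, (x + y) % N)

def cat_inv(p):
    """Exact inverse of the cat map (the 'reversed momenta' evolution)."""
    x, y = p
    return ((x - y) % N, (-x + 2 * y) % N)

def coarse_entropy(points):
    """log(number of occupied coarse cells): a crude Boltzmann-style entropy."""
    cells = {(x // CELL, y // CELL) for x, y in points}
    return math.log(len(cells))

# Low-entropy initial condition: a compact block of "gas" in one coarse cell.
start = {(x, y) for x in range(12) for y in range(12)}

state = start
for _ in range(6):                    # forward evolution: block filaments out
    state = {cat(p) for p in state}
spread_S = coarse_entropy(state)      # coarse-grained entropy has gone up

for _ in range(6):                    # apply the exact inverse: history rewinds
    state = {cat_inv(p) for p in state}

print(spread_S > coarse_entropy(start))  # True: forward run raised entropy
print(state == start)                    # True: exact reversal recovers the block
```

    The reversed leg is itself a trajectory whose coarse-grained entropy decreases at every step, consistent with Sean’s point elsewhere in the thread that reversible microdynamics pairs every entropy-increasing trajectory with an entropy-decreasing one.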

  • http://www.gregegan.net/ Greg Egan

    Jesse (#71):

    My original comment about white holes being improbable was meant on a purely classical level. One problem for a classical white hole (in our own particular universe) is explaining where it’s supposed to have come from — did it just appear as part of the Big Bang?

    I know almost no QFT, so my guesses about Hawking radiation and white holes could be way off the mark, but for what it’s worth my hunch is that the external universe’s arrow of time is part of the reason a black hole emits Hawking radiation, and that a white hole in our universe would actually emit thermal radiation of the same temperature. Temperature is unchanged by time-reversal, so a time-reversed black hole should yield a white hole of the same temperature — and unless we’re time-reversing the whole universe, there’s no reason for the white hole to be lowering the entropy of its surroundings.

    The Wikipedia article on White holes attributes an argument that sounds a bit like this to Hawking, going even further and suggesting that (once you account for quantum effects) “black holes and white holes are the same object”. I haven’t read what Hawking actually wrote, but this seems to be implying that the whole distinction between emitted Hawking radiation and infalling objects is a thermodynamic one, tied to the arrow of time in the external universe.

    Hopefully an expert who actually knows about this stuff will comment …

  • http://www.geocities.com/aletawcox/ Sam Cox

    Greg’s comments are very interesting.

    From a conceptual viewpoint, and I believe in fact as well, Wikipedia’s comment that “black holes and white holes are the same object” is completely correct.

    The key is the geometry of the system, and the coordinates of observation one happens to select within that geometry. We measure particulate existence within 3 spaces; however, in an absolute sense, the observer feels himself/herself to be at the center of the geometry, and sees only “inward” and “outward”, both at 360 degrees…from the extreme macroscopic to the sub-microscopic.

    What is a “Big Bang” at the astronomical antipode, becomes “photons” at the sub-microscopic antipode. Supermassive “Black Holes” at the macroscopic antipode become massed singular space at the quantum Planck Realm level of scale, but the entire universal system is interlinked and quasi-static…it is permanently existing.

    What we observe as “Time” is probably a general, extremely gravitationally time-dilated proper time pulse of the universe. Hawking said: “The universe just is”. Einstein said: “Time is an Illusion”. I don’t completely agree with Einstein…I think Hawking said it a little better, because time and existence, even if they are “illusory”, are VERY real…not philosophical at all…unless freezing to death or dying in an airplane crash are “philosophical”. From a quantum perspective, the universe’s existence depends on its observation.

    A last comment on entropy: The application of photons…electromagnetic energy…to the biosphere of the Earth has resulted in the development of organic informational complexity. Thus, we can observe the influence of submicroscopic white holes right here on Earth having a localized downward effect on entropy. However, in the part of the universe we observe, the general drift of thermal entropy is upward while informational entropy (complexity, both inorganic and organic) decreases.

    It is very important we NOT regard the universe as a “void”…devoid of complexity except at certain very limited coordinates. Particle groups, baryonic diversity and proportion, as well as the behavioral characteristics of baryonic matter, are kinds of informational complexity which uniformly pervade the universe from one side to the other…and make possible observational organic complexity’s very existence.

    A very interesting thread!

  • http://tyrannogenius.blogspot.com Neil B.

    Greg, Jesse, Sam, or anyone: What do you think of the thought experiments I put forth in #22? That sort of macroscopic what-if makes you think about the question, if time flow could even be reversible in principle, then how can we have definitely “been through” a real past? (Aside from how exactly we can know it.)

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Sorry, I’ve been away from the internet for a while. Brett, I don’t know exactly what you mean by “the issue Thomas raises,” so I don’t know how to respond. Thomas seemed to object to the claim that the origin of the 2nd Law is to be found in low-entropy initial conditions rather than in a natural tendency of trajectories to increase in entropy, but you seem to agree with that, so I’m confused. There is no such natural tendency, since no matter what coarse-graining you choose, there is an equal number of trajectories that decrease their entropy and trajectories that increase their entropy. (Where the entropy of a state is defined by its macroscopic equivalence class under the coarse-graining, which is a perfectly sensible thing to do.)

    The issue of “who decides how we coarse-grain” is of course an interesting one, but I don’t think it’s directly relevant here. As a matter of practice, people do not choose weird coarse-grainings in which an ice cube melting in water decreases in entropy, although of course they could. In the coarse-graining that everyone actually uses, our early universe had a very low entropy — much lower than it needed to have, by any known criterion — and that’s a fact that needs to be explained by cosmology. I would personally bet that our notion of the “most useful” coarse-graining can be derived as a consequence of the Hamiltonian of the world, but I haven’t been following the research along those lines.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    About white holes: they are just the time-reversal of black holes, and the two are definitely not the same thing, since black holes are not symmetric under time reversal (even when we take Hawking radiation into account). It’s correct to say that the reasons we find black holes but not white holes are ultimately thermodynamic in origin.

    Think about a real astrophysical black hole. It is born and evolves by having matter dumped into it. Ultimately it evaporates away via Hawking radiation, a thermal bath with temperature inversely proportional to the hole’s mass. So a white hole would be formed from an *inward* flux of thermal radiation — you would see nothing if you looked at the forming white hole, but you would see thermal particles coming mysteriously from the outside universe in a spherically symmetric configuration. The radiation would start out high-temperature, and gradually cool. Then the white hole would start spitting out highly non-thermal matter. All along the entropy would be decreasing.
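    Sean’s statement that the Hawking temperature is inversely proportional to the hole’s mass can be made quantitative with the standard semiclassical formulas T_H = ħc³/(8πGMk_B) and t_evap ≈ 5120πG²M³/(ħc⁴), the latter being the usual leading-order evaporation estimate that ignores particle species and greybody factors. A quick sketch in SI units (an editor-added illustration, not part of the thread):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J s
k_B = 1.3807e-23   # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

def hawking_temperature(M):
    """T_H = hbar c^3 / (8 pi G M k_B): heavier holes are colder."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    """Leading-order evaporation time, 5120 pi G^2 M^3 / (hbar c^4)."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(hawking_temperature(M_sun))      # ~6e-8 K, far colder than the CMB
print(hawking_temperature(2 * M_sun))  # half as hot: T goes as 1/M
print(evaporation_time(M_sun) / YEAR)  # ~2e67 years
```

    So a solar-mass black hole today absorbs more CMB radiation than it emits; the evaporation Sean describes only gets underway once the surrounding universe has cooled below the hole’s temperature, as Greg notes below.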

  • http://www.gregegan.net/ Greg Egan

    Neil

    In principle a universe might contain regions obeying different arrows of time, and still obey the same microscopic laws that we’re familiar with, but the bottom line is consistency: you can’t “change” anyone’s “past” if that really is their past, or you’re simply making contradictory claims about what happened at the relevant time and place. (Well, you could have a many-worlds structure that makes some kind of sense of that, but I’m talking classically.)

    I don’t know of any rigorous results on this, but I expect that regions obeying different arrows would necessarily be separated by borders that obeyed no arrow at all, and that people who were time-reversed with respect to each other couldn’t actually survive in each other’s environments. It’s fun to day-dream about scenarios where time-reversed people come into contact, and the kind of havoc that would play with their notion of free will … but like most time-travel scenarios, in reality you either have to “split” the universe and allow multiple histories, or simply accept that consistency rules and that crossing from one arrow to the other would most likely just be fatal. The one thing that’s certain is that a woman from Planet Clockwise couldn’t wander freely around Planet Anticlockwise like an actor blue-screened into a backwards-playing movie, watching eggs unscramble, while the locals witnessed her actions having the same comical effects. And even if you could find a physically possible history of the universe that looked like that, I suspect it would be incredibly rare and special among all universes with multiple arrows, most of which would instead have isolated pockets obeying their distinct arrows of time.

  • http://www.jessemazer.com Jesse M.

    Sean wrote:
    Think about a real astrophysical black hole. It is born and evolves by having matter dumped into it. Ultimately it evaporates away via Hawking radiation, a thermal bath with temperature inversely proportional to the hole’s mass. So a white hole would be formed from an *inward* flux of thermal radiation

    Would a white hole necessarily need to have such an inward flux of radiation? The original concept of a white hole was just a T-reversed black hole in a description based only on GR, right? So have there been any analyses of quantum field theory in the curved spacetime of a white hole to show that it would require such time-reversed Hawking radiation?

    As an analogy, you could in principle write a description of the orbits of planets in our solar system in GR terms, and the time-reversed version of this would also be a valid GR solution–but I think it would be completely permissible to have a solar system that looked like a time-reversed version of ours in terms of its GR description (all the moons and planets orbiting in opposite directions and so forth), yet which would have a “normal” arrow of time at the level of things like solar radiation. So I wonder if there are any rigorous physical arguments showing that you couldn’t have something that behaved like a time-reversed black hole in its own GR description (spitting matter and energy out of the event horizon, with nothing being able to enter) but which had a normal arrow of time in terms of Hawking radiation and other details.

    Along these lines, what do you think of the thought-experiment I suggested in post #71? In that post I imagined a giant supercomputer simulation of a black hole which is so detailed that it simulates every particle in its neighborhood, including all the photons of Hawking radiation (with the simulation’s rules perhaps based on some yet-undiscovered theory of quantum gravity), where we then take some later state of the simulation and reverse all the particle’s momenta as well as whatever else needs to be reversed in order to get a perfect time-reversed version of the original simulation’s run. This should result in a simulated white hole, but what if we now perturb the initial conditions of the simulation slightly, in a region outside the event horizon? Wouldn’t the perturbation eventually cause the arrow of time outside the hole to flip back to increasing-entropy (so that you would no longer see random photons from throughout space converging on the hole as time-reversed Hawking radiation), yet since the perturbation can’t affect anything inside the event horizon, wouldn’t the object continue to behave like a white hole, spitting matter and energy out rather than pulling it in? Shouldn’t this also be a valid solution to the equations of whatever fundamental theory is guiding the simulation?

  • Pingback: Cities and Towns of Vermont » Blog Archive » Comment on Arrow of Time FAQ by Peter Morgan

  • http://thinktoomuch.net/ lousirr

    My theory? The missing link: you, the observer. ;-) Or wait, that’s called the anthropic principle, right? I see no reason why I couldn’t be a Boltzmann Brain. Or you, for that matter. But not really both…

  • Jason Dick

    Jesse,

    Where else would the white hole come from? As Sean said, it’s a time-reversal of a black hole. Emission of Hawking Radiation is how black holes end, so absorption of the reverse of Hawking radiation would be how white holes begin.

  • http://www.geocities.com/aletawcox/ Sam Cox

    “Think about a real astrophysical black hole. It is born and evolves by having matter dumped into it. Ultimately it evaporates away via Hawking radiation, a thermal bath with temperature inversely proportional to the hole’s mass. So a white hole would be formed from an *inward* flux of thermal radiation — you would see nothing if you looked at the forming white hole, but you would see thermal particles coming mysteriously from the outside universe in a spherically symmetric configuration. The radiation would start out high-temperature, and gradually cool. Then the white hole would start spitting out highly non-thermal matter. All along the entropy would be decreasing.”

    There is a lot of thought in Sean’s posts, and it seems to me this is an excellent summary of the astrophysical process you are discussing.

    Whenever people discuss “time reversal” it makes me nervous, because I think it is clear from field evidence that the universe (the one we observe, anyway!) has a single time dimension with a single process direction.

    Although there is no “outside” to a GR universe, and such an observing frame of reference is not possible, the analogy of the merry-go-round is appropriate. Viewed from the side, people closer to us move in one direction, while people on the other side of the ride (really DO) move in the opposite direction…but there is no inverse process.

    It is kind of like an old-fashioned 33 RPM record: each time the record completes a 360-degree turn, the needle finds itself in almost, but not quite, the same location…hence the idea of the phylogenically developing quasi-static universe in which all information is inversely mapped and semi-permanent but subject to very gradual change.

    The Humpty Dumpty analogy is a good one. Humpty falls off the wall, and since all the king’s horses and all the king’s men can’t put Humpty together again, we make an omelet! However, when we feed the omelet which was “Humpty” to a chicken, it makes a perfect egg, just like Humpty Dumpty, with the same DNA and chemical structure…but just a few tiny differences. Since we couldn’t (in our universe anyway) compare the previous Humpty to his successor, it would be impossible to tell them apart. The egg, the chicken…all information continues perpetually even though the time process has an irreversible direction…

  • http://www.jessemazer.com Jesse M.

    Jason Dick wrote:
    Where else would the white hole come from? As Sean said, it’s a time-reversal of a black hole. Emission of Hawking Radiation is how black holes end, so absorption of the reverse of Hawking radiation would be how white holes begin.

    Well, a large black hole could also just be destroyed along with everything else in a Big Crunch, so shouldn’t it be possible in principle that a moderate-sized white hole would just have existed since the Big Bang? Also, the physics of the Planck scale probably isn’t well enough understood to say exactly what happens to an evaporating black hole in its final moments, so presumably we also can’t say exactly how a Planck-scale white hole might form. But once we have the smallest possible object that could still be called a white hole, then just as I’m not sure whether a macro-white hole would necessarily have to absorb time-reversed Hawking radiation or whether there might be other valid white hole solutions once you incorporate quantum effects into general relativity, I’m similarly not sure whether a micro-white-hole would require time-reversed Hawking radiation to make it grow, or whether there might be other ways it could grow. (What if, instead of emitting normal matter and energy, it emitted exotic matter with negative energy? If you dump exotic matter into a black hole, does it grow or shrink?)

    In any case, my main question is about what is physically allowable behavior for an already-existing white hole, not how one would form in the first place. In GR you do have permanent black holes as an allowable solution, even if this is unrealistic in our universe. Of course GR alone does not include Hawking radiation which normally causes the black hole to have a finite lifetime, but I think if you confined a black hole to a finite mirrored box, there could be an “equilibrium” solution where the energy lost to Hawking radiation was balanced by the same radiation bouncing off the inside of the box and falling back into the black hole–I wonder, if one knew enough about quantum gravity to define the set of distinct “microstates” for this closed system, then if one picked a microstate randomly using a uniform probability distribution on the entire phase space, presumably you’d be equally likely to get a black-hole-at-equilibrium as a white-hole-at-equilibrium? Could you even distinguish the two at equilibrium?

  • http://www.geocities.com/aletawcox/ Sam Cox

    We usually assume that projecting a movie of Humpty Dumpty smashing on the kitchen floor in forward and reverse is a necessary indication of impossible time reversal with an equally impossible inverse process…but I’m not inclined to be overly quick in presuming that assumption is true…for reasons rooted in the merry-go-round analogy. The French have done a lot of work on the process of geometric inversion as it relates to a marginally closed geometry with a Schwarzschild metric in GR…and that work is very impressive.

  • http://www.geocities.com/aletawcox/ Sam Cox

    “Of course GR alone does not include Hawking radiation which normally causes the black hole to have a finite lifetime, but I think if you confined a black hole to a finite mirrored box, there could be an “equilibrium” solution where the energy lost to Hawking radiation was balanced by the same radiation bouncing off the inside of the box and falling back into the black hole–”

    Good thought! Hawking has done further work recently which indicates that there is no “information paradox”.

  • http://markdonahey.blogspot.com Mark Donahey

    Way to slip an allusion to Andrew Marvell into your explanation of Boltzmann’s and Schuetz’s suggestion. Nothing spices up science like a good literary reference.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    I’m glad someone gets my literary allusions. One of them, anyway.

  • Pingback: it’s about time» Blog Archive » Time will tell…

  • http://www.sunclipse.org Blake Stacey

    Sean said,

    How we do the coarse-graining to define which microstates are macroscopically equivalent is a classic question. My personal belief is that the choices we make to divide the space of states up into “equivalent” subspaces are not arbitrary, but are actually determined by features of the laws of physics. (For example, the fact that interactions are local in space.) The project of actually turning that belief into a set of rigorous results is far from complete, as far as I know.

    If I were a smarter person, I’d probably spend at least a little time trying to apply category theory to this problem (see this post by John Armstrong). It’s not hard to imagine a first step:

    Take a classical harmonic oscillator. It goes round and round in phase space, trading off position for momentum and vice versa. Build a category by taking the points in phase space as your objects and time-evolution operations as your morphisms. Ellipses in phase space — curves of constant energy — then become isomorphism classes, because the oscillator motion is periodic, and for any A and B connected by a morphism, you can find another time evolution which takes B back into A. Per Shang-Keng Ma, an entropy can be defined as the logarithm of the phase-space volume explored by the system over a given timescale; the states relevant for thermodynamics (mumble mumble microcanonical mumble mumble) would be the decategorification of the states used at the statistical-mechanical level.

    Coarse-graining might be represented as a functor, or something like that, establishing some kind of equivalence which lets you have a weaker notion of isomorphism. Locality and whatnot would then become conditions on the functors you can construct.

    Why is it I only think about category theory really late at night?
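    Blake’s oscillator example is easy to make concrete. In the sketch below (an editor-added illustration only), exact time evolution is a linear map on phase space; any two states on the same constant-energy ellipse are connected by some evolution, and evolving through a full period is the identity, which is why the ellipses behave as isomorphism classes in his categorical picture.

```python
import math

m, k = 2.0, 8.0                 # mass and spring constant (arbitrary values)
omega = math.sqrt(k / m)
period = 2 * math.pi / omega

def evolve(state, t):
    """Exact time evolution of the oscillator: a 'morphism' from state to state."""
    x0, p0 = state
    x = x0 * math.cos(omega * t) + (p0 / (m * omega)) * math.sin(omega * t)
    p = p0 * math.cos(omega * t) - m * omega * x0 * math.sin(omega * t)
    return (x, p)

def energy(state):
    """The conserved quantity labelling each phase-space ellipse."""
    x, p = state
    return p * p / (2 * m) + 0.5 * k * x * x

a = (1.0, 0.0)
b = evolve(a, 0.3)              # a morphism a -> b
back = evolve(b, period - 0.3)  # and a morphism b -> a: same ellipse

print(abs(energy(a) - energy(b)) < 1e-9)                 # True: energy labels the orbit
print(abs(back[0] - a[0]) + abs(back[1] - a[1]) < 1e-9)  # True: b is isomorphic to a
```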

  • http://www.sunclipse.org Blake Stacey

    Oh, and I’m proud to say that I recognized the Marvell allusion, too, although I had to get it second-hand, via Nicholas Meyer’s The Seven-Per-Cent Solution. Most of my “culture” is probably second- or third-hand, now that I think about it. . . .

  • Jason Dick

    Well, a large black hole could also just be destroyed along with everything else in a Big Crunch, so shouldn’t it be possible in principle that a moderate-sized white hole would just have existed since the Big Bang?

    But since a white hole is the time reversal of a black hole, its entropy is continually decreasing. Now, presumably one could be constructed, if somebody so desired, though that would require an obscenely specific knowledge of the physics of black holes, as well as obscenely accurate methods of producing the input to the white hole. Even this may be impossible, however, if quantum decoherence messes things up.

    But I can’t imagine how a white hole could form naturally in a universe where entropy is globally increasing. The probability of it forming through random processes is just obscenely small (though, granted, a Planck-scale black hole may well be as likely to form as a Planck-scale white hole through vacuum fluctuations).

    presumably you’d be equally likely to get a black-hole-at-equilibrium as a white-hole-at-equilibrium? Could you even distinguish the two at equilibrium?

    No. This is the entire point of Sean’s argument that you have to resort to specific initial conditions to have a region of the universe where there exists a definite arrow of time: at equilibrium there is none. Any system in equilibrium is invariant under time reversal, and thus a black hole in equilibrium would be indistinguishable from its time reversal, a white hole in equilibrium (say, in an anti-de Sitter universe with no other matter, if I’m remembering correctly that the horizon of anti-de Sitter space acts much like a “mirror” for radiation).

  • http://www.gregegan.net/ Greg Egan

    Jason wrote:

    But since a white hole is the time reversal of a black hole, its entropy is continually decreasing.

    I wonder if there’s a clear enough distinction being made here between two quite different scenarios:

    (1) You describe a universe containing a black hole, doing all the things a typical black hole does: being formed by a collapsing star, absorbing lots of incoming matter and gaining entropy, and then (eventually, over a very long time — assuming a cosmology such that the CMB becomes cooler than the black hole’s temperature) evaporating via Hawking radiation.

    You then time-reverse all of this together, and call it a universe with a white hole. But it’s not! Obviously if you time-reverse the whole universe, the cosmological arrow of time would be flipped along with everything else, and the “time reversal” would have no physical significance whatsoever. If we merely pretend that we’ve flipped the arrow of time while actually making no meaningful physical change, we just get a time-reversed description of our own universe with a black hole in it, which will obviously violate the Second Law and sound absurdly unlikely.

    (2) You describe the spacetime geometry of a black hole out to the point where spacetime becomes almost flat, and you time-reverse that region alone, without time-reversing the rest of the universe in which it is embedded. You can no longer make statements about the behaviour of the resulting white hole with regard to its environment merely by time-reversing the behaviour of a typical black hole, because you’ve changed the relationship between the black/white hole and its cosmological surroundings.

    For example, surely there is no compulsion for the white hole to emit low-entropy dust, or gas, or bits of companion stars, or even to undergo the reverse of a stellar collapse and disappear, just because that’s the time-reverse of what typical black holes do in collaboration with the rest of the universe. Nor, I think, should it be considered inevitable that a white hole could only be formed by an inverse of Hawking decay (admittedly it’s hard to account for its formation by any process at all, but like Jesse I’m still curious to know how a white hole might behave if we’re given one “for free” somehow, perhaps created in the Big Bang).

    And is it really true that Hawking radiation was derived without any reference to boundary conditions at infinity? I don’t know the answer to this (I’ve skimmed Hawking’s 1975 paper, “Particle Creation By Black Holes”, but I don’t have the background to follow it in detail), so I’m happy to be corrected — but if Hawking radiation actually relies on assumptions about the surrounding universe, then surely the white hole you get by flipping the black hole but not the surrounding universe need not be absorbing time-reversed Hawking radiation and violating the Second Law, it could instead be doing something much more sensible in the context of that surrounding universe.

  • http://www.jessemazer.com Jesse M.

    Jason Dick wrote:
    But since a white hole is the time reversal of a black hole, its entropy is continually decreasing.

    No, my whole argument is that you can’t simply assume that the only possible type of white hole is one that is a mirror of a normal black hole in every respect, including entropy, although obviously such a perfectly reversed black hole must be one physically allowable solution. It would be entirely consistent with T-symmetry if each of the following were allowed solutions to a theory incorporating both GR and quantum effects: black holes with increasing entropy, black holes with decreasing entropy, white holes with increasing entropy, and white holes with decreasing entropy.

    Think of my analogy of a solar system that is the gravitational time-reverse of our own from comment #78. Do you agree that all of the following are compatible with the laws of physics: a solar system with orbits just like ours and entropy increasing, a solar system with orbits just like ours and entropy decreasing, a solar system with orbits that look like the time-reverse of ours and entropy increasing, and finally a solar system with orbits that look like the time-reverse of ours and entropy decreasing? Isn’t it true that a description of a solar system using GR alone would ordinarily only deal with gravitational aspects of the solar system, not things like whether photons were streaming out of the sun or converging in on it, so that it would not distinguish between pairs of solar systems where all the orbits and bodies were identical but the thermodynamic arrow of time was different? (Obviously since all forms of energy curve spacetime you could incorporate solar radiation into a GR description of the solar system, but it’s such a minor contributor to the curvature that I’m pretty sure this isn’t ordinarily done, just like pure GR descriptions of black holes ordinarily don’t bother computing the effects of Hawking radiation on the spacetime curvature.)

    Well, if the notion of white holes is based solely on time-reversing the GR solution that we call a black hole, then unless someone has actually calculated what quantum field theory predicts is going on near the horizon of the white hole spacetime as has been done with black holes, we can’t assume that the only possible solution is one where you have reverse Hawking radiation, although as I said before, T-symmetry does show that this must be one valid solution. I suppose if the original QFT analysis which showed Hawking radiation being emitted by a black hole was sufficient to prove that this was the only physically allowable thing that could go on near the horizon, that would show you must have reverse Hawking radiation near a white hole, but I doubt the physicists who were deriving Hawking radiation bothered to look for a QFT solution involving reversed Hawking radiation converging on the horizon of a black hole from outside, because probably the only way you could get this would be to impose a future low-entropy boundary condition which would seem highly unnatural in a realistic cosmological context.

    Also, I think my thought-experiment involving taking a time-reversed simulation of a black hole and then slightly perturbing it shows that it’s unlikely to be true that a white hole must have reversed Hawking radiation converging on it; since getting a simulation to have a reversed thermodynamic arrow requires such precise coordination among all the particles in your initial state, any small perturbation is likely to spoil it and give you a simulation where entropy is increasing as usual. But the perturbation can’t affect anything inside the horizon of the time-reversed black hole, so shouldn’t it continue to behave like a white hole even though on the outside it no longer has time-reversed Hawking radiation converging on the horizon?


    presumably you’d be equally likely to get a black-hole-at-equilibrium as a white-hole-at-equilibrium? Could you even distinguish the two at equilibrium?

    No. This is the entire point of Sean’s argument that you have to resort to specific initial conditions to have a region of the universe where there exists a definite arrow of time: at equilibrium there is none. Any system in equilibrium is invariant under time reversal, and thus a black hole in equilibrium would be indistinguishable from its time reversal, a white hole in equilibrium (say, in an anti-de Sitter universe with no other matter, if I’m remembering correctly that the horizon of anti-de Sitter space acts much like a “mirror” for radiation).

    But neither a black hole at equilibrium nor a white hole at equilibrium shows a thermodynamic arrow of time, and I thought the point of Sean’s argument was just to show that any arrow of time that’s a consequence of thermodynamics must depend on special initial conditions. A white hole and a black hole at equilibrium might still be distinguishable in other ways, like the spacetime curvature; in pure GR there is no Hawking radiation, so you can have a solution that looks like a stable black hole with nothing going in and nothing coming out. Are you saying this spacetime is identical to a solution containing only a stable white hole with nothing coming out and nothing going in? I’d like to hear one of the resident GR experts on this site weigh in on this question…

  • http://www.gregegan.net/ Greg Egan

    Jesse,

    I don’t qualify as a “GR expert”, but I know this much: on the event horizon of a Schwarzschild (eternal, uncharged, non-rotating) black hole, one half of the interior of the light cone pokes out from the event horizon, and the other half leads into the interior of the hole. The half that pokes out also leads backwards in time, by convention. If you reverse that convention — flip the sign of the t-coordinate — you get a white hole.

    So a black hole and a white hole embedded in the same universe are certainly distinguishable, because you can compare the directions in time of the outgoing light cones, and notice that they are different.

    However, if you have a black hole sitting in a static universe which contains nothing at all (or if you want to account for quantum effects, fill the universe with a heat bath of photons that match the black hole’s temperature), and a white hole sitting in a separate static universe which also contains nothing (or the same kind of heat bath), then those two universes and their contents are physically identical, and it’s meaningless to say that one contains a black hole and the other a white hole. Unless I’m utterly confused, the “black” or “white” label simply describes the relationship between the light cones and some externally defined arrow of time; if there is no such arrow, the label becomes meaningless.

  • http://www.jessemazer.com Jesse M.

    Thanks Greg. So it sounds like it’s plausible that for a closed system in a finite volume with enough mass to form a black hole, if we had a theory of quantum gravity to give us the set of distinct “microstates” making up the phase space, there might be a meaningful distinction between microstates whose macro-description would be something like “a black hole at thermal equilibrium with its surroundings” and microstates with the macro-description “a white hole at thermal equilibrium with its surroundings”. I remember someone mentioned earlier in the comments that the Wikipedia “white holes” article said that Hawking considered white holes and black holes to be the same in certain circumstances, and now that I look at that article it seems that his argument was also based on considering the two at equilibrium:

    In quantum mechanics, the black hole emits Hawking radiation, and so can come to thermal equilibrium with a gas of radiation. Since a thermal equilibrium state is time reversal invariant, Hawking argued that the time reverse of a black hole in thermal equilibrium is again a black hole in thermal equilibrium.[1] This implies that black holes and white holes are the same object. The Hawking radiation from an ordinary black hole is then identified with the white hole emission. Hawking’s semi-classical argument is reproduced in a quantum mechanical AdS/CFT treatment[2], where a black hole in Anti-de Sitter space is described by a thermal gas in a gauge theory, whose time reversal is the same as itself.

    Of course, this still wouldn’t address the question of whether, in a closed system out of equilibrium, it is theoretically possible to have either an entropy-increasing white hole or an entropy-decreasing black hole (presumably the entropy-decreasing black hole would be very unlikely in a closed system that lacked a low-entropy future boundary condition, just like any other spontaneous decrease in entropy which is permitted theoretically).

  • Robert Oerter

    As Brett pointed out, the entropy of a completely specified state is exactly zero. If we assume that there is such a thing as the “wavefunction of the universe” – something I personally have my doubts about, but most quantum cosmologists seem to take for granted – then the entropy of the universe is zero, always: in the early universe, now, and in the far future. So what’s the problem?

  • TimG

    Robert (#96), as I understand it entropy is a property of macrostates, not of microstates. So when we talk about the entropy of some microstate, we mean the log of the number of microstates in the macrostate that contains it. (As discussed above, this makes entropy dependent on how we partition the set of microstates into macrostates.)

    So in that sense, even though the universe is presumably in one particular microstate, the entropy of the universe is only zero if that microstate is the only microstate in its macrostate — that is, if the microstate is distinguishable from all other microstates by its macroscopic properties.

  • TimG

    Regarding Brett’s comments above, I’m not so sure he and Thomas are really saying the same thing. Certainly, as Brett said, the entropy is dependent on the partitioning of microstates into macrostates, so the question of “Why was the entropy of the early universe so low?” can be rephrased as “Why is the preferred partitioning of microstates into macrostates one that makes the entropy of the early universe so low?”

    But Thomas seemed to be saying something else. He seemed to say the 2nd law is to be expected, and gave the example of a configuration of billiard balls with random trajectories increasing in entropy. I think this is just the argument that, with more high-entropy states than low-entropy states, you’re statistically more likely to move to a high-entropy state. However, you’d also be more likely to start in a state of high entropy.

    For simplicity, let’s pretend our system has only two macrostates, which we’ll call “low entropy” or “high entropy”. If we start in a random state and evolve for some time T, it seems there are four basic possibilities:
    (1) You start in a low entropy state (unlikely), and end up in another low entropy state (unlikely)
    (2) You start in a low entropy state (unlikely) and end up in a high entropy state (likely)
    (3) You start in a high entropy state (likely) and end up in a low entropy state (unlikely)
    (4) You start in a high entropy state (likely) and end up in a high entropy state (likely).

    (Here the “likelihood” of an ending macrostate means the fraction of microstates in the starting macrostate that evolve into it; the likelihood of a starting macrostate just reflects the relative sizes of the macrostates.)

    So this is consistent with the idea that there are equally many paths from low to high entropy (2) as from high to low entropy (3). Nevertheless, from any given initial state (whether low or high entropy), entropy decreasing (or staying at minimum) is less likely than entropy increasing (or staying at maximum).

    So in some sense both Thomas and Sean are correct. As per Thomas: For a given initial macrostate, we expect entropy to increase (or at least stay the same). As per Sean: For a randomly chosen initial microstate, increase and decrease are equally likely. The point is that we have a lot of states with a small “probability” of entropy decrease, and a few states with a large “probability” of entropy increase. (What I mean is there’s a small macrostate with a large fraction of its microstates increasing entropy, and there’s a large macrostate with a small fraction of its microstates decreasing entropy.)

    So if the microstate of the universe was chosen at random, we probably shouldn’t be surprised that it’s in a macrostate where, over any particular choice of time T, most microstates increase entropy. But before we knew our macrostate, we would have had no reason to expect our particular microstate to increase rather than decrease in entropy over any given time T.

    So if by the Second Law of Thermodynamics we mean that entropy increase has a high (Bayesian) probability given that the universe is in some particular macrostate, then it’s not surprising. If by the Second Law of Thermodynamics we mean that entropy increase has a high a priori probability (i.e., for any microstate), then it is surprising.

    Either way, we should definitely be surprised that the initial state of the universe had such a low entropy, but I’m not sure anyone here is disputing this.
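    TimG’s counting argument is easy to check in a toy model (my own construction, not anything from the thread): take microstates to be n-bit strings, let the macrostate be the number of 1 bits, define entropy as the log of the macrostate’s size, and use a fixed random permutation of the microstates as deterministic, invertible “dynamics”. A sketch:

```python
import math
import random

# Toy reversible system: microstates = all n-bit integers, macrostate = number
# of 1 bits, entropy = log(# microstates sharing that bit count).
n = 12
N = 2 ** n
random.seed(0)

def entropy(x):
    k = bin(x).count("1")
    return math.log(math.comb(n, k))

# Deterministic, invertible dynamics: a fixed random permutation of microstates.
perm = list(range(N))
random.shuffle(perm)
inv = [0] * N
for x, y in enumerate(perm):
    inv[y] = x

def count_transitions(p):
    inc = sum(1 for x in range(N) if entropy(p[x]) > entropy(x))
    dec = sum(1 for x in range(N) if entropy(p[x]) < entropy(x))
    return inc, dec

inc, dec = count_transitions(perm)
inc_rev, dec_rev = count_transitions(inv)

# T-symmetry: every entropy-increasing step of the dynamics is an
# entropy-decreasing step of the time-reversed dynamics, and vice versa.
assert inc == dec_rev and dec == inc_rev

# Unconditionally (random microstate), increase and decrease are nearly
# equally likely:
print(inc / N, dec / N)

# ...but conditioned on starting in a low-entropy macrostate (k <= 2),
# an entropy increase is overwhelmingly likely:
low = [x for x in range(N) if bin(x).count("1") <= 2]
frac_up = sum(1 for x in low if entropy(perm[x]) > entropy(x)) / len(low)
print(frac_up)
```

    The unconditional fractions come out roughly equal while the low-entropy-conditioned fraction is close to 1, which is exactly the asymmetry TimG describes: few microstates with a high chance of increasing entropy, many microstates with a small chance of decreasing it.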

  • Pingback: The arrow of time FAQ « Later On

  • TimG

    Of course, that’s kind of off the top of my head. I’m not so sure my two-macrostate example really generalizes to many macrostates. Also, maybe I’m blurring the line between “entropy increasing” and “entropy increasing or staying the same”. As you get higher up in entropy, you can’t really increase much, so even if less than 50% of the microstates in that macrostate decrease in entropy, maybe the expected entropy change is negative. In that case, I guess even for a given macrostate the second law really is a consequence of us being in a low-entropy macrostate. (That is, if by the Second Law we mean that the expected change in entropy is non-negative, rather than that an entropy increase is more likely than a decrease. These aren’t the same thing: one is about the probability distribution for entropy change, the other about entropy change weighted by that probability distribution.)

    That’s all assuming that there is a maximum entropy state of the universe. It seems to me that if there were an endless tower of higher entropy states, then every macrostate might have most of its states strictly increasing in entropy, despite there being a one-to-one correspondence between entropy-increasing microstates and entropy-decreasing microstates. Is there a maximum entropy state of the universe? I don’t have a clue — presumably there are only so many configurations for the (fixed) amount of energy in the universe, but maybe not if the size of the universe isn’t fixed.

  • TimG

    Anyway, sorry to blab on and on. I tend to think out loud via message boards at times. :)

  • Zitron

    Very nice discussion!

    I have what possibly is a very silly question, but still here it goes.

    How does the concept of “arrow of time” appear in general relativity? There is this idea that time in GR is just a coordinate, without objective meaning. In a different coordinate system, the arrow of time would become the “arrow of down”, so to speak, and maybe even lose its directionality. Is the arrow of time diffeomorphism-invariant?

    Apologies if it does not make much sense.

  • Jason Dick

    Greg,

    I wonder if there’s a clear enough distinction being made here between two quite different scenarios:

    (1) You describe a universe containing a black hole, doing all the things a typical black hole does: being formed by a collapsing star, absorbing lots of incoming matter and gaining entropy, and then (eventually, over a very long time — assuming a cosmology such that the CMB becomes cooler than the black hole’s temperature) evaporating via Hawking radiation.

    You then time-reverse all of this together, and call it a universe with a white hole. But it’s not! Obviously if you time-reverse the whole universe, the cosmological arrow of time would be flipped along with everything else, and the “time reversal” would have no physical significance whatsoever.

    Nah, just describe the time reversal of the black hole out to some thin shell just outside the black hole. Doesn’t change anything.

    Jesse,

    No, my whole argument is that you can’t simply assume that the only possible type of white hole is one that is a mirror of a normal black hole in every respect, including entropy, although obviously such a perfectly reversed black hole must be one physically allowable solution.

    But that’s what we mean when we say “white hole”: the time reversal of a black hole. The only time reversal that makes any sense is the thermodynamic arrow of time: all fundamental physical laws are symmetric under time reversal.

    But neither a black hole at equilibrium nor a white hole at equilibrium shows a thermodynamic arrow of time, and I thought the point of Sean’s argument was just to show that any arrow of time that’s a consequence of thermodynamics must depend on special initial conditions. A white hole and a black hole at equilibrium might still be distinguishable in other ways, like the spacetime curvature; in pure GR there is no Hawking radiation, so you can have a solution that looks like a stable black hole with nothing going in and nothing coming out. Are you saying this spacetime is identical to a solution containing only a stable white hole with nothing coming out and nothing going in? I’d like to hear one of the resident GR experts on this site weigh in on this question…

    Well, pure GR isn’t completely accurate, so it’s not useful to use in such arguments. Our understanding of quantum mechanics indicates that the properties of the black hole (e.g. inability for light to escape) are fundamentally thermodynamic in nature.

  • http://www.jessemazer.com Jesse M.

    Jason wrote:
    But that’s what we mean when we say “white hole”: the time reversal of a black hole. The only time reversal that makes any sense is the thermodynamic arrow of time: all fundamental physical laws are symmetric under time reversal.

    It seems like you’re still not addressing my point, namely: “It would be entirely consistent with T-symmetry if each of the following were allowed solutions to a theory incorporating both GR and quantum effects: black holes with increasing entropy, black holes with decreasing entropy, white holes with increasing entropy, and white holes with decreasing entropy.” What did you think of the analogy with the time-reversed solar system? Do you agree that a pure GR description of a stable black hole shows neither increasing nor decreasing entropy, and that we need to analyze quantum field theory on curved spacetime (or quantum gravity) to derive the prediction of Hawking radiation and increasing entropy? Do you agree that this derivation probably does not bother to check whether it’s physically possible to have a black hole decreasing in entropy, since this would likely require an unrealistic low-entropy future boundary condition on the radiation? Do you agree that if a decreasing-entropy black hole was physically possible (even if totally unrealistic in our universe), then by T-symmetry an increasing-entropy white hole would also be possible? (and if it was possible, we could no longer use pure thermodynamics to explain why we don’t see any, although there might be other good reasons such as there being no natural process that would create one).

  • Lawrence Crowell

    I am going to give my piece on this. For any phase space volume V the entropy is S = k log(V). Thermodynamics is nifty in that it saves a lot of hassle with those logarithms. This approach means that a coarse grained description with V and another with V’ will result in only a small error, due to the logarithm. The evolution of a system from one macrostate to another is statistically most likely to go to a macrostate with a larger volume, so that the change in entropy will be positive. This likely plays a role in quantum gravity and the origin of time, for quantum gravity states do not have a unique correlation to classical spacetime variables. Further, if a quantum wave function of a universe exists we are unable to assign a globally defined time variable to the entire superposition of metrics, or states whose configuration variables are these metrics.
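    Lawrence’s point about the logarithm can be made concrete with a trivial numerical sketch (my own, with an illustrative log-volume): misjudging a coarse-grained phase-space volume even by a factor of 1000 shifts S = k log V by only log(1000) ≈ 6.9 in units of k, negligible against macroscopic entropies.

```python
import math

# S = k log V (set k = 1). The log-volume below is illustrative, of the
# order expected for a macroscopic system (~ Avogadro's number).
logV = 1e23
S = logV                            # S = log V
S_prime = logV + math.log(1000)     # log(1000 * V) = log V + log 1000
print((S_prime - S) / S)            # fractional error from coarse graining
```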

    A standard view of quantum gravity is the path integral, which Hawking and others Euclideanize. In this perspective quantum gravity is obtained, up to the one-loop correction, as the result of some steepest descent method. Effectively the total path integral

    Psi[g, phi] = int DgDphi exp(iS[g, phi]),

    which is often Euclideanized by the Wick rotation t —> -i tau, is written under a steepest descent method as

    Psi[g, phi] =~ sum_{&g}int Dphi exp(I[g_0 + &g, phi] + iS’[g_0,phi])

    which sums over paths that have small deviations from the expected path g_0, e.g. the classical spacetime, and the action is expanded into real and imaginary parts. The real part is the instanton component, with nabla S >> nabla I, reflecting the small quantum aspect of the spacetime (a WKB type of approximation). From this

    I[g_0 + &g, phi] = I[g_0, phi] – Gam(loop),

    where the loop part comprises the O(hbar) corrections from the &g content. The remainder of the spacetime content is at tree level, which is essentially the classical spacetime. If we let &g = 0 then the above path integral is equal to that for a quantum field in a fixed spacetime, and if we restrict our attention to flat spacetime this recovers standard textbook quantum field theory.

    There are two problems that we have in all generality. The first is that not all quantum states with a metric as the configuration variable have a classical spacetime for that configuration variable. The other is that we have a counting problem. We really don’t know how to count states in general. This is why we are left doing saddle point integrations of the action around small quantum variations in a metric. We are stuck with effective theories. Can quantum states be counted in general? If we have a set of quantum states Psi(Y) over fields Y, then a process involving these fields is a time ordered product of these fields that enters into the path integral and acts on the vacuum state, weighted by the measure term (the exponent of the action etc.), to return a value. Can we define a time ordered product of states where the field or configuration variable is space or spacetime?

    I say no, and here is why. The Wheeler-DeWitt equation H psi[g] = 0 may be converted to a Schrodinger equation if a harmonic oscillator term is introduced. Some work is done, and the energy eigenvalues of this can provide a stationary phase e^{iEt} to the wave function, which converts the WD equation into the SE. Now this SE obtains for each eigen-wave function(al) psi_n[g], and the “time” involved pertains specifically to that space. So if we have a superposition of states, or entanglements, can we define a global “time”? The answer is no, for this would imply a coordinate-dependent map between metrics, and general relativity is covariant, or as Wheeler loves to say, independent of coordinate descriptions. The operator i∂/∂t = iK_t is a Killing vector, which is unique to a spacetime and is coordinate independent. So in trying to define a general time coordinate for all possible eigenstates we would commit a “crime” against general relativity.

    So is assuming that two metrics are “close enough” too bad a crime to commit? Maybe not, and if we are careful it might be a “good” thing. If we make an assignment of a Killing vector, we impose an error determined by the difference of the two metrics, &g = g’ – g (& = delta). The Einstein field equation R_{ab} – (1/2)R g_{ab} = -k T_{ab} in trace-reversed form is R_{ab} = k(T_{ab} – (1/2)T g_{ab}). For a source-free spacetime, T_{ab} = 0, the Ricci curvature is zero. In a source-free region R_{ab} = (1/2)R g_{ab} (both sides vanishing), but under the assignment of the two metrics in &g, assume that a small violation of the Einstein field equations means a nonzero Ricci curvature determined by a “potential,”

    R_{ab} = nabla_a nabla_bV =/= 0

    where the potential is a metric difference V = (g’ – g)_{ab}g^{ab}. (nabla = vector derivative etc) The perturbed vacuum Einstein field equation may then be written as

    nabla_a nabla_b V = (1/2) nabla^2 V &g_{ab}

    or according to the difference in the metric

    nabla_a nabla_b &g = (1/2) &g_{ab} nabla^2 &g.

    When contracted on indices and integrated over a region volume in M^4 we find that

    int_{vol} dv &g nabla^2 &g = – int_{vol} dv (nabla &g)(nabla &g) = – int_{vol} dv (nabla g’ – nabla g)^2,

    which is the source of the energy error functional &E_g = |nabla g’ – nabla g|^2. This is a coarse graining over quantum gravity states.

    What this does is start the process of coarse graining the quantum states of gravity. In doing so, a set of states which are “close enough” to a classical spacetime g_c may be coarse grained around the state for g_c with the energy error function &E_g &T >= hbar/2. So let us assume that the universe is described by a set of states, and indeed a set of vacua, vacua which are not unitarily equivalent in the standard sense. The early universe was described by these states on a more fine grained level, or to use the macrostate analogy, the states of the universe were sharp, with a high degree of “fidelity” or distinguishability, and &E_g = 0 between all quantum states with a metric configuration variable. This means that &T —> infinity, or that time is so uncertain that… well, to put it bluntly, there is no time. The process of a cosmology tunnelling out of the vacua is analogous, I think, to squeezed states or parametric amplification in quantum optics, which means that &E_g becomes larger and &T becomes smaller, and the cosmology has a coarser grained description over quantum states.

  • Jason dick

    Do you agree that if a decreasing-entropy black hole was physically possible (even if totally unrealistic in our universe), then by T-symmetry an increasing-entropy white hole would also be possible?

    Of course. The problem is that there’s no reason whatsoever to suspect that it is possible to produce a decreasing-entropy black hole.

  • Lawrence Crowell

    On equilibrium: Equilibrium is not the stable state in general relativity. It is not difficult to see why. Suppose we have a black hole of mass M which has an event horizon at radius r = 2GM/c^2 and an area A = 4pi r^2. The entropy of a black hole is S = (k/4)A, for k a multiplied set of constants. Now if the black hole is in equilibrium it would mean that the temperature of the event horizon is equal to the background, say the universe at large at T = 2.7K. Such a black hole would have a mass about equal to the moon and the size of a pinhead or so. Now assume the black hole emits a quantum by the Hawking process so that its mass M —> M – &m, for &m

  • Jason Dick

    Lawrence,

    I think you meant to say that a black hole at equal temperature to its surroundings is not in equilibrium. Yeah, now that I think about it, that seems to be correct. However, though your post appears to have been cut off, you have to bear in mind that it is necessary to consider the surroundings when deciding whether or not this equilibrium is stable: if the Hawking radiation from the black hole heats up the surroundings enough, then it may remain in equilibrium.

  • Lawrence Crowell

    The post apparently got cut. To make it short, the heat capacity of spacetime is negative. This means that, contrary to standard thermodynamics, high temperature goes with low entropy and vice versa. So a black hole whose horizon temperature is equal to the background temperature will not stay there. The quantum emission of a particle, or the absorption of a particle, means that the black hole will run away in that direction. So with spacetime thermodynamics equilibrium really does not exist.
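    The runaway can be made concrete with the standard Hawking temperature formula T = hbar c^3 / (8 pi G M k_B): temperature falls as mass grows, so the heat capacity is negative. A quick numerical check (my own arithmetic, standard formula) of the earlier claim that a hole in equilibrium with the 2.7 K background has roughly the mass of the moon:

```python
import math

# Hawking temperature T(M) = hbar c^3 / (8 pi G M kB); note dT/dM < 0,
# i.e. the black hole has negative heat capacity.
hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
kB = 1.381e-23     # J/K

def hawking_T(M):
    return hbar * c**3 / (8 * math.pi * G * M * kB)

# Mass whose Hawking temperature matches the CMB at 2.7 K:
M_eq = hbar * c**3 / (8 * math.pi * G * kB * 2.7)
r_s = 2 * G * M_eq / c**2   # Schwarzschild radius
print(M_eq, r_s)            # ~4.5e22 kg (moon ~7.3e22 kg), sub-millimeter radius

# The runaway: lose a little mass -> hotter -> radiate faster; gain a little
# mass -> colder -> absorb faster. The equilibrium is unstable.
assert hawking_T(0.99 * M_eq) > hawking_T(M_eq) > hawking_T(1.01 * M_eq)
```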

    Also, as the universe expands the CMB temperature will over time approach absolute zero asymptotically. This is in the semi-classical sense, where there might turn out to be zeta-function condensate-like partition functions for some tiny terminal temperature, but that is conjectural. Black holes will then eventually fall below the horizon temperature of the universe, due to Gibbons-Hawking radiation from the cosmological horizon at r = sqrt(3/Lambda). Here Lambda is the cosmological constant in the de Sitter spacetime, which the universe appears to approximately represent.

    Lawrence B. Crowell

  • http://www.jessemazer.com Jesse M.

    Jason wrote:
    Of course. The problem is that there’s no reason whatsoever to suspect that it is possible to produce a decreasing-entropy black hole.

    “Possible to produce” in the sense of it having a non-negligible possibility of occurring in our universe (which would not be true of any large decrease in entropy for a closed system), or “possible to produce” in the sense of it being an allowable solution to whatever the fundamental laws of physics are? My argument is just that it might be possible in the second sense–you haven’t offered any rigorous argument as to why we should be confident it wouldn’t be, and as I said, it’s unlikely that physicists were looking for such a solution when they calculated the behavior of Hawking radiation at the event horizon of a black hole (since such a solution would probably require imposing a strange low-entropy future boundary condition on this radiation).

    Think back to my thought-experiment about simulating a black hole in an enormous computer simulation whose rules were based on whatever fundamental laws govern a black hole and the particles around it (including Hawking radiation). If we take some later state of the simulation, and reverse all the particle momenta as well as whatever else needs to be reversed to flip the arrow of time, then if we now run the simulation forward from this state, do you agree we’d see a reverse-entropy white hole? But now what happens if we perturb the initial state of some particles outside the event horizon–won’t this perturbation likely cause the arrow of time outside the horizon to flip back to the forward direction? Are you asserting that merely flipping the thermodynamic arrow of time outside the horizon would be sufficient to turn the object from a white hole to a black hole, so that the horizon would no longer be impenetrable from the outside and matter could fall in? I’m not so sure that’s how it works, the object might continue to behave as a white hole but with the entropy around it increasing. (And obviously if this method will produce a simulated white hole with entropy increasing, then one could simply reverse the momenta etc. of a later state of this new run to get a new simulation containing a decreasing-entropy black hole).

    Actually, this suggests another question for Greg Egan, or anyone else here who’s knowledgeable about the mathematics of general relativity–in terms of pure GR, what’s the reason why it’s impossible to enter a white hole event horizon? Is it just that all the test particle trajectories happen to lead out, or is there some reason it would be impossible to add a new test particle trajectory leading in? (so that it would be impossible for an object that had previously been behaving as a white hole to suddenly start behaving as a black hole, as would be required for the perturbation in my thought-experiment to convert a white hole into a black hole) I know in the case of a black hole, if one wants to maintain a constant distance from the horizon (at a distance closer than it’s possible to orbit), then one must continually thrust away from the horizon–would the same be true for maintaining a constant distance from a white hole, or would it be reversed, so that one would have to continually thrust towards the horizon to keep from moving outwards? Does the white hole behave as a source of “anti-gravity”, in other words?

  • http://www.jessemazer.com Jesse M.

    Lawrence Crowell wrote:
    To make it short, the heat capacity of spacetime is negative. This means that, contrary to standard thermodynamics, high temperature goes with low entropy and vice versa. So a black hole whose horizon temperature is equal to the background temperature will not stay there. The quantum emission of a particle, or the absorption of a particle, means that the black hole will run away in that direction. So with spacetime thermodynamics equilibrium really does not exist.

    What about the situation of a black hole in a perfectly reflective box–wouldn’t all the photons radiated away as Hawking radiation fall back into the black hole, so an equilibrium would be reached? The abstract of this article seems to say you could have a stable thermal equilibrium in this case, for example, as does this page from the website of Piet Hut at the Institute for Advanced Study. And if you can have an equilibrium in a box, then whatever the temperature and other properties of the photons outside the horizon in the box, what is it that prevents you from duplicating this in an infinite universe and getting a solution where the black hole is at equilibrium with the radiation around it?

  • http://www.gregegan.net/ Greg Egan

    what’s the reason why it’s impossible to enter a white hole event horizon?

    To enter a white hole horizon, you’d have to be travelling faster than light — just as you’d have to be travelling faster than light to cross outwards through a black hole’s horizon.

    A horizon is a null surface, which means it’s generated by paths that light rays would follow. At any event in spacetime, you (or any massive test particle) can only be following a timelike worldline, which lies inside the light cone at that event. Every light cone comes in two pieces: one facing into the past, one facing into the future (with the definition being a matter either of convention or thermodynamics; there’s nothing in local spacetime geometry to tell you which is which).

    At a point on the event horizon of a black hole, the future-pointing half of the light cone also points entirely inwards, into the hole (except for a sliver that remains exactly on the horizon, i.e. the cone is tangent to the horizon). Equally, the past-pointing half of the light cone points entirely outwards (except for that single tangent line), away from the hole. So if you’re at the horizon, you must have got there from the exterior, and you must be heading into the interior.

    But as I said, there’s nothing in the local spacetime geometry to tell you which way is the future and which way is the past, so if we swap the roles of “future” and “past” in that description, we find that at the horizon of a white hole, you must have got there from the interior, and you must be heading into the exterior.

    It’s really only those future/past labels that distinguish a black hole from a white hole.

    Does the white hole behave as a source of “anti-gravity”, in other words?

    No, you’d still have to exert thrust away from a white hole to keep still, because accelerations are unchanged by time-reversal. And if you stop thrusting and let yourself fall towards a white hole, you would always continue moving towards its horizon, but you would never pass through, you would just get asymptotically closer and closer. This is the time reverse of what happens to things that escape from close to a black hole (under inertial motion, by virtue of having a large initial outwards radial velocity): they were never inside the horizon, but if they were arbitrarily close to the horizon and moving away from it with sufficient speed they will eventually escape — but it will take a very long time if they were very close to the horizon.

  • John Merryman

    Black holes are interesting constructs, but the primary real black holes are the vortexes at the center of galaxies. Much of the mass falling into galaxies gets radiated out prior to falling into the vortex, and it would seem that what does fall in is ejected as charged particles out the poles. This would seem to be half of a convective cycle of collapsing mass and expanding energy/radiation. Essentially the eye of a hurricane. Could there be another half, where this radiation cools down to the point where it starts to condense back out as mass? One prediction of this theory would be a stable quantity of radiation in space, similar to moisture in the atmosphere, with a clear cut-off level, similar to the dew-point. Say a cosmic background radiation, up to the level of 2.7K.
    Since gravity causes the metric of the dimensionality of space to contract, could radiation cause it to expand? Since there is no gravitational vortex around which this effect bends, it wouldn’t “curve” space, but it might manifest in other ways, such as red-shifting the spectrum of extremely distant light sources. This would be equivalent to a cosmological constant, which Einstein proposed to balance out the gravitational collapse. Surprisingly, this is what redshift appears to model, but since that isn’t accepted theory, dark energy has been proposed to fill in the very large blank.

    http://www.plasmacosmology.net/

  • http://www.jessemazer.com Jesse M.

    Greg Egan wrote:
    But as I said, there’s nothing in the local spacetime geometry to tell you which way is the future and which way is the past, so if we swap the roles of “future” and “past” in that description, we find that at the horizon of a white hole, you must have got there from the interior, and you must be heading into the exterior.

    It’s really only those future/past labels that distinguish a black hole from a white hole.

    If there’s no difference in the local spacetime geometry, is there anything to prevent a single object that behaves both ways at different times, or even simultaneously? i.e. at some time test particles are departing from the singularity and crossing the event horizon in the outward direction, at another time (or the same time) test particles are entering the horizon from outside and falling into the singularity? This gets back to the question I was asking in my comment #110 about whether, by flipping the arrow of time for matter outside a simulated white hole (by introducing a perturbation in the initial state of a simulated run that in its unperturbed version would look like a white hole), you would then see the white hole itself flip and start to behave more like a black hole, with matter able to fall in from the outside. If this is in fact possible, then it would lend support to the idea that the only difference between a black hole and a white hole is the direction of the thermodynamic arrow.

  • bob

    Hi Sean,
    It appears to be the same as Penrose’s argument in his Road to Reality. My question is: who was the first person to give answers to this FAQ? Another question: who do you think is the most respected living physicist, besides yourself? :)
    Thanks.

  • http://www.gregegan.net/ Greg Egan

    Jesse M. wrote:

    If there’s no difference in the local spacetime geometry, is there anything to prevent a single object that behaves both ways at different times, or even simultaneously?

    As far as I can see, if you’re able to switch the thermodynamic/cosmological arrow of time in the external universe, you could have a single structure act as a black hole for some of the time and a white hole for some of the time. The hole carries with it an enduring distinguished direction in time: the direction in time that accompanies outwards passage through the horizon. If the thermodynamic arrow associated with that direction in the external universe changed, then you’d be entitled to call the hole by different names in the different epochs.

    The “even simultaneously” is trickier; I think that depends on exactly where and when and in what coordinates you ask the question. The singularity inside a hole is actually spacelike — i.e. extended in a spatial direction, not in time — like the Big Bang or Big Crunch. When you fall into a BH, you don’t arrive at the singularity like you’re reaching the centre of the Earth; what you see is the space around you getting crushed to a point in two directions while expanding in a third, until at a certain time everything is destroyed. Conversely, everything that leaves a WH singularity could be said to emerge from it at the same time, in at least one set of coordinates.

    Now the thing that’s really messy about white holes is that, at least under classical GR, you have no idea what’s going to emerge from the singularity. When a hole’s “acting as a black hole”, we’re happy to say that we can explain the states of matter at the singularity by knowing the history of what falls in. But when we’re treating the hole as a white hole, what is there to constrain the entropy, or internal thermodynamic arrows, of objects that the singularity spits out? It’s hard to see any clear resolution of that coming without quantum gravity. And it seems to be a bit of a cheat to say “The black holes we know about eat low entropy matter, whose entropy is increasing as it hits the singularity, therefore white holes will emit low entropy matter whose entropy is decreasing as it flies away from the singularity.”

    Jesse, I have a lot of trouble figuring out what would happen in your simulation with regard to Hawking radiation, because the treatments of Hawking radiation that I have to hand all do global calculations that follow waves all the way from “past null infinity” to “future null infinity”, and at some stages even invoke the collapse that forms the black hole. I’m not competent to answer the question “Is there some local process dictated by the spacetime geometry alone that determines what the Hawking radiation just outside the horizon must be doing, irrespective of all boundary conditions?”

  • http://www.geocities.com/aletawcox/ Sam Cox

    Zitron said: “How does the concept of “arrow of time” appear in general relativity? There is this idea that time in GR is just a coordinate, without objective meaning. In a different coordinate system, the arrow of time would become the “arrow of down”, so to speak, and maybe even lose its directionality. Is the arrow of time diffeomorphism-invariant?”

    The laws of physics, including General Relativity and, significantly, Quantum Mechanics, are almost invariably reversible. CP symmetry is observed to be violated in some weak-interaction processes (first seen in neutral kaon decays), but the combined CPT symmetry holds.

    There is no “arrow of time” in General Relativity. Time in GR is conventionally treated as a “space-like” dimension, so locations in space-time are geometric coordinates. There are a number of geometries which satisfy the GR equations, but the first one developed was that of Schwarzschild. Since GR is satisfied by sets of positive and negative solutions, and because the Schwarzschild “mirror” geometry satisfies the complete batch, one could argue that Schwarzschild geometry is the most complete geometric reflection of the concept.

    The “arrow of time” is a thermodynamic issue…and a multiple-faceted issue at that, as can be seen by the divergence of point of view expressed on this thread.

    A universe without an information paradox is a universe where all information is conserved and preserved…everywhere. A careful evaluation of the significance of that discovery, and of what has been learned about the universe over the past century, suggests that the Schwarzschild metric may, in the final analysis, support more of the data than any other of the possible GR universal geometries.

    Max Tegmark is well known for his theoretical treatment of the “multiverse concept”, yet Max makes it clear that there are certain known facts about the power spectrum, and certain initial conditions necessary during the big bang, which could demand a universe of finite mass and marginally closed space (as also depicted in the NASA “shape of space” diagram based on the WMAP results).

    Ned Wright makes it clear that the standard model requires a certain specific density at the big bang…for the universe to have developed as it is observed. The density formula does not admit infinite values. Without being too exhaustive, it is pretty obvious we live in a universe where matter, energy and (according to the recent findings about the lack of an information paradox) entropy in all its forms are likewise conserved in the universe at large.
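    [Editor's note: the "certain specific density" being referred to here is presumably the critical density of standard cosmology, ρ_c = 3H₀²/(8πG). A minimal numerical sketch, assuming H₀ ≈ 70 km/s/Mpc; this illustration is the editor's, not the commenter's:]

```python
# Critical density of the universe: rho_c = 3 * H0^2 / (8 * pi * G).
# Assumes H0 ~ 70 km/s/Mpc (an editorial assumption, not stated in the comment).
import math

G = 6.67430e-11    # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22    # metres in one megaparsec
H0 = 70e3 / Mpc    # Hubble constant converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density: {rho_c:.2e} kg/m^3")  # ~9e-27 kg/m^3, a few hydrogen atoms per cubic metre
```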

    To understand the expanding and accelerating universe within the Schwarzschild geometry, it is only necessary to accept the idea that the negative sets of solutions to GR and the “second side of the universe” of Schwarzschild’s mirror geometry may not be mathematically and geometrically vestigial…and to conceptualize appropriately.

    Gravity is a direct reflection of the momentum of GR and, in toto, gravity via black holes prevents a universal heat death. For gravity to accomplish this act of conservation it is only necessary that the universe be found as a quantum entity, in GR scale and with a Schwarzschild “two-sphere” marginally closed geometry.

    I used two thermodynamic analogies on this thread. In one the fallen egg is made into an omelet, which is then eaten by a chicken and formulated into its original biological equivalent…with very minor differences. This analogy follows a continuously existing quasi-static universe where continuously existing complex information is the method of thermodynamically “re-assembling” the universe and everything in it perpetually.

    The second analogy was the merry go round, in which events are observed to pass in two different “arrows of time” or directions simultaneously…without the presence of any inverse process.

    However, filming and observing the dropping of an egg in forward and reverse is also worth some discussion. The fact that the egg is shattered does nothing to change the history of the egg…that a chicken made it, or that at one time it was a perfectly formed egg. In the film it seems incongruous to watch the egg come together and leap off the floor at 1G acceleration, but to insist that such behaviour would require an inverse thermodynamic process is unjustified, just as watching folks move in two directions on a merry go round would not indicate an inverse process.

    This is not the place to discuss the effects of scale and complexity on our perception of an arrow of time. However, the fact that such things as scale and particulate complexity affect our perception of an “arrow of time” connects the process of observation itself with the kind of existence (universe) we perceive and our feeling that time does indeed have a single process.

    I smiled when someone remarked that as they sat in their chair writing, they possessed an “inertial” frame of reference. Sitting in a chair on the surface of the Earth is a non-inertial frame of reference, just as non-inertial a frame as a spacecraft accelerating at a constant 1G toward the stars. The way we confidently observe and interpret the universe is undoubtedly far from the way things really are.

    Einstein made a bold step when he asserted the equivalence of non-inertial and gravitational frames of reference. Just as people in the spacecraft would observe an outside universe filled with relativistic effects, we here on earth as we undergo a constant gravitational acceleration also observe a universe which is, from our frame of reference, a relativistic product of our accelerated frame of reference.

    In fact, we and the particles of which we are made, existing on 4D particulate event horizon surfaces as we do, are with the Earth itself, relativistic effects. Our sense of motion, change- and the “arrow of time” are only products of the way we observe the cosmos at our present coordinates.

  • John Merryman

    Sam,

    I used two thermodynamic analogies on this thread. In one the fallen egg is made into an omelet, which is then eaten by a chicken and formulated into its original biological equivalent…with very minor differences. This analogy follows a continuously existing quasi-static universe where continuously existing complex information is the method of thermodynamically “re-assembling” the universe and everything in it perpetually.

    The second analogy was the merry go round, in which events are observed to pass in two different “arrows of time” or directions simultaneously…without the presence of any inverse process.

    Consider that as gravity collapses mass, radiation expands energy. Mass is composed of energy, which is constantly breaking down old forms and creating new ones, so consider the analogy of a movie projector, where the energy of the projector light is constantly going from previous frames to succeeding ones, while these frames go from being in the future to being in the past. Just as the energy of sunlight goes from previous days to succeeding ones, as particular days go from being in the future to being in the past. Just as the process of life is going on to future generations, as it is shedding old ones, while the frames of these individual lives go from being in the future to being in the past.
    The arrow of time for process/energy, ie. the hands of the clock, is from past to future, while the events being recorded, the face of the clock, go from future to past. So the arrow of time for mass/form collapses, as the arrow of time for energy/process expands.

  • John Merryman

    Of course these two effects interact, so that open forms which absorb more energy than they lose are expanding, such as a growing child or the warming morning, and it is as they peak and start to lose energy that they contract.

  • Pingback: links for 2007-12-08 « Qulog 2.0

  • http://www.pipeline.com/~lenornst/index.html Len Ornstein

    I feel like the little boy who wonders why the emperor is nude.

    If one takes the uncertainty principle as a primitive and the unsolvability of the many-body problem as a given, it seems to me that microscopic irreversibility follows as the source of a quantum second law and the source of the arrow of time.

    So I googled (“microscopic irreversibility” and “uncertainty principle”) and got very few hits. One by Karl Gustafson, and another by Huaiyu Zhu, convinced me that the answer is not so simple…but nonetheless plausible.

    Any comments?

  • http://www.jessemazer.com Jesse M.

    Greg Egan wrote:
    The “even simultaneously” is trickier; I think that depends on exactly where and when and in what coordinates you ask the question. The singularity inside a hole is actually spacelike — i.e. extended in a spatial direction, not in time — like the Big Bang or Big Crunch. When you fall into a BH, you don’t arrive at the singularity like you’re reaching the centre of the Earth; what you see is the space around you getting crushed to a point in two directions while expanding in a third, until at a certain time everything is destroyed. Conversely, everything that leaves a WH singularity could be said to emerge from it at the same time, in at least one set of coordinates.

    Well, suppose we take the perspective of an external observer hovering at some short distance above the horizon. In “pure” classical GR terms, is it possible for him to see both a steady stream of test particles passing him as they fall into the horizon, and a steady stream of test particles passing him as they emerge out of it, with each stream individually looking to him just like what he might see if he were hovering outside a normal black hole or a normal white hole?

    If this is possible, it would be interesting to then consider what the outgoing stream would look like to someone riding along with one of the ingoing test particles, and vice versa. For example, for the observer outside the horizon, is there any finite time-interval T such that, if he labels the ingoing particle which is passing him at a particular moment “A”, and then labels the outgoing particle that is passing him T later “B”, that the worldlines of A and B would actually have crossed somewhere inside the horizon? If so, then if A and B both had clocks attached which appeared to be ticking forward at the moment each one was passing the external observer, then given the way the radial dimension becomes a time dimension once inside the horizon, would that mean A would have seen B’s clock ticking backwards at the moment their worldlines crossed? Also, if the external observer sees particles passing in discrete intervals rather than continuously–say, 1 ingoing particle per second and 1 outgoing particle per second–then would an ingoing particle pass an infinite or finite number of outgoing particles before it reached a) the horizon and b) the singularity? Obviously I’m not asking you to do any detailed calculations here, just wondering aloud (and I suppose the answer may be that the original idea of an external observer seeing a constant stream of both ingoing and outgoing particles is impossible for some reason).

    Jesse, I have a lot of trouble figuring out what would happen in your simulation with regard to Hawking radiation, because the treatments of Hawking radiation that I have to hand all do global calculations that follow waves all the way from “past null infinity” to “future null infinity”, and at some stages even invoke the collapse that forms the black hole. I’m not competent to answer the question “Is there some local process dictated by the spacetime geometry alone that determines what the Hawking radiation just outside the horizon must be doing, irrespective of all boundary conditions?”

    It may be outdated, but on this page about Hawking radiation John Baez says that no one has managed to figure out a local description:

    How does this work? Well, you’ll find Hawking radiation explained this way in a lot of “pop-science” treatments:

    Virtual particle pairs are constantly being created near the horizon of the black hole, as they are everywhere. Normally, they are created as a particle-antiparticle pair and they quickly annihilate each other. But near the horizon of a black hole, it’s possible for one to fall in before the annihilation can happen, in which case the other one escapes as Hawking radiation.

    In fact this argument also does not correspond in any clear way to the actual computation. Or at least I’ve never seen how the standard computation can be transmuted into one involving virtual particles sneaking over the horizon, and in the last talk I was at on this it was emphasized that nobody has ever worked out a “local” description of Hawking radiation in terms of stuff like this happening at the horizon. I’d gladly be corrected by any experts out there… Note: I wouldn’t be surprised if this heuristic picture turned out to be accurate, but I don’t see how you get that picture from the usual computation.

    On the other hand, Steve Carlip elaborates on the “heuristic” description Baez talked about above on this page, I’m not sure if this contradicts what Baez said about the lack of a local description of Hawking radiation or not.
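    [Editor's note: for a sense of the scales being discussed, the standard formula for the Hawking temperature of a Schwarzschild black hole is T = ħc³/(8πGMk_B). This is the textbook result, not something derived in the linked pages; a minimal numerical sketch:]

```python
# Hawking temperature of a Schwarzschild black hole: T = hbar * c^3 / (8 * pi * G * M * k_B).
# Standard formula evaluated with CODATA constants; illustrative editorial sketch.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.98892e30       # solar mass, kg

def hawking_temperature(mass_kg):
    """Temperature (in kelvin) of the Hawking radiation from a hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"T for a solar-mass hole: {hawking_temperature(M_sun):.2e} K")       # ~6e-8 K
print(f"T for ten solar masses:  {hawking_temperature(10 * M_sun):.2e} K")  # ten times colder
```

    Note that this is far colder than the 2.7 K microwave background, so astrophysical black holes currently absorb far more radiation than they emit.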

  • http://www.imaginascience.com Newtoon

    I was happy to see that the explanation about Entropy did not use the “mess image”.

    The Wikipedia article about the Second Law (your link; people will see that there are “sub-links” about the Second Law in Wikipedia, because the subject is not so obvious or free of debate) is quite good in English, yet a bit disorganised because of frequent changes, and quite old-fashioned.

    I tried to modify the French “Second law” article in Wikipedia, to no avail.

    Once again, with your example of the egg and omelet, it is very important that people remember that the Second Law does not say that an omelet is more “messy” than an egg; it says that it is more likely to make an omelet from an egg than an egg from an omelet. To then say that this is because the egg is more “ordered” would be WRONG.

    A video to illustrate (funny) : http://www.youtube.com/watch?v=CyySAAc_KNI

  • http://www.gregegan.net/ Greg Egan

    Jesse wrote:

    Well, suppose we take the perspective of an external observer hovering at some short distance above the horizon. In “pure” classical GR terms, is it possible for him to see both a steady stream of test particles passing him as they fall into the horizon, and a steady stream of test particles passing him as they emerge out of it, with each stream individually looking to him just like what he might see if he were hovering outside a normal black hole or a normal white hole?

    No, it’s not possible.

    To be precise about what I mean: suppose your observer is hovering above the horizon, and at some instant in time we declare that he is at spacetime event E. There is then a certain collection of possible worldlines that (a) intersect the hole’s singularity, (b) pass through E, and (c) escape to infinity. Now, there is nothing in the geometry itself that orders those three events along each worldline, but according to the observer’s own personal arrow of time, either all the worldlines will be coming from the singularity, or all of them will be heading into it. If he himself could see a mixture of cases, then nobody would ever have made the statement “Nothing can escape from a classical black hole”!

    However, if you’re allowing the universe at large to contain systems with contradictory arrows of time, then if the objects travelling past the observer are undergoing complex processes (rather than being featureless particles), then there’s nothing [except the logistical issues of keeping such systems isolated and running in their chosen direction] to prevent some of the objects from “thinking” that they’re falling in and others that they’re emerging. If their arrows of time are tied to the particular epoch when they are very far from the hole, certainly some of these objects passing through E can be far from the hole at very different times than others — that’s just a matter of choosing their velocities in such a way that the “earlier” ones can catch up with the “later” ones (imposing a single arrow of time on the description there for the sake of clarity).

    I’ll have to think a lot more about the Hawking radiation issue; I’ve read both pages you linked to (and the section in Wald that deals with Hawking radiation), but I still can’t figure out if the process really is independent of all assumptions about distant boundary conditions.

  • http://www.jessemazer.com Jesse M.

    Greg wrote:
    No, it’s not possible.

    To be precise about what I mean: suppose your observer is hovering above the horizon, and at some instant in time we declare that he is at spacetime event E. There is then a certain collection of possible worldlines that (a) intersect the hole’s singularity, (b) pass through E, and (c) escape to infinity. Now, there is nothing in the geometry itself that orders those three events along each worldline, but according to the observer’s own personal arrow of time, either all the worldlines will be coming from the singularity, or all of them will be heading into it. If he himself could see a mixture of cases, then nobody would ever have made the statement “Nothing can escape from a classical black hole”!

    I guess I was thinking that the scenario I described could still be loosely consistent with the “nothing can escape” statement since it might still be that nothing that fell in could ever escape, the only particles that could escape would be ones spit out directly from the singularity (the time-reversal of trajectories falling in). But if it’s not possible, then I’m confused about what it is that prevents an observer outside a white hole from entering the horizon. You said earlier that a ship wouldn’t experience anti-gravity outside a white hole, that to maintain a constant distance from the horizon the ship would still have to thrust outward just like with a black hole…so if an observer was hovering above the horizon of a white hole and turned off the thrust, what would happen?

  • http://www.gregegan.net/ Greg Egan

    Jesse wrote:

    so if an observer was hovering above the horizon of a white hole and turned off the thrust, what would happen?

    I answered that in the last paragraph of #112: you’d always be moving towards the horizon, but never catching up with it (not in your proper time, or by anyone else’s coordinates). Trying to cross a horizon the “wrong way” is like trying to catch up with a pulse of light — and if you remember that the horizon itself consists of potential world lines of photons this becomes a bit less mysterious. Whereas crossing a horizon the “right way” is like allowing yourself to be overtaken by a pulse of light — all too easy. (But as you probably know, in SR if you have a head start and you accelerate constantly, you can avoid being overtaken even by light. Similarly, by constantly accelerating — applying thrust — you can avoid falling into a black hole.)
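    [Editor's note: the special-relativity analogy in the final parenthesis can be checked directly. With c = 1, a uniformly accelerated observer starting at x₀ (with proper acceleration a = 1/x₀) follows the hyperbola x(t) = √(x₀² + t²), while a light pulse launched from the origin follows x = t; the pulse never closes the gap, which only shrinks asymptotically to zero. A small sketch of this calculation:]

```python
# Rindler-style chase, units with c = 1.
# Accelerated observer: x(t) = sqrt(x0**2 + t**2); light pulse from origin: x = t.
# The lead sqrt(x0**2 + t**2) - t stays positive forever, tending to 0 as t -> infinity.
import math

def gap(x0, t):
    """Distance by which the accelerated observer still leads the light pulse at coordinate time t."""
    return math.sqrt(x0**2 + t**2) - t

x0 = 1.0  # head start; equals 1/a for proper acceleration a = 1
for t in [0.0, 1.0, 10.0, 100.0, 1000.0]:
    print(f"t = {t:7.1f}   gap = {gap(x0, t):.6f}")
```

    The gap falls off roughly as x₀²/(2t) at late times: always shrinking, never zero, which is exactly the situation of an infalling object chasing a white hole horizon "the wrong way".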

  • http://www.jessemazer.com Jesse M.

    Greg wrote:
    I answered that in the last paragraph of #112: you’d always be moving towards the horizon, but never catching up with it (not in your proper time, or by anyone else’s coordinates). Trying to cross a horizon the “wrong way” is like trying to catch up with a pulse of light — and if you remember that the horizon itself consists of potential world lines of photons this becomes a bit less mysterious.

    Ah, that makes sense, thanks. So that would seem to imply that you can decide whether a given massive object is a black hole or a white hole without paying any attention to the thermodynamic arrow of stuff around it (aside from the arrow of your own brain and measuring equipment), just by seeing whether an object dropped into it reaches the horizon in finite proper time or not. For example, if you’re on a platform hovering above the horizon you could drop a probe over the edge which constantly sends back radio messages of its current clock reading, calculate what its final clock reading would be as it crossed the event horizon if the object were a black hole, and then if you continue to receive messages of greater clock readings than that you know the object must be a white hole…is that right? If so, that would suggest a possible way of making sense of the notion that an object could still be called a “white hole” even if the entropy around it were increasing rather than decreasing (which would also imply a way of making sense of the notion that an object could still be called a ‘black hole’ even if the entropy around it were decreasing).

    By the way, do you have the Misner-Thorne-Wheeler “Gravitation” handy? There’s a section there that seems related to the question of whether a single object can behave like a black hole and a white hole at different points in its history, which you had said something about in comment #116…on p. 826 they describe the construction of the Novikov coordinate system, and it seems to be based on considering a collection of particles which are emitted from the singularity and rise up out of the event horizon like a white hole, but then fall back downwards through the event horizon like a black hole, and with the condition that “Every particle in the swarm is ejected in such a manner that it arrives at the summit of its trajectory (r = r_max, tau = 0) at one and the same value of the Schwarzschild coordinate time; namely, at t=0”. The coordinate system is constructed in such a way that each of these particles has a constant radial coordinate throughout its “cycloidal life”. So what I’m wondering is, does this mean that from the point of view of Schwarzschild coordinates, the object is behaving like a white hole from Schwarzschild time -infinity to 0, and then behaving like a black hole from Schwarzschild time 0 to +infinity? In #116 you said that an object could be a black hole for one segment of its life and then a white hole for the next segment, is this the same sort of thing?

  • http://www.gregegan.net/ Greg Egan

    Jesse,

    Your scheme where you drop a probe with a clock, and you monitor its signals reporting back its proper time does make sense to me as a way of distinguishing “black” holes from “white” — where these words take their meaning entirely by reference to the arrow of time that you (and also the probe) possess.

    The Novikov coordinates as described in MTW are actually leading into a separate issue, which is that the Schwarzschild solution for a perfect eternal classical black hole can be extended from what we’d normally think of as a black hole and its exterior into a larger solution that also includes a white hole and its exterior elsewhere. This is a kind of (non-traversable) wormhole known as the “Einstein-Rosen bridge”, that would “join” either two universes, or two parts of one universe. But (a) this is a mathematical idealisation that doesn’t apply to astrophysical black holes, (b) you could never travel through the “bridge” anyway, and (c) this is completely separate from the issue of considering the single exterior region around a hole undergoing a change in name because the people naming it at different times are subject to different thermodynamic arrows.

    MTW don’t explain any of this in the section on Novikov coordinates; you have to keep on reading through sections 31.5 and 31.6 before all of this is made clear. In particular, look at the curve that includes the points F, F’ and F” in figure 31.4(b) on page 835. If you follow this curve back in time prior to “the summit” at F, you’ll see that it actually has to cross t=-infinity before it can ascend from the horizon (let alone the singularity)! There’s a kind of symmetry in Novikov and Kruskal-Szekeres coordinates which is very beautiful, but a bit misleading, because this extended Schwarzschild geometry that they describe (if you take in the full range of their coordinates) is twice what would actually be there in reality.

  • http://www.geocities.com/aletawcox/ Sam Cox

    Appreciated your thoughts John…

    I have become more and more impressed by the common sense and significance of relativity and QM in explaining the world we observe.

    We know it well, but it is easy to forget that these concepts are, first and foremost, descriptive with extremely precise experimental veracity, in the case of GR for example, to more than 10 decimal places. That is easy to recite, but when we consider that the diameter of an atom is perhaps 10^-8 cm, we can see how GR works right down to levels of scale where quantum effects dominate.

    I said in another thread that a recent practical test of GR in an airplane was accurate to “a few centimeters”. What I did not say was that that level of accuracy had nothing to do with the precision of GR…the accuracy problem was our guesstimate of the distance between the antennae on the windows on each side of the cabin of the aircraft!

    GR accurately measures continental drift, the recession of the moon…with an accuracy of much less than one millimeter. The limits of measurement are essentially the limits of our instrumentation. This kind of accuracy is at the same time, awesome and profound.

    Gravitational time dilation for example, is not some esoteric idea but is at the heart of the way we observe the universe…a reason why we exist as we do. The grand proportion, the principles of binomial expansion, the observed speed of light, the behavior of the photon and the origins of particulation within observed scale are all interrelated concepts, and…as you point out, result in the way we observe the arrow of time.

  • http://www.gregegan.net/ Greg Egan

    Jesse

    I’m afraid I was completely wrong when I claimed that the proper time for an observer to fall to a white hole horizon was infinite. MTW section 25.5 makes it clear that the proper time to ascend from any r coordinate r1 to rest at a maximum r-coordinate R is exactly the same as the proper time to fall from R to r1. That those two times are the same for a black hole means they’ll also be the same for a white hole, and will involve finite proper times to cross between the singularity, the horizon, and the r-coordinate R outside the horizon.
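    [Editor's note: the MTW result cited here can be made concrete. In units with G = c = 1, radial free fall from rest at r = R follows the cycloid r(η) = (R/2)(1 + cos η), τ(η) = √(R³/8M)(η + sin η), so the proper time to fall to the horizon r = 2M, or to the singularity at η = π, is manifestly finite; by the time symmetry of the formula the ascent takes the same proper time. A sketch of the standard textbook calculation:]

```python
# Radial free fall from rest at r = R toward a Schwarzschild hole of mass M (G = c = 1):
# cycloid parametrization  r(eta) = (R/2)(1 + cos eta),  tau(eta) = sqrt(R**3 / (8*M)) * (eta + sin eta).
# The proper time to any r <= R is finite, and the time-reversed ascent takes exactly as long.
import math

def proper_time_to_radius(M, R, r):
    """Proper time for a body released at rest at R to fall to radius r (G = c = 1)."""
    eta = math.acos(2 * r / R - 1)          # cycloid parameter at radius r
    return math.sqrt(R**3 / (8 * M)) * (eta + math.sin(eta))

M = 1.0        # hole mass; horizon sits at r = 2M
R = 10.0 * M   # release point
tau_horizon = proper_time_to_radius(M, R, 2 * M)
tau_singularity = proper_time_to_radius(M, R, 0.0)  # eta = pi
print(f"proper time to horizon:     {tau_horizon:.3f} M")
print(f"proper time to singularity: {tau_singularity:.3f} M")
```

    Both values come out finite (a few tens of M for this release point), which is the content of the correction above.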

    The twist is, particles that “ascend from the singularity” must ascend from a different singularity than the one particles fall into; also, ascending particles cross through a different horizon. (See Fig 31.4b of MTW page 835) I think that if you’re going to have an eternal classical black hole (eternal in both time directions), you really do have to consider the full Schwarzschild geometry, which inevitably contains a black hole / white hole pair. And that pair is time-symmetric!

My statements about the light cones at the horizon were correct as far as they went, but when you have an eternal BH/WH pair like this, what happens when you time-reverse it is that the white hole becomes a black hole, and an observer who falls from the exterior falls into what is now the black hole! So you never find yourself stuck outside a white hole, unable to cross into the interior.

    Sorry for the confusion.

    Obviously the finite case will be different, but I’m not really clear as to what constitutes sensible formation and destruction events for white holes (unless we’re just going to time-reverse the normal processes of stellar collapse and Hawking decay for black holes, which would defeat our whole goal of figuring out what it means if you don’t time-reverse the whole universe along with the black hole). I’ll have to think about this some more.

  • John Merryman

    Sam,

I’m certainly not arguing with the math, rather making the point that too much focus on the details does distort our understanding of the larger picture. I’ve been arguing that time is a consequence of motion, similar to temperature, rather than a dimensional basis for it, like space. Consider a thermal medium, say a pot of hot water, with lots of water molecules moving about. If we were to derive a timekeeping process from this situation, we would take the motion of one of these points of reference and measure it against the medium it is moving through. The point is the hand and the medium is the face of the clock. Obviously all the other points are hands of their own clocks, but are medium/face for all other clocks. As Newton said, “For every action, there is an equal and opposite reaction.” So the motion of any point/hand is balanced by the reaction of the medium/face of the clock. To the hands of the clock, the face goes counterclockwise.
    Time is described as a dimension because it has direction from past events to future ones, but these events go from being future potential to past circumstance. Tomorrow becomes yesterday. In the thermodynamic medium, the relationships of these points constitute an event, even though the perspective is different for every point. While any and all of the points go from past events to future ones, the medium against which any point is being judged is the overall context, which once created, is displaced by the next, so this event goes from present to past. Mass is the face of the clock. As form it is information that goes from future potential to past circumstance. The energy is the hand of the clock, going on to the next unit of form and time, as it leaves the old.
This collapsing wave of future potential turning into past circumstance is distilled out as linear narrative. The quantum event, the bottle of poison, the cat, the box, our eyes. This linear progression is a stream of specific detail, like the path of a particular molecule traveling through the larger medium and the series of encounters involved. Yet there are innumerable other points of reference, each describing its own narrative, and all this activity exists in an equilibrium, so there are waves of all these other narratives crashing around as potential turns to actual and is then replaced. Nothing really collapses to a point; everything just continues on its merry way, because every narrative amounts to the center of its own coordinate system, in which circumstances determine the rate of change, and there is no one dimension of time. The only absolute temperature is the complete absence of it, and the same applies to time.
    While the math may be accurate down to the last decimal point, the real question is what is being measured. Time is a measure of motion, not the other way around.

  • http://www.gregegan.net/ Greg Egan

    Well, suppose we take the perspective of an external observer hovering at some short distance above the horizon. In “pure” classical GR terms, is it possible for him to see both a steady stream of test particles passing him as they fall into the horizon, and a steady stream of test particles passing him as they emerge out of it, with each stream individually looking to him just like what he might see if he were hovering outside a normal black hole or a normal white hole?

    The answer I gave to this previously (in #124) was incomplete. If the black hole has an infinite past, you do see particles that escaped in the past from the white hole half of the extended Schwarzschild geometry, at the same time as you see particles falling in, destined for the black hole horizon and singularity. Some of these particles will be the same, i.e. they go from the white hole singularity to the black hole singularity; others will go out to, or come in from, infinity.

    But if it’s an astrophysical black hole that formed some finite time ago from a collapse, then rather than seeing particles that left a white hole singularity (which is no longer part of the solution), you’ll see the massively red-shifted light that was emitted from the surface of the collapsing star just before it fell through the horizon. The luminosity of that surface emission drops exponentially with time, so in effect it very rapidly becomes the blackness of the black hole.
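The exponential dimming Greg describes has a definite timescale: for a Schwarzschild hole the luminosity e-folding time is usually quoted as 3√3 GM/c³. A quick sketch (a solar-mass hole is assumed purely for illustration):

```python
import math

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30  # kg, assumed here just to set the scale

def efold_time(M):
    """Luminosity e-folding time 3*sqrt(3)*GM/c^3 for light escaping a
    star collapsing through its Schwarzschild horizon (standard result)."""
    return 3.0 * math.sqrt(3.0) * G * M / c ** 3

t_e = efold_time(M_sun)
print(f"e-folding time: {t_e:.2e} s")          # ≈ 2.6e-5 s for one solar mass
# Time for the surface luminosity to drop by a factor of 10^20:
print(f"{t_e * 20.0 * math.log(10.0):.2e} s")  # ≈ 1.2e-3 s
```

So “very rapidly becomes the blackness of the black hole” means within milliseconds for a stellar-mass hole.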

    It’s that case — where there’s only one horizon and one singularity, which must be seen by a given observer either as purely a black hole or purely a white hole according to the observer’s arrow of time — that I was describing in my answer in #124.

    And if you time-reverse a black hole formed by a collapsing star, then if you let yourself fall freely towards the white hole produced by the reversal, you never cross the horizon; rather, after a finite proper time, you collide with the time-reversed collapsing star emerging from the white hole.

  • http://www.pipeline.com/~lenornst/index.html Len Ornstein

    Since no one has yet commented on my post above (121), I assume that it may be due to unfamiliarity with the works of Gustafson or Zhu.

    Perhaps a quote from Zhu will help stir things up:

    Huaiyu Zhu, On the Physical Reality of Wave-Particle Duality

    “It will be shown that mathematical justification of entropy must always rely on a quantum assumption.

    Unfortunately, the standard quantum theory…was still reversible, save for the measurement process, so the paradoxes became even more acute: (1) The measurement process is irreversible, so it could not, even in principle, be described as part of the physical world. (2) Because of the uncertainty principle the second law could not be attributed to the lack of precision in measurements.

    The purpose of this letter is to explore the idea that the difficulties mentioned above may be overcome by a single postulate, that the random quantum jumps, hitherto confined to measurement alone, if admitted at all, are inherent in the physical world.”

  • http://www.geocities.com/aletawcox/ Sam Cox

    “(Unfortunately), the standard quantum theory…was still reversible, save for the measurement process, so the paradoxes became even more acute: (1) The measurement process is irreversible, so it could not, even in principle, be described as part of the physical world.”

    Len, the field work in this area I have studied indicates that at the quantum level of scale, the measurement process itself is reversible, not irreversible. We can observe and measure events to occur a certain way, change our minds and proceed to observe and measure them to happen differently, with a different measurable outcome.

At the quantum level of scale, CPT symmetry is a fact of life, save for a few sub-atomic particles. Since these symmetry-breaking particles nevertheless appear in predictable numbers, it can be presumed that even they emerge from a process inherent in the universal structure. The implication, of course, is that even these “violations of symmetry”, since they repeat and are predictable, are part of an overall symmetrical system.

Chirality is a given…it has to be part of any universe where information exists and an arrow of time can be observed at some frame of reference. What seems a flat (or curved) and uniform surface when observed from one scale, like the Earth’s curvature observed from a distance, may, when observed from a less remote coordinate, be correctly observed to be quite asymmetrical…

    I’m not really sure there is a paradox here…

    I don’t personally understand why quantum reversibility is (unfortunate) either…it just is…it is behaviour we observe. Moreover this observed reality dovetails very nicely with the mathematical symmetry of most of the laws of physics…including GR and of course, Quantum Mechanics…

  • http://www.geocities.com/aletawcox/ Sam Cox

    “The purpose of this letter is to explore the idea that the difficulties mentioned above may be overcome by a single postulate, that the random quantum jumps, hitherto confined to measurement alone, if admitted at all, are inherent in the physical world.”

    Len, I think you make a very valid point…”are inherent in the physical world”.

  • Lawrence Crowell

White holes! What are they? Egan (if I remember the name) is on the right track. The Penrose diagram for the Schwarzschild solution is a pentagon with an X crossing from the top to bottom corners. Try to draw it or look it up. The bottom and top horizontal edges are the singularity at r = 0, which curiously is a three-dimensional space where the Weyl curvature diverges. There are two timelike regions, the squares on either side of the X, and two triangular wedges that are spacelike regions. The top one is the black hole, and the bottom is the white hole. Both the black hole and the white hole are “eternal”; the white hole is a source of stuff coming out and the black hole an absorber. If you can find a Finkelstein diagram for a black hole and turn it upside down, you have the white hole.

Is the white hole physical? No. The problem is that black holes are not eternal; they are formed by collapsing matter, and this truncates the Penrose diagram so there is just one timelike wedge and only the black hole remains. Some time back I thought about using black holes, white holes and Euclideanized gravity solutions as a model for instantons and excitons, but abandoned the effort. There was some talk back in the early 70s about white holes as “creation fields” in the universe, which spew out material and cause the expansion of the universe. None have been found, and the idea is no longer regarded as even theoretically viable.

  • http://www.pipeline.com/~lenornst/index.html Len Ornstein

    Sam:

    Please note:

Both sections of your response were addressed not to my words, but to parts of the quoted excerpt from Zhu’s 8-page letter.

  • http://physicsmuse.wordpress.com/ Sandy

    The world is made up of overlapping relationships at multiple scales. Time is what we call changes in the configuration of these relationships. Acceleration also causes changes in relationship between things. I agree that time is a product of motion, specifically acceleration. Gravity and acceleration are both attributes associated with moving mass. So, maybe moving mass creates time and a temporal reference frame that is consistent for that system as long as that system exists. If so, initial conditions come into play, but not low entropy.

    Why do we attribute the arrow of time to entropy when we are surrounded by examples of entropy both increasing and decreasing? We don’t observe the future affecting the past. We see causality going in one direction (though relativity of reference frames gives me pause on that one). That is the view from here, now.

  • Paul Valletta

OK, let’s see. On the issue of the “Blackhole-Whitehole” distinction, as sort of stated by Lawrence in post 136, Penrose has a new and perceptive idea for a solution to the problem.

If one were to derive the collapse of a Stellar object (star) of a certain Mass, then a Blackhole remnant becomes the end product in GR. Now for Galactic Blackholes the process is reversed, as observer parameters are introduced into the solutions; I believe Smolin introduced this into one type of model. So how does one differentiate between Stellar-collapse Blackholes and Galactic Blackholes, with the corresponding Whitehole solutions?

Using cyclic models, the solutions intertwine at a “crunch”, and out of the solutions comes another Universe with, amongst other factors, Time’s Arrow being instrumental in determining what WAS and what WILL be, that is, what happened before the crunch and what happened after the crunch.

If one looks up into the night sky, there are a lot of “White” holes visible; it’s just that we call them Stars. These Stars/StellarWhiteholes will emerge out of this universe (when seen by following observers in the “next” universe) as the primordial Blackholes in Smolin’s model. The time reversal ONLY occurs within the parameters of close to a crunch/bounce.

You can derive stellar-collapse blackholes; if you could physically reverse the process within our Universe as a single isolated system, then the Star would re-emerge from the collapsed blackhole. But the laws of thermodynamics do not allow this, except at a Universal critical “end” phase.

On the “other” side of a bounce (which can only be retraced by observers WITHIN that cosmic horizon), what were Galactic blackholes appear to be spitting out vast quantities of whiteholes, or Stars.

So Penrose’s idea, in its simplest form, works thus: our local stars are “Whiteholes” to any previous or post observers in any “other” Universe. Our Galactic blackholes are Time’s Arrow starting points of a previous singular point, whereby White holes become Blackholes, and Blackholes become Whiteholes. The entropy of a previous Universe cannot but influence what follows, with absolutely precise form and function: as the remnant energy of our Universe disperses and wanes towards a crunch/bounce, there are fewer particles available, so there are very few colliding events; Time as we know it appears to be settling down into a process of absolute “order” rather than chaos.

    Out of “this>

  • http://www.jessemazer.com Jesse M.

Greg, thanks for the clarification, this discussion and that section of the MTW book are giving me a better understanding of Schwarzschild black holes. I think I can follow the basics of what’s going on in the Kruskal-Szekeres diagram on p. 834; from this it seems like my earlier suggestion of an observer hovering at a fixed radius above the horizon and seeing a constant stream of both ingoing and outgoing particles would be possible, so as seen from the outside the object is more like a “gray hole”, neither purely black nor purely white. But I erred in imagining that the worldlines of ingoing and outgoing test particles would ever cross inside the horizon; in fact, each ingoing test particle that passes the observer will subsequently cross the worldlines of all the infinite number of outgoing particles that the fixed-distance observer will receive after that moment, before the ingoing particle reaches the horizon (in finite proper time, of course). Likewise, after crossing the horizon each outgoing test particle crosses the worldline of every ingoing particle that has passed the fixed-distance observer up until the moment the outgoing particle reaches him.
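One way to get comfortable with the diagram on p. 834 is to check the defining invariant of Kruskal-Szekeres coordinates numerically. A sketch (exterior region only, with the usual sign conventions; M = 1 and r = 3M are illustrative choices): curves of constant r satisfy u² − v² = (r/2M − 1)e^{r/2M} independently of t, so they are hyperbolae, and at the horizon r = 2M the invariant vanishes and the curve degenerates to the diagonals u = ±v.

```python
import math

def kruskal(t, r, M):
    """Schwarzschild (t, r) -> Kruskal-Szekeres (u, v), exterior region r > 2M."""
    x = math.sqrt(r / (2.0 * M) - 1.0) * math.exp(r / (4.0 * M))
    return x * math.cosh(t / (4.0 * M)), x * math.sinh(t / (4.0 * M))

M, r = 1.0, 3.0
# u^2 - v^2 = (r/2M - 1) e^{r/2M} depends on r alone, so a worldline of
# constant r is a hyperbola; radial light rays are the 45-degree diagonals.
for t in (0.0, 3.0, 7.0):
    u, v = kruskal(t, r, M)
    print(round(u * u - v * v, 4))   # same value every time: 2.2408
```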

As you said, the Kruskal-Szekeres diagram shows that the region of spacetime that the ingoing particles find themselves in after crossing the horizon is different from the region of spacetime that the outgoing particles came from before crossing the horizon, so I guess it is only in these interior regions that it really makes sense to talk about a “white hole” and a “black hole” as fundamentally different objects. Or at least they’re potentially different; MTW mention on p. 840 that in principle one could identify these two distinct regions on the diagram with one another, but they mention two objections, one involving a “conical singularity” where there is no local Lorentz frame, and one involving causality violations where observers can meet themselves going backward in time. But another physicist, Andrew Hamilton, also mentions the possibility of identifying the regions in this way, and argues that the conical singularity is not really a reason to reject this possibility out of hand, and also says the objection about causality violations is incorrect, I think because any time an “ingoing” object interacts with an “outgoing” one, it will only influence the part of the “outgoing” object’s worldline that is closer to the singularity than the interaction-event. Unless we assume that the matter coming out of the black hole is at maximum entropy, though, it seems to me there could still be weird issues of objects with different thermodynamic arrows of time meeting inside the horizon, or of outgoing objects having to flip their arrow of time at the moment they cross the horizon.

    I’m still pretty much in the dark about the ingoing vs. outgoing issue in the case of non-Schwarzschild black holes that don’t exist forever. From the Kruskal-Szekeres diagram of a collapsing star on p. 848, I think I see what you meant in comment #132 about the only outgoing light being light emitted by particles in the collapsing star before they crossed the event horizon; there’s only one event horizon here, and it lies along the diagonal r = 2M, t = + infinity coordinate line, so the only way for a diagonal light ray to cross the horizon is for it to be parallel to the other diagonal r = 2M, t = -infinity coordinate line, which will make it an “ingoing” ray. But I wonder if things would change if you took into account Hawking radiation which allows the horizon to shrink after the star has collapsed. In terms of the diagram, it seems like a shrinking horizon would be represented by a line closer to vertical than the r = 2M, t = +infinity coordinate line, in which case it might be possible to have “outgoing” photon worldlines parallel to this t = + infinity coordinate line which crossed the shrinking event horizon (though these photons would not actually emerge from the singularity, if I’m picturing it right…maybe they’d be able to enter the horizon from ‘region III’ and leave it in ‘region I’ without ever running into the singularity?) But then again, I’m not sure if it even makes sense to have a Kruskal-Szekeres diagram in which you don’t have an event horizon lying along the diagonal coordinate line, or whether light beams would necessarily still be diagonals in this case (could you describe an ordinary flat Minkowski spacetime in terms of Kruskal-Szekeres coordinates, and if so would all light beams still be diagonals?)

    It might also be interesting to consider the hypothetical case of a perfectly time-symmetric black/white hole with a finite lifetime, which initially forms from a collapsing shell of matter (or converging time-reversed Hawking radiation), lasts for some time, then blows apart in a time-reversed version of its formation. I’m not sure if this is physically allowable in general relativity, although it must at least be allowable to have a white hole which has lasted from t = – infinity but then blows apart in the time-reversed version of a normal black hole’s formation. If it is possible to have a time-symmetric black-white hole with a finite lifetime, then I wonder if it could have both outgoing and ingoing photons crossing the horizon, and whether it would potentially have two distinct inner regions like a Schwarzschild black hole (and if so, maybe you’d need a different set of coordinates than the Kruskal-Szekeres ones to make this clear, just as the Schwarzschild coordinates don’t really work for depicting the two inner regions of a Schwarzschild black hole).

  • http://www.geocities.com/aletawcox/ Sam Cox

Len, I agree that quantum fluctuations are inherent to the sub-microscopic universe, but I’m not sure a single postulate explains such behavior, or that such a postulate is necessary anyway. Sometime when I have a chance, I’ll look it over…I don’t like to pre-judge something I have not studied, but those are first impressions based on the content of what you posted…Sam

  • John Merryman

    Sandy,

    It is a pleasant surprise to see someone else questioning whether time is fundamental. The institutional effect on science makes it acceptable to project established theory in the most fantastical, convoluted and complex forms imaginable, but fresh insights based on basic observation are too pedestrian to consider.
The discussion of black and white holes is a good example: Gravitation contracts. Radiation expands. Everything else is detail, perspective, or some combination thereof, and when the two columns are added up and the loose ends tied together, there won’t be any need for all the supernatural phenomena currently proposed, from extra universes and additional meta-dimensions to Big Bang theory and its various patches, from Inflation to Dark Energy.

To those whom this may offend, it is another attempt to crack the facade and start a discussion. Surely I’m too stupid to be right, and with all the intelligent people in this conversation, someone should have the wherewithal to set me straight. I may be too thick to understand, but it would be a good test of communication skills.

  • http://www.gregegan.net/ Greg Egan

    Jesse

    I don’t want to comment further on this until I’ve done some more reading, but if you want to see a Penrose diagram (aka conformal diagram) of an evaporating black hole, there’s one on page 413 of Wald’s General Relativity. (BTW, in the Wikipedia article I just linked to, they do actually call the infinite Schwarzschild geometry a “grey hole”.) Penrose diagrams are a great way of keeping track of causal relationships, and if you know when light signals can get from one event in spacetime to another, you also know that any material particles around will have to travel between the two sides of the (two-dimensional version of the) light cones.

  • http://quantumnonsense.blogspot.com/ Qubit

The arrow of time could be like a coil spring; a coil-spring design allows for closed loops that start and end in the same place (but these closed loops have to be observed from one side). If there is a closed loop in the centre of the spring, it could prevent the universe from changing direction but also allow for the possibility of time travel (in a rather frightening way). I think the arrow of time entirely depends on the ability of an observer to deal with the vast amount of information that’s needed to produce a closed loop halfway through a universe.

    Qubit

  • http://physicsmuse.wordpress.com/ Sandy

    The future can affect the present where there is consciousness (free will) or any plan or algorithm working towards a predetermined goal (a program). Only the past is out of reach. What attribute(s) does the past have that the present and future don’t? One thing is a lack of uncertainty.

    There is an arrow of time from the past to the present and future, but there is also an arrow of time from the future to the present, with the past walled off.

  • http://www.jessemazer.com Jesse M.

Thanks again Greg, I don’t own Wald’s book but I’ll check out a copy and take a look at that Penrose diagram. As I did some more thinking about this issue, I found it also helped to picture what was going on in terms of the “ingoing Eddington-Finkelstein” coordinates on pp. 828-829 of MTW, where ingoing light rays are always represented as diagonals, but outgoing rays can be curved. The outgoing light rays from the center of a collapsing star immediately before it crossed the event horizon are shown in the diagram labelled “Eddington-Finkelstein spacetime diagram of the collapsing sphere” on this page; looking at the diagram, I can more easily see what you meant in #132 about a distant observer forever seeing outgoing rays from the moments before the collapsing star crossed the event horizon, in the case of a black hole which lasts forever after the collapse (realistically the observer wouldn’t actually be able to detect them after a while because they’d be too redshifted, and anyway light is emitted in discrete photons rather than continuously, but I’m really just talking about what geodesics would represent the past light cone of events on the distant observer’s worldline). For a black hole which subsequently evaporates, I think it would be a modified version of this diagram where the outside observer sees outgoing rays from the moments before the star crosses the horizon for a long time, then suddenly sees light from events at the R=0 coordinate immediately after the black hole finally evaporated completely. I found a paper, The Internal Geometry of an Evaporating Black Hole, which at the very end has a caption for fig. 3 describing the outgoing light geodesics for an evaporating black hole in advanced/ingoing Eddington-Finkelstein coordinates (the diagrams have to be downloaded separately from here), which shows that an outside observer would continue to see light that had been very close to the horizon for a long time until the final evaporation.
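The “distant observer forever sees light from just before the horizon” behavior comes from the logarithmic divergence of the tortoise coordinate r* that underlies the Eddington-Finkelstein construction. A sketch (G = c = 1; the radii are illustrative choices): outgoing radial rays satisfy t − r* = const, so the Schwarzschild-time cost of climbing out grows without bound as the emission point approaches r = 2M.

```python
import math

def tortoise(r, M):
    """Tortoise coordinate r* = r + 2M ln(r/2M - 1), valid for r > 2M (G = c = 1).
    Outgoing radial light rays satisfy t - r* = const."""
    return r + 2.0 * M * math.log(r / (2.0 * M) - 1.0)

M, r_obs = 1.0, 50.0   # illustrative: observer fixed at r = 50M
# Schwarzschild-time cost for an outgoing ray to climb from r_emit to r_obs
# diverges logarithmically as the emission point approaches the horizon:
for r_emit in (3.0, 2.1, 2.001, 2.0000001):
    dt = tortoise(r_obs, M) - tortoise(r_emit, M)
    print(f"emitted at r = {r_emit}M: climbs out in t ≈ {dt:.1f}M")
```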

    I assume, then, that if we turn this sort of diagram upside down, it shows what would happen to the ingoing light rays in the case of a white hole which formed via time-reversed Hawking radiation and then later blew apart as a time-reversed collapsing star, as seen in the outgoing or ‘retarded’ Eddington-Finkelstein coordinates described on pp. 829-831 of MTW (where it’s the outgoing light rays that are always represented as diagonals). So, this also helps me see what you meant at the end of #132 when you said “if you let yourself fall freely towards the white hole produced by the reversal, you never cross the horizon; rather, after a finite proper time, you collide with the time-reversed collapsing star emerging from the white hole.” This also suggests a sense in which a black hole could be differentiated from a white hole regardless of the arrow of time of matter outside–even if we had a black hole that formed via converging light that looked just like time-reversed Hawking radiation, so that its formation and growth appeared symmetrical with its shrinking and evaporation in the ingoing Eddington-Finkelstein coordinates (including a reversal of the thermodynamic arrow outside the hole at its moment of maximum size), it could still be differentiated from its own time-reversed white hole because a freefalling observer on the outside could cross the black hole’s event horizon in a finite time, while for the white hole they’d be flung forward in time on a path that hugged the outside of the event horizon until the white hole had evaporated under them (probably in an amount of proper time comparable to the proper time it took the first observer to fall into the event horizon of the black hole from the same distance, although I’m not sure about that).

    The only thing that’s still a little confusing to me is what the black hole would look like in outgoing/retarded Eddington-Finkelstein coordinates, or what the white hole would look like in ingoing/advanced Eddington-Finkelstein coordinates. It almost seems as though in these cases the holes would have to form and evaporate in zero coordinate time in order for the distant observer’s light cones to come out right. I think maybe what was confusing me before was the thought that if a black hole’s formation and growth were symmetrical with its shrinking and evaporation as plotted in the ingoing/advanced coordinate system, then the drawing of its event horizon would look exactly the same in the outgoing/retarded coordinate system, but I suppose there’s no reason to expect that should have to be true. At some point I need to either study a GR textbook on my own or go to graduate school, so I can figure out how to do these sorts of plots myself…

  • John Merryman

    Sandy,

    There is an arrow of time from the past to the present and future, but there is also an arrow of time from the future to the present, with the past walled off.

But the present becomes the past (and at an ever-increasing rate, the older you get), so that wall is being breached continuously.

  • http://tyrannogenius.blogspot.com Neil B.

    Hey, anyone remember about the paper by Einstein and Tolman (?) supposedly saying that the past was not definite due to quantum info issues? I don’t mean, “merely” that we can’t find out all details about it. I mean, literally indistinct despite our observing specific things happening now, etc. I don’t think it was a MW type thing. Is that what most workers think?

  • http://aeolist.wordpress.com Ponder Stibbons

    I haven’t had time to read through all the comments, so apologies if someone has mentioned this already. But I wouldn’t glibly collapse our lack of memory of the future, our conceptions of cause and effect, and the second law of thermodynamics into the same arrow of time. There is no generally accepted argument for why any of those should cause the others, or why they should all share a common cause. For one, it seems obvious that our psychological arrow of time still applies to observed events that involve systems without a well-defined thermodynamic entropy (which, in fact, includes most systems), suggesting that the psychological arrow of time cannot be explained by the thermodynamic arrow.

  • http://aeolist.wordpress.com Ponder Stibbons

    A contrary view worth mentioning, I think, is John Earman’s argument that there is as yet no good reason to accept cosmological arguments for a low entropy past.

  • Pingback: It’s Over… « QED

  • http://physicsmuse.wordpress.com/ Sandy

    John,
That the present becomes the past is not the same as being able to affect the current past while in the current present… But it was fun to think about the duration of the present. I realized that music, which is an art of time not space, gives me my best shot at comprehending (feeling) the present as a moving target. Instead of an arrow from past to future, there is the present in the middle, with an arrow going to the past and an arrow going to the future. The present is moving both ways.

  • http://physicsmuse.wordpress.com/ Sandy

I like to think about the difference between things and events. This is a difficult exercise because although they are clearly different, they are both configurations of matter/energy. Perhaps it is just rhetorical: an event is merely defined by its time element (something happened) instead of its space element (something is). Perceiving an event is us catching the universe in the act of reconfiguring (change). The wearing away of rock, is that an event? Basically it is. So our concept of an event is just us noticing change: consciousness (an event) tracking change (an event). Our brains are an apparatus for the perception of time. Time may or may not exist as a constant background, but the thing that we call time (change) is an important thing to track. Our senses alone could not track change; our brains do, by using memory.

    Can someone tell me what time does in equations describing entanglement?

  • Pingback: The Overhyped Cosmological Arrow of Time « The truth makes me fret.

  • Pingback: The Lopsided Universe | Cosmic Variance

  • Pingback: It’s about time…. « Shores of the Dirac Sea

  • Pingback: A Brief Walk Down Stoney Street | Screaming Planet

  • Pingback: What if Time Really Exists? | Cosmic Variance | Discover Magazine

  • Pingback: La Nature du Temps « Dr. Goulu

  • Pingback: Have a Thermodynamically Consistent Christmas | Cosmic Variance | Discover Magazine

  • Pingback: Recordações do Futuro. Por que não? « Comentários, Críticas, Dicas etc.



Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Here are some of his favorite blog posts, home page, and email: carroll [at] cosmicvariance.com .
