The Black Hole War

By Sean Carroll | July 28, 2008 12:19 am

Lenny Susskind has a new book out: The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics. At first I was horrified by the title, but upon further reflection it’s grown on me quite a bit.

Some of you may know Susskind as a famous particle theorist, one of the early pioneers of string theory. Others may know his previous book: The Cosmic Landscape: String Theory and the Illusion of Intelligent Design. (Others may never have heard of him, although I’m sure Lenny doesn’t want to hear that.) I had mixed feelings about the first book; for one thing, I thought it was a mistake to put “Intelligent Design” there in the title, even if it were to be dubbed an “Illusion.” So when the Wall Street Journal asked me to review it, I was a little hesitant; I have enormous respect for Susskind as a physicist, but if I ended up not liking the book I would have to be honest about it. Still, I hadn’t ever written anything for the WSJ, and how often does one get the chance to stomp about in the corridors of capitalism like that?

The good news is that I liked the book a great deal, as the review shows. I won’t reprint the thing here, as you are all well-trained when it comes to clicking on links. But let me mention just a few words about information conservation and loss, which is the theme of the book. (See Backreaction for another account.)

It’s all really Isaac Newton’s fault, although people like Galileo and Laplace deserve some of the credit. The idea is straightforward: evolution through time, as described by the laws of physics, is simply a matter of re-arranging a fixed amount of information in different ways. The information itself is neither created nor destroyed. Put another way: to specify the state of the world requires a certain amount of data, for example the positions and velocities of each and every particle. According to classical mechanics, from that data (the “information”) and the laws of physics, we can reliably predict the precise state of the universe at every moment in the future — and retrodict the prior states of the universe at every moment in the past. Put yet another way, here is Thomasina Coverley in Tom Stoppard’s Arcadia:

If you could stop every atom in its position and direction, and if your mind could comprehend all the actions thus suspended, then if you were really, really good at algebra you could write the formula for all the future; and although nobody can be so clever as to do it, the formula must exist just as if one could.

This is the Clockwork Universe, and it is far from an obvious idea. Pre-Newton, in fact, it would have seemed crazy. In Aristotelian mechanics, if a moving object is not subject to a continuous impulse, it will eventually come to rest. So if we find an object at rest, we have no way of knowing whether until recently it was moving, or whether it’s been sitting there for a long time; that information is lost. Many different pasts could lead to precisely the same present; whereas, if information is conserved, each possible past leads to exactly one specific state of affairs at the present. The conservation of information — which also goes by the name of “determinism” — is a profound underpinning of the modern way we think about the universe.
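
The contrast can be made concrete with a toy sketch (purely illustrative; the update rules and numbers here are made up, not anything from the post):

```python
# Toy illustration of information-conserving vs. information-losing dynamics.
# Newtonian-style rule: invertible, so each present state has exactly one past.
# Aristotelian-style rule: everything "comes to rest", so distinct pasts merge.

def newtonian_step(state):
    """Reversible update: (position, velocity) -> (position + velocity, velocity)."""
    x, v = state
    return (x + v, v)

def newtonian_unstep(state):
    """Exact inverse of newtonian_step: the past can always be recovered."""
    x, v = state
    return (x - v, v)

def aristotelian_step(state):
    """Irreversible update: the velocity is simply set to zero."""
    x, v = state
    return (x, 0)

# Two different pasts...
past_a = (0, 3)
past_b = (0, -7)

# ...stay distinguishable under the reversible rule:
assert newtonian_step(past_a) != newtonian_step(past_b)
assert newtonian_unstep(newtonian_step(past_a)) == past_a

# ...but collapse to the same present under the irreversible rule:
assert aristotelian_step(past_a) == aristotelian_step(past_b)  # information lost
```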

Determinism came under a bit of stress in the early 20th century when quantum mechanics burst upon the scene. In QM, sadly, we can’t predict the future with precision, even if we know the current state to arbitrary accuracy. The process of making a measurement seems to be irreducibly unpredictable; we can predict the probability of getting a particular answer, but there will always be uncertainty if we try to make certain measurements. Nevertheless, when we are not making a measurement, information is perfectly conserved in quantum mechanics: Schrodinger’s Equation allows us to predict the future quantum state from the past with absolute fidelity. This makes many of us suspicious that this whole “collapse of the wave function” that leads to an apparent loss of determinism is really just an illusion, or an approximation to some more complete dynamics — that kind of thinking leads you directly to the Many Worlds Interpretation of quantum mechanics. (For more, tune into my Bloggingheads dialogue with David Albert this upcoming Saturday.)
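
The norm-preserving, reversible character of Schrodinger evolution is easy to check numerically in a toy two-state system (a sketch with a made-up Hamiltonian, not anything from the book):

```python
import numpy as np

# Schrodinger evolution U = exp(-iHt) for Hermitian H is unitary: the norm
# (total probability) is conserved, and U can be run backwards with U†.

H = np.array([[1.0, 0.5], [0.5, -1.0]])  # a Hermitian toy "Hamiltonian"
t = 0.7
evals, evecs = np.linalg.eigh(H)         # diagonalize: H = V diag(e) V†
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi = np.array([0.6, 0.8], dtype=complex)  # normalized initial state
psi_later = U @ psi

# Norm is conserved...
assert np.isclose(np.linalg.norm(psi_later), 1.0)
# ...and the evolution is invertible: U† undoes U, so the past can be retrodicted.
psi_back = U.conj().T @ psi_later
assert np.allclose(psi_back, psi)
```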

In any event, aside from the measurement problem, quantum mechanics makes a firm prediction that information is conserved. Which is why it came as a shock when Stephen Hawking said that black holes could destroy information. Hawking, of course, had famously shown that black holes give off radiation, and if you wait long enough they will eventually evaporate away entirely. Few people (who are not trying to make money off of scaremongering about the LHC) doubt this story. But Hawking’s calculation, at first glance (and second), implies that the outgoing radiation into which the black hole evaporates is truly random, within the constraints of being a blackbody spectrum. Information is seemingly lost, in other words — there is no apparent way to determine what went into the black hole from what comes out.

This led to one of those intellectual scuffles between “the general relativists” (who tended to be sympathetic to the idea that information is indeed lost) and “the particle physicists” (who were reluctant to give up on the standard rules of quantum mechanics, and figured that Hawking’s calculation must somehow be incomplete). At the heart of the matter was locality — information can’t be in two places at once, and it has to travel from place to place no faster than the speed of light. A set of reasonable-looking arguments had established that, in order for information to escape in Hawking radiation, it would have to be encoded in the radiation while it was still inside the black hole, which seemed to be cheating. But if you press hard on this idea, you have to admit that the very idea of “locality” presumes that there is something called “location,” or more specifically that there is a classical spacetime on which fields are propagating. Which is a pretty good approximation, but deep down we’re eventually going to have to appeal to some sort of quantum gravity, and it’s likely that locality is just an approximation. The thing is, most everyone figured that this approximation would be extremely good when we were talking about huge astrophysical black holes, enormously larger than the Planck length where quantum gravity was supposed to kick in.

But apparently, no. Quantum gravity is more subtle than you might think, at least where black holes are concerned, and locality breaks down in tricky ways. Susskind himself played a central role in formulating two ideas that were crucial to the story — Black Hole Complementarity and the Holographic Principle. Which maybe I’ll write about some day, but at the moment it’s getting late. For a full account, buy the book.

Right now, the balance has tilted quite strongly in favor of the preservation of information; score one for the particle physicists. The best evidence on their side (keeping in mind that all of the “evidence” is in the form of theoretical arguments, not experimental data) comes from Maldacena’s discovery of duality between (certain kinds of) gravitational and non-gravitational theories, the AdS/CFT correspondence. According to Maldacena, we can have a perfect equivalence between two very different-looking theories, one with gravity and one without. In the theory without gravity, there is no question that information is conserved, and therefore (the argument goes) it must also be conserved when there is gravity. Just take whatever kind of system you care about, whether it’s an evaporating black hole or something else, translate it into the non-gravitational theory, find out what it evolves into, and then translate back, with no loss of information at any step. Long story short, we still don’t really know how the information gets out, but there is a good argument that it definitely does for certain kinds of black holes, so it seems a little perverse to doubt that we’ll eventually figure out how it works for all kinds of black holes. Not an airtight argument, but at least Hawking buys it; his concession speech was reported on an old blog of mine, lo these several years ago.
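
The translate-evolve-translate-back logic can be caricatured in a few lines (a deliberately silly toy dictionary; nothing here resembles the actual AdS/CFT map):

```python
# Toy caricature of the duality argument: if a dictionary maps "gravity" states
# invertibly to "dual" states, and the dual evolution is one-to-one (no
# information loss), then the gravity-side evolution loses no information either.

dual_of = {"star": "A", "black_hole": "B", "radiation": "C"}
gravity_of = {v: k for k, v in dual_of.items()}   # the dictionary is invertible

def dual_evolve(state):
    """One-to-one evolution on the non-gravitational side."""
    step = {"A": "B", "B": "C", "C": "A"}
    return step[state]

def gravity_evolve(state):
    """Evolve a gravity-side state by translating, evolving, translating back."""
    return gravity_of[dual_evolve(dual_of[state])]

# Distinct gravity states stay distinct, so the information is still there:
evolved = {s: gravity_evolve(s) for s in dual_of}
assert len(set(evolved.values())) == len(evolved)
assert gravity_evolve("black_hole") == "radiation"
```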

  • Absalom

    “Not an airtight argument…..”

    Oh come. Neglecting tiny details like the sign of the cosmological constant, the timelike character of the AdS conformal infinity, the non-compactness of AdS spatial sections, the existence of a timelike Killing vector, trivial stuff like that, our Universe is EXACTLY THE SAME as AdS. So of course unitarity is preserved in black hole evaporation!

    Let me try again. Some black holes evaporate in a unitary way, therefore all black holes…..no wait, I will get this somehow……

  • http://backreaction.blogspot.com/ B

    Hi Sean,

    Thanks for the link. I can’t help noticing you’ve made a leap there from determinism to unitarity. Evolution can be deterministic but that doesn’t imply it’s also unitary. Best,

    B.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Hi B– That’s true, I was being sloppy. But unitarity can be thought of as “determinism in both directions in time.” Unless you are thinking of something more subtle?

    Absalom, you seem to be missing the point quite a bit. It was not “all black holes are just like ones in AdS_5 x S^5”. It’s that all of the objections to unitary evolution apply there as well as they do anywhere else, yet we know that they are somehow avoided. It’s perfectly reasonable to think that they can therefore be avoided elsewhere, although it’s clearly not an airtight argument.

  • Ken Muldrew

    “This is the Clockwork Universe, and it is far from an obvious idea. Pre-Newton, in fact, it would have seemed crazy. ”

    Hmmm…several people actually built clockworks that simulated the universe several centuries before Newton was born (de Dondi, Wallingford, et al.). Surely a few people saw these great machines and thought, “…maybe not so crazy after all?”

  • Sam Gralla

    Haha, so presumptuous… as if the “black hole war” is over. Normally experiment decides who is right, not famous people =). That said, it’s probably a fun read for a layman. I just hope they don’t come out of it thinking that in science what is right is determined by what Stephen Hawking thinks.

  • http://monstrousgaugetheory.googlepages.com/home mark a. thomas

    Something that has really bothered me is applying the unitarity argument to black body radiation modes. The example is this. If you have a photon gas in a black body cavity and you point a specific frequency laser into the cavity (specifically the photon gas bulk), then if you pulse the laser encoding a message (i.e. Morse code) the photons of the laser pulse will eventually be included in the modes of the black body distribution. The difficulty of retrieving the message from the black body emission seems formidable. Laser photons will either interact with the walls of the cavity or be reflected multiple times before entering the black body distribution. Thus any pulse becomes indiscernible and there would not necessarily be any temperature flux with a pattern (the message). Some laser photons would even be promoted to a cascade of different frequency photons from interaction with the matter in the cavity walls, thus frequency may be changed. A solution to this would be the Holographic Principle, whereby the laser pulse upon entering the photon gas bulk would be a smeared state (entering a non-classical state) like the rest of the bulk (photons will not collide with each other). Any information in the bulk may write information on the boundary (cavity wall) and possibly enable the information to remain coherent. I understand that black body radiation is quantum unitary and that black holes are quantum unitary, but still how is the information that is emitted encoded?

  • Brett

    I could never figure out why Hawking’s idea that information was destroyed was ever taken seriously. The unwarranted assumptions in his various arguments were obvious to me when I was in high school! To wit, if you dump some information into a hot thermal system, it rapidly becomes intertwined with everything else going on in that system. You can’t get the information out again without measuring ALL the subsequent radiation from the system. The same should be true of the black hole; you won’t be able to recover the information unless you watch the black hole radiate until it evaporates. The early stages of radiation may be semiclassical, but the final evaporation is not. Hence there is no reason to believe that the information that you put in initially is not imprinted on the final burst of radiation from the evaporating black hole. (And, lo and behold, that was exactly what was found to happen in AdS/CFT! Or at least, that’s one interpretation of the result; some people disagree that that’s what’s happening.)

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Brett, I don’t think that’s right. People certainly understood that you need to capture all of the radiation to restore the purity of the final state. But there is a simple question of numbers: There is a huge amount of radiation that comes out when the black hole is large and purportedly semi-classical, while only a very tiny amount that comes out when it is small and close to the Planck scale. When you check things quantitatively, there is no way for the late radiation to restore the purity of the entire state. Somehow, information has to be encoded in the early stages as well.

  • http://www.geocities.com/aletawcox/ Sam Cox

    From Sean’s review in the Wall Street Journal:

    “And what was the outcome of the black-hole war? A Susskind victory, it would appear. It seems that information is not lost, even when black holes evaporate. In 1997, a young theorist named Juan Maldacena showed how, in certain cases, questions in quantum gravity can be “translated” into equivalent questions in a different-looking theory, one that doesn’t involve gravity at all — a theory, moreover, in which it is perfectly clear that information is never lost. So we don’t know exactly how information escapes when a black hole evaporates. But we can start with a black hole, translate it into the new theory (where we do know how to keep track of information), let the black hole “evaporate” and translate it back. A bit indirect, but the logic seems solid.”

    “a different looking theory, one that doesn’t involve gravity at all”….sound esoteric? Far from it, for in GR gravity, as “real” as it is, is a fictitious force, created by the way the momentum of the universe is observed from coordinates within a manifold.

    Sean, that was a very thoughtful- and informative- review!

  • Sili

    Very very tangentially related but … well, this is the blogosphere …

    There’s one thing – well, one thing in particular, I guess – that bothers me about the ‘many worlds’ scenario:

    Where does all the mass and energy come from?

    It probably just goes to show how little I understand any of it, but if … the universe/reality is a four dimensional manifold with some ‘stuff’ in it, how can the ‘stuff’ continually multiply to keep all these ‘many worlds’ populated?

    I’m sure I just have much too naïve a picture of the problem, but I’d really appreciate some pointers towards comprehension.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Sili– Different branches of the wave function don’t actually cost extra energy. Forget about many worlds, just think of ordinary quantum mechanics. If an electron is in a superposition of two different positions, the quantum state doesn’t have twice the mass as it would if the wave function were localized in one position. The mass is just that of one electron, but it’s in a superposition of two different possible positions. Likewise with the rest of the universe.

  • John Merryman

    Sili,

    You’re not the only one that’s confused. Obviously quantum information isn’t like what goes through our minds, but presumably it’s what our brains are made of.

    The energy is conserved, but I don’t see how every bit of information ever recorded by that energy is retained. Yes, we can assume one set of events leads up to this moment and one set of events will follow it, but an objective perspective is a contradiction. As Stephen Wolfram put it, ‘It would take a computer the size of the universe to compute the universe.’

  • John Merryman

    Sean,

    If the universe is one great superposition, then does the concept of “information,” i.e. distinction, even apply?

  • Sili

    So … the MW sorta corresponds to the electron pre-observation? Would it be right to say that the MW ‘universe’ just is? Never actually observed in a manner that forces the energy into one single state?

    errrr … that can’t be right … or perhaps it’s just my struggling brain. I think it’s high time I reread some QM.

    Thank you.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    That’s exactly right. The MWI simply says that the wavefunction is always there, evolving peacefully according to the Schrodinger equation, without any collapses.

  • http://backreaction.blogspot.com/ B

    Hi Sean:

    But unitarity can be thought of as “determinism in both directions in time.” Unless you are thinking of something more subtle?

    I don’t mean anything very subtle. Yes, I meant determinism ‘in both directions in time’. But what I was trying to say is that an evolution of a wave-function can be deterministic without being unitary. Unitarity doesn’t only mean there is an operator H that works forwards and backwards but that the evolution you get from it is, well, unitary. Just add a damping term to the exponent that spoils the preservation of the norm. It’s still a deterministic one-to-one map but not unitary. Best,

    B.
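
B’s damping example is easy to check numerically (a toy sketch; the Hamiltonian and damping rate below are made up):

```python
import numpy as np

# Evolution by exp(-iHt - gamma*t) is still a deterministic one-to-one map,
# but it shrinks the norm, so it is not unitary.

H = np.array([[1.0, 0.3], [0.3, -0.5]])   # Hermitian toy Hamiltonian
gamma = 0.2                                # damping term in the exponent
t = 1.0
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
K = np.exp(-gamma * t) * U                 # damped (non-unitary) evolution

psi = np.array([1.0, 0.0], dtype=complex)
psi_later = K @ psi

# Deterministic and invertible: undo the damping, then apply U†...
K_inv = np.exp(gamma * t) * U.conj().T
assert np.allclose(K_inv @ psi_later, psi)
# ...but the norm is not preserved, so the map is not unitary:
assert np.linalg.norm(psi_later) < 1.0
```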

  • theoreticalminimum

    Sean said:

    But unitarity can be thought of as “determinism in both directions in time.” *

    Unitarity ensures that probabilities add up to one. This definition itself implies determinism in “both directions of time”, right? But then one wonders whether it makes any sense in talking about causality in the “other direction of time”…

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    In any sensible theory of quantum dynamics, probabilities must add up to one. There is no theory of the world in which the probability of all mutually exclusive events adds up to anything other than one. There might be a mathematical formalism which leads you to such a conclusion — e.g. perturbation theory for massive vector bosons without a Higgs mechanism or other new physics — but that’s a sign that your theory is incomplete or ill-defined, not that probabilities really don’t add up to one.

    In particular, multiplying the wave function by a number less than one is not a violation of unitarity; you would simply renormalize so that the norm was one. Quantum states are not really vectors in a Hilbert space, they are equivalence classes of such vectors up to scaling by non-zero complex numbers.

    On the other hand, “unitarity” (i.e., evolution of the state vector is described by the action of a unitary operator) implies more than conservation of probability, it also implies reversibility (because unitary operators have inverses). If a well-defined theory is truly non-unitary, it’s not because probabilities don’t add up to one, it’s because the evolution isn’t invertible. For example, if many different states at time t1 evolved into some particular single state at time t2. That would be a genuinely non-unitary evolution (and is closely analogous to what happens when wave functions collapse).
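
That last case is easy to illustrate with a toy sketch of “collapse” onto a basis state (the particular states below are made up):

```python
import numpy as np

# A deterministic but many-to-one map: project onto |0> and renormalize.
# Many distinct initial states end up in the same final state, so the
# evolution is not invertible -- genuinely non-unitary.

def collapse_to_zero(psi):
    """Project onto the |0> basis state and renormalize."""
    P = np.array([[1.0, 0.0], [0.0, 0.0]])
    out = P @ psi
    return out / np.linalg.norm(out)

psi1 = np.array([0.6, 0.8], dtype=complex)
psi2 = np.array([0.8, 0.6], dtype=complex)

# Two different initial states...
assert not np.allclose(psi1, psi2)
# ...end up in the same final state, so the past cannot be reconstructed:
assert np.allclose(collapse_to_zero(psi1), collapse_to_zero(psi2))
```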

  • Count Iblis

    Is a universe in which information is not conserved possible at all (in the sense that internal observers would notice this)? Sean gave the example of Aristotelian mechanics, but I don’t find that very convincing.

    If you observe that some object has come to rest and conclude that “all information about the initial state has been lost”, the very fact that you know this means that not all the information has been lost. Some of the information about the previous state has been stored in your memory, otherwise you wouldn’t be aware of this fact.

    If information does not get lost when you are observing, then it is hard to see how you would arrive at fundamental laws of physics in which information really does get lost.

    It would be more natural to assume that the information that gets lost when you are not observing actually doesn’t get lost but ends up in some hard to detect degrees of freedom of the universe.

    This is a bit similar to the MWI. It is then perhaps not surprising that an attempt by ‘t Hooft to find a local deterministic theory underlying quantum mechanics led him to models in which information is not conserved. :)

  • http://www.geocities.com/aletawcox/ Sam Cox

    Sean’s post 20 is really interesting and is loaded with all kinds of subtle ramifications.

    A unitary universe is also a static (deterministic) universe, but the very nature of the GR spherical geometry (with the irrational “pi”) implies a certain mathematical irrationality in cosmology which results in both the existence of time and an irreversible time process. A universe which is everywhere, all the time constrains a powerful determinism, but it also points (“all the time”) to a universe which is eternally existing.

    I don’t believe it is at all conceptually trivial that we observe both the existence of time and an irreversible time process at our macroscopic coordinates. However, just as in engineering problems, the sum of the moments in the universal structure must total 0 to assure stability. What causes our existence is a foundational momentum, rigid structural constraints, an overall conservation of matter, energy- and a multi-faceted entropy. Perpetual order is foundational to existence, but so is change…ordered- if gradual- change.

    Since the SR/GR/QM universe exists only as it is observed, it can be seen that any increase in overall thermal entropy can thus, over eternity, be traded for a slight decrease in informational entropy (increase in complexity), conserving not only matter and energy but the overall total entropy (thermal, informational, and, by implication, observational) in the system. The universe (contrary to our strong first impression) is gradually and irreversibly becoming more complex in its overall structure, yet it never exactly repeats itself when observed on 4D event horizon surfaces…in a very important sense, it is not perfectly unitary.

    The whole is greater than the sum of its parts… the sum of the terms is, in an important way, greater than 1. The 33RPM record is a lower dimensional projection which includes periodicity, motion and change- of and within an ordered structure. In the same way, a human being is more, much, much more, than a certain total mass of oxygen, hydrogen, carbon, nitrogen..etc.

    One final thought. The mathematical irrationality of the system applies within the manifold from which the universe is observed at sets of coordinates… pi is not necessarily an inherent quality of the Planck Realm. Hence, the non-unitary universe we feel we observe may in fact, ultimately and cosmologically, be truly unitary.

  • Brett

    Sean, it is certainly not uncontroversial when exactly the information passes out of the black hole, but there is no quantitative reason why it can’t all come out at the end. The basic reason is that there are enough correlations between the final burst and the previous radiation to contain all the information. So the final burst on its own doesn’t contain the information, the entire radiation history does; but it can’t be decoded until the evaporation is observed.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    No, that’s just not true; there are not enough correlations in the final radiation to restore the purity of the final state. (Unless you change the rules of Hawking radiation so that the final photons come out extremely slowly and with very low energies, so that you essentially have a stable remnant.) Think of it this way: if a small number of particles are correlated with a large number of particles, that larger number of particles must clearly have been correlated with each other. See this paper by John Preskill.

  • http://www.gregegan.net/ Greg Egan

    Sean wrote (#20):

    If a well-defined theory is truly non-unitary, it’s not because probabilities don’t add up to one, it’s because the evolution isn’t invertible. For example, if many different states at time t1 evolved into some particular single state at time t2.

    Interestingly, though, any linear map that preserves an inner product will be one-to-one, even if it doesn’t have an inverse and hence is non-unitary. (That’s easy to prove; just define a norm from the inner product, then that norm will be preserved. And since ||Tx-Ty||=||x-y||, if Tx=Ty then x=y.)

    For example, the right-shift map on a countably infinite Hilbert space, R((x1,x2,x3,…))=(0,x1,x2,x3,…) preserves the inner product. This isn’t unitary because the left shift L((x1,x2,x3,…))=(x2,x3,…) is only a “left inverse”, LR=I, but RL is not equal to I.

    So there’d definitely be something weird (and non-time-reversible) about a physical system that evolved under R … but different initial states still evolve to different final states.

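A finite-dimensional caricature of the shift maps makes this concrete (a sketch: everything is truncated to n components, so the shift identities hold only on vectors whose last slot is zero):

```python
import numpy as np

# Right shift R preserves inner products (hence norms) and is one-to-one,
# but it is not onto, so it has no two-sided inverse and is not unitary.

n = 5
R = np.zeros((n, n))
for i in range(n - 1):
    R[i + 1, i] = 1.0          # (x1, x2, ...) -> (0, x1, x2, ...)
L = R.T                        # left shift: drops the first component

x = np.array([3.0, 1.0, 4.0, 1.0, 0.0])   # last slot 0 so truncation is exact
y = np.array([2.0, 7.0, 1.0, 8.0, 0.0])

# Inner products (hence norms) are preserved:
assert np.isclose((R @ x) @ (R @ y), x @ y)
# L is only a left inverse: LR = I on these vectors, but RL is not the identity.
assert np.allclose(L @ (R @ x), x)
e1 = np.eye(n)[0]
assert not np.allclose(R @ (L @ e1), e1)   # RL kills the first component
```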
  • Dave K

    Nice review, which I read earlier today in the dead-tree Wall Street Journal. All this reminds me of Steve Martin’s bit “My Uncle’s Metaphysics”, from his book Cruel Shoes. Here it is in its entirety:

    My Uncle’s Metaphysics

    My Uncle was the one who developed and expounded a system of cosmology so unique and unexpected, that it deserves to be written down; his papers were destroyed by fire. I am reconstructing his philosophy from memory as he told it to me on my birthdays and other such holidays. We would be sipping lemonade, perhaps, and he would begin to rock and peer at the sky on those cool afternoons, and with a slow drawl, begin to explain in the cleanest logic why the sky existed, why the universe was the total of all information yet unknown, and how each star in every galaxy could be plotted and predicted by a three dimensional number system. Then he would explain to me his numerical device called random mathematics, where any equation could be unbalanced for any reason that existed. With it, he predicted to the minute the gestation period of the white giraffe.

    As the afternoon rolled on, he fluently spoke philosophy and lost all inhibitions of language, explaining complex ideas with gestures, it seemed. He expressed how sorry he was I had ever heard the word God, and then said something about M39. (Later I discovered that this was a method of numbering the galaxies.)

  • Lawrence B. Crowell

    The loss of information in black holes is some form of encryption of quantum information. It is likely some form of entanglement phase of a vacuum state near or across the horizon, which is close to the null congruence of the vacuum which comes in from I^-. The entanglement of states inside and outside the black hole becomes complicated by its coupling with the vacuum later on (equivalently further out). I think the process is similar to chaos theory. The underlying dynamics are completely deterministic, but we lack the data processing or accumulating ability to make the prediction. Similarly, quantum information is perfectly preserved, but it becomes “mixed up” or encrypted in a highly complex form.

    An infalling observer will observe Hawking radiation. However, once the observer enters the region where the radiation is being produced ~ 1/(8πM), the BB radiation distribution becomes modified as the long wavelength stuff disappears. Eventually, very close to the horizon, the observer sees no Hawking radiation and detects essentially a pure vacuum. This transition reflects a scaling where coherent states or entanglements further out are randomized or “recoded” so that one can’t make unitary predictions.

    It has to work this way. If black holes really destroy information it would seem that unification with particle physics may be impossible.

    Lawrence B. Crowell

  • http://backreaction.blogspot.com/ B

    If a well-defined theory is truly non-unitary, it’s not because probabilities don’t add up to one, it’s because the evolution isn’t invertible.

    Even though I believe that the problem with information loss lies in the time evolution not being reversible (indicating that the problem is the singularity and not the horizon) which seems to agree with your point of view, I can’t but wonder if there’s a proof for that claim that you have made. As I said in my first comment, it seems to me like a leap in argument where you’ve gone from determinism to unitarity. Best,

    B.

  • http://backreaction.blogspot.com/ B

    Sean: It occurred to me what I wrote in the previous comment is utterly unclear, sorry. What I was trying to say: You start up talking about determinism, then go on to talk about information being conserved in qm which I interpret as talking about unitarity. My question is, if you’d have a well-defined deterministic evolution of the wave-function in bh collapse, how would you know it’s also unitary? I think you just don’t know, but I’d be more than happy if I was wrong.

  • Count Iblis

    B, wouldn’t any non-unitary effects in black hole evaporation show up in processes not involving black holes via the effects of virtual black holes? So, perhaps this could be observed as a faster than expected decoherence rate…

  • Brett

    Sean wrote:

    Unless you change the rules of Hawking radiation so that the final photons come out extremely slowly and with very low energies, so that you essentially have a stable remnant.

    Which seems to be exactly what happens, except that I wouldn’t characterize this as changing any rules, because it occurs in a regime explicitly outside the validity of Hawking’s calculations. And that’s the whole point: if you assume that the region where quantum corrections are important doesn’t solve the problem, then quantum corrections don’t appear to solve the problem.

  • Moshe

    Brett, you seem to imply that we know what happens in the last stages of evaporation of (small) asymptotically AdS black holes, is that true? I’d be somewhat surprised, even more so if remnants were involved.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    B, I don’t have a good idea of what the experimental consequences would be, I suspect that would depend on precisely what kind of well-defined non-unitary deterministic evolution you were talking about. I don’t know of any proposals along those lines, but it would presumably depend on the model.

  • Jason Dick

    So … the MW sorta corresponds to the electron pre-observation? Would it be right to say that the MW ‘universe’ just is? Never actually observed in a manner that forces the energy into one single state?

    errrr … that can’t be right … or perhaps it’s just my struggling brain. I think it’s high time I reread some QM.

    As Sean mentioned, in MWI, there is no collapse. There’s just the appearance of collapse because the different components of the wave function lose the ability to affect one another significantly.

    To attempt to illustrate this, consider a system that is in a superposition of two states, A and B. If we then make a measurement of the state, we will find that we only observe one state: either A or B. Why is this?

    In the MWI, the perspective is this: what is the consequence of observing? In order to observe, our wave function must necessarily interact with the wave function we are attempting to observe. Because our wave function is this big, complex, messy beast, this interaction effectively prevents the combined wave function from interfering. That is to say, once the measurement has been performed, outcome “A” and outcome “B” can no longer communicate with one another. So, in the MWI, it’s not that the wave function collapses, it’s that we are part of that same quantum mechanical system, and the system loses the ability to interfere once we try to measure it.

    This has the direct consequence of the wave function that makes up us splitting when we observe such a situation: our wave function becomes a superposition of two states, but because those two states can’t interfere, they can’t obtain information about one another, and thus we only ever observe ourselves as existing in one of the two states, while another self observes itself as existing in the other.
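A toy numerical illustration of this loss of interference (my own sketch, not part of the comment): two branch amplitudes that cancel when coherent simply add as probabilities once a measurement record distinguishes them.

```python
import numpy as np

# Two amplitudes for the same final outcome, reached via "branch A" and "branch B".
a_A = 0.5
a_B = 0.5 * np.exp(1j * np.pi)  # relative phase of pi between the branches

# Coherent case: amplitudes add first, then we square, so the branches interfere.
p_coherent = abs(a_A + a_B) ** 2
print(p_coherent)  # ~0.0: the two branches cancel

# Once a measurement record distinguishes the branches, they can no longer
# interfere: probabilities, not amplitudes, add.
p_decohered = abs(a_A) ** 2 + abs(a_B) ** 2
print(p_decohered)  # 0.5: the cancellation is gone
```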

  • http://backreaction.blogspot.com/ B

    Hi Sean,
I didn’t have anything specific in mind; it was more a general question of reasoning. See, I would say, if there is no singularity then the evolution must be deterministic. The whole point of there being a singularity is that different initial states get crunched into the same infinity. If you now go and say, well, if it’s deterministic then it also has to be unitary, you conclude that removing the singularity removes the information loss paradox. Just that, as I was trying to say, as far as I can see that conclusion doesn’t hold, because you don’t get unitarity for free. Best,

    B.

  • http://www.geocities.com/aletawcox/ Sam Cox

It might be good to try to understand this process by which black holes store information by conceptually looking at the situation in reverse. Rather than seeing everything collapsing into a black hole, let’s consider the process of (cosmic) black hole formation from the standpoint of invariant observing frames in the manifold…the coordinates of information and complexity stored there.

    Everything in the universe- all information and complexity- stays in the same “place”. It is time and space which periodically cease to exist, and in that instant, as they do (cease to exist), the universe is only information…an ultimately low entropy object. Lawrence’s comments on this thread are very interesting…he refers to the entanglement of information, vacuums and related phenomena. If we view the universe as I just described it, we can appreciate both the significance of entanglement- and the possible reality of cosmic unitarity, which Sean has been discussing.

    There are some interesting mathematical and geometric relationships here which strongly imply, when we compare them with the observed radius of the universe, for example, that understanding the universe in the above way is better than trying to understand how everything can be compressed into a black hole, yet continue to exist as information.

The big bang is confirmed by powerful field evidence. We even know how long ago the big bang occurred, and the present radius of the universe…as observed in the astronomical antipode, from our frame. However, the actual nature of the big bang, and the extent of the periodicity involved in the cosmic evolution, I would think, will eventually be determined by the investigation of the sub-microscopic universe, from the quark level on downward…matter/antimatter oscillations…things of that nature.

    A thought provoking and interesting thread!

  • Coldcall

    The intro to this book review caught my attention – never mind the book itself.

    Are you seriously suggesting a viable hidden variables theory is going to appear out of thin air and rescue the Determinist cause? Did i read that correctly?

  • CEO of Hahvahd University

    Dear Mr. Sam Cox,

    Your insights are very interesting, and we here at Hahvahd have read your paper “Seven Dimensional (and up) Einsteinian Hyperspherical Universe” (which you linked to in your comment above), and we are prepared to offer you a faculty position as an Assistant Professor, with very generous benefits. Here at Hahvahd, we pride ourselves in recruiting the very best of the best, and your works have shown to us that you fit Hahvahd’s needs very nicely.

We especially like the non-mathematical nature of your work, as working through the precision of endless equations has gotten very tiresome over the years. Your future students will very much appreciate your verbose and non-mathematical research style.

    We hope this research style will lead to the next big revolution in physics, and we expect that you’ll be the next Albert Einstein.

    Welcome aboard, Sam Cox, and we’ll see you in September 2008!

    Sincerely,

    The CEO of Hahvahd University

    P.S. Go Red Sox!

  • jeff

    our wave function becomes a superposition of two states, but because those two states can’t interfere, they can’t obtain information about one another, and thus we only ever observe ourselves as existing in one of the two states, while another self observes itself as existing in the other.

The biggest question in MWI, though, is why do you, as a macroscopic conscious object, find yourself in the particular state or branch that you’re in? It will do no good to say that there are many other instances of “you” in other branches. “You” are in this branch; that is what is manifestly real and observed, and therefore other “you”s are not equivalent to this “you”, simply by virtue of what is being observed. How does MWI deal with the conscious identity problem?

  • jeff

    (sorry, I messed up the reply – the first paragraph should be quoted)

  • Lawrence B. Crowell

    Sam Cox: … Lawrence’s comments on this thread are very interesting…he refers to the entanglement of information, vacuums and related phenomena.

I like the analogue with chaos theory in a way. The event horizon carries the early vacuum with it from before the black hole was formed. Regions removed from it pertain to a later vacuum. So a vacuum mode very close to the event horizon may have an entanglement (or an approximate one) with a mode inside the black hole, just on the other side of the event horizon. Yet if that mode propagates outward it pertains to later vacuum states which are not unitarily equivalent to the vacuum right near the horizon. The entanglement decoheres and the vacuum mode becomes “vacuum plus random radiation.” A chaos analogue might be the laminar flow of a gas from a series of small holes in a wall. The flow remains laminar up to a point, where the flow then begins to break up into vortices and turbulent flow. At this point predictability begins to fail due to the nonlinearities in the flow. Yet we know that on a particle level the dynamics are perfectly deterministic. In an analogous manner the gravity field is nonlinear, and it propagates vacuum modes into regions where entanglement phases become scrambled up. Again, on a quantum bit level things are perfectly deterministic, but on a coarse grained level things appear random and non-deterministic.

In Wolfram’s book “A New Kind of Science” he writes about the Hadamard matrix. These have a number of interesting properties: they are recursive, such that H_{2^n} is constructed from H_{2^{n-1}}, and they play a role in chaos and the “emergent complexity” Wolfram writes about. Hadamard matrices form the basis of Reed-Muller error correction codes. They are also the Weyl group for some sporadic group systems, such as the Leech lattice.
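The recursion mentioned above is Sylvester’s construction; a minimal sketch (my own illustration), with the defining orthogonality property of Hadamard matrices checked numerically:

```python
import numpy as np

def hadamard(n):
    """Return the 2^n x 2^n Sylvester-Hadamard matrix, built recursively
    from the 2^{n-1} case by the block pattern [[H, H], [H, -H]]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H8 = hadamard(3)
# Defining property: the rows are mutually orthogonal, so H H^T = N * I,
# where N is the matrix size.
assert np.array_equal(H8 @ H8.T, 8 * np.eye(8, dtype=int))
```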

    It is not possible for me to go into great detail on this, but these sphere packing models, such as the Leech lattice with three E_8’s in a modular system, may be error correction codes for how entangled modes, or vacua, ultimately preserve all of the quantum bits. This would be the case even though on a coarse grained level things appear completely randomized.

This appears to be a trend in physics. The underlying dynamics, or how information is shuffled around, is perfectly deterministic, but in collective or complex systems there is a randomizing process which sets in. This might be due to nonlinearity, such as with gravity, chaos or hydrodynamics, or due to a large number of elements or atoms, such as thermodynamics. The underlying structure or fine grained physics is completely deterministic and information preserving, but systems at large exhibit complexity or chaos which makes this determinism impossible to ascertain without the appropriate fine grain data and the “error correction code.”

    Lawrence B. Crowell

  • http://www.geocities.com/aletawcox/ Sam Cox

    Lawrence Crowell said,

“This appears to be a trend in physics. The underlying dynamics, or how information is shuffled around, is perfectly deterministic, but in collective or complex systems there is a randomizing process which sets in. This might be due to nonlinearity, such as with gravity, chaos or hydrodynamics, or due to a large number of elements or atoms, such as thermodynamics. The underlying structure or fine grained physics is completely deterministic and information preserving, but systems at large exhibit complexity or chaos which makes this determinism impossible to ascertain without the appropriate fine grain data and the ‘error correction code.’”

    It is interesting that even in the macroscopic world we see a shadow of determinism and predictability. We observe periodicity, stored information and complexity. We note time direction and process. The morphology of conception and birth is very different from death and decomposition. Yet, even within the biological world, at lower levels of scale, these “obvious” morphological differences become less obvious. The Paramecium is continuously living and dividing, unless the whole population perishes. Trees reproduce vegetatively as well as sexually.

    If we accept that fine grained physics is completely deterministic, it would seem that we would be also inclined to accept the idea that cosmologically, that is the way things are. Our way of experiencing the universe, observing and measuring it at macroscopic scales, while real to us, is but a product of relativistic and quantum effects observed in the manifold of 3+1 dimensions…scale and time.

We were speaking to this idea on another thread. Relativistic and quantum effects are real. Accelerations and their resulting relativistic effects on 4D particulate event horizon surfaces, when remotely observed, are so real we can build technology on them…they are us and our world. Quantum mechanics is the basis for a whole new technology. Also, we know that all of these relativistic and quantum relationships can be mathematically described and compared with the results of experiment.

    Your analogies are excellent, and you make it clear that a complete understanding of the process requires “appropriate fine grain data and the error correction code”. For some of the same reasons you have discussed, I’m inclined to see the universe as foundationally and cosmologically deterministic, yet observed (and only observed) differently. It seems to me there is a dichotomy between the way the universe actually exists, and the way we observe it…that there may be little or no actual randomness at all, anywhere in the system, except as we observe, define and experience it at our coordinates.

I honestly, however, do not believe that our world of choice and free will is a complete illusion. After all, it IS our world. We observe motion and change, and a unique reality which has a very advanced holographic quality about it. In some way, what we cumulatively do and experience influences the overall direction of informational entropy. While the universe is very deterministic, and is vast, and though it exists permanently, yet, being finite in mass (momentum), the cosmos rides a fine line between existence and non-existence. “Life” functions as the force which compensates for a slight increase in overall thermal entropy by decreasing informational entropy (increasing complexity).

    On the GR side, I picked up the following excerpt from Wikipedia which also applies to this discussion:

    “Each solution of Einstein’s equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. By this token, Einstein’s theory appears to be different from most other physical theories, which specify evolution equations for physical systems: if the system is in a given state at some given moment, the laws of physics allow extrapolation into the past or future. Further differences between Einsteinian gravity and other fields are that the former is self-interacting (that is, non-linear even in the absence of other fields), and that it has no fixed background structure—the stage itself evolves as the cosmic drama is played out.[145]”…

    Thanks for your remarks! Sam Cox

    PS Jeff, I think a partial answer to your question may lie in the universal geometry, and the relationship between the way the cosmos exists and the way it is observed, time process and irreversibility in the macroscopic etc. An extra 3-space and fixed coordinates of observation might explain why we only observe one of ourselves at a time…Your comments are very thought provoking.

  • John Merryman

    Lawrence,

“The underlying structure or fine grained physics is completely deterministic and information preserving, but systems at large exhibit complexity or chaos which makes this determinism impossible to ascertain without the appropriate fine grain data and the ‘error correction code.’”

Isn’t that based on the assumption that there is a quantum fine grain? What if it isn’t ultimately digital, but analog? That process is cause and structure is effect, rather than the other way around. Yes, we can only measure structure at the microscopic level, but why is that proof that function follows form, when form follows function at every other level? Strings may just be vortices.

  • ree ree

    Dear Sam Cox, Lawrence Crowell, Jeff, John Merryman, and Qubit:

    Go study some physics and shut up.

  • Pingback: Noticias GL » The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics()

  • John Merryman

    reeree,

    But I might get more confused!!!!!

    I have enough trouble trying to make sense of this world and universe.

  • ree ree

    Listen, you clowns, cut it out. Go stalk Stephen Hawking for his autograph.

  • John Merryman

    reeree,

    Even though it is only out of sheer ignorance on your part, I do thank you for putting me on the same list as Lawrence.

  • Lawrence B. Crowell

    Well it appears that reeree has assumed the role of Bill O’Reilly here with “Just shut up.”

    It might be of interest to note that what I have written with respect to the decoherence of the vacuum state with respect to the radial distance from the event horizon is remarkably similar to what I found at:

    arXiv:quant-ph/0302179v1

    which discusses a similar issue with respect to Unruh radiation.

It would seem to me that this blog can serve as a way of discussing possibilities or new ideas.

    Lawrence B. Crowell

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

I’m going to step in and agree with ree ree’s angle, even if his tone is rude. This is not the place for discussing “possibilities or new ideas,” when that is code for stuff that is non-mainstream/crackpotty/under-informed. We’ve been very lax about letting it go, but we’re going to start cracking down a lot more, so don’t be surprised if comments start mysteriously disappearing or commenters just start getting banned. We don’t have the time or patience for long back-and-forths with each alternative point of view. Suffice it to say, we are the Voice of the Establishment, working hard to suppress alternative viewpoints, because we are afraid for our cushy jobs and positions in society.

  • ree ree

    Oh dear! I’m very sorry, Dr. Crowell! Please accept my apologies, sir. I had no idea you had published technical physics books. In fact, I think I will purchase your book “Can Star Systems Be Explored?”, as it sounds very interesting. It’s a topic I’ve always been interested in, actually.

    It was certainly a mistake to lump you in with those clowns — Merryman, Qubit, Jeff, and Sam Cox. As for you four, I repeat: go study some physics and shut up. Especially if your “works” have no equations in them. The language of physics is mathematics, not the fantasies of crackpots.

  • FeralPhantom

    Sean: “Suffice it to say, we are the Voice of the Establishment, working hard to suppress alternative viewpoints, because we are afraid for our cushy jobs and positions in society.”

    You forgot to capitalize ‘we’. 😉

  • Lawrence B. Crowell

    Energy and information are related in the first law of thermodynamics plus the Shannon-Khinchin theorem or formula.
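As a rough numerical illustration of the formula side of that statement (the Landauer figure below is my own added example, not something the comment claims):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum p_i log2 p_i, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one bit of information; a biased one carries less.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))  # ~0.469

# Landauer's bound, the usual bridge between information and the first law:
# erasing one bit dissipates at least k_B * T * ln(2) of energy.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K
print(k_B * T * math.log(2))  # ~2.87e-21 J per erased bit
```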

    L. C.

  • jeff

“ree ree”: All I did was point out a well-known philosophical question about MWI, i.e. why do you find yourself in a particular branch or world rather than another. If that question makes me a clown, then so be it. And if you can find a mathematical explanation for it, be my guest. Actually, it was probably off-topic, but then the post I was responding to would also be off-topic.

    If philosophical implications of physical theories are off-limits on this blog – well, fair enough. However, I do see a fair amount of prose here that is non-mathematical, and sometimes philosophical.

  • ree ree

    Jeff,

    If the MWI is correct, then “we” exist in all possible branches. Why am “I” aware or conscious of only this particular branch? I have absolutely no idea. Thinking about this is like thinking about what happens to me after I die, assuming there is no afterlife. Can you imagine nothingness? Anyway, it is my opinion that the MWI is crap, just like time travel into the past. The best and simplest explanation I can think of as to why I find myself conscious of and existing in this branch rather than the zillions of other branches is that there are no other branches but this one, because MWI is wrong. Quantum mechanics is weird, but I don’t think the “solution” to this weirdness is to posit the existence of an infinity of other universes.

  • Lawrence B. Crowell

MWI is a quantum interpretation, which is some sort of auxiliary idea meant to “explain” some aspects of quantum strangeness. There are others of these ideas, such as Bohm’s subquantal or inner classical approach. I find the decoherence approach of Hartle, Gell-Mann, Zurek and others to be more realistic, for it just discusses entanglement loss in coupling a system to a reservoir of states and in part avoids these inner-quantal ideas.

    I think it best to avoid getting too wrapped up into quantum interpretations. I don’t think these things largely buy you much. MWI has achieved a measure of popularity of late, but I wonder if before long that will fade.

MWI implies that we do walk along some path that is constantly being split off from other paths. One might ponder whether we live these other paths, and whether there is a sort of quantum immortality, where we do in fact consciously experience these paths (e.g. other lives). Of course, again, I doubt these ideas really contribute much to any real understanding of physics.

    Lawrence B. Crowell

  • http://www.pieter-kok.staff.shef.ac.uk Pieter Kok

LBC, an interpretation of quantum mechanics is not “some sort of auxiliary idea meant to ‘explain’ some aspects of quantum strangeness”; it is the translation of the mathematical framework of quantum theory into a coherent and consistent picture of the world it describes. As such, it is an integral part of quantum physics, just as the interpretation of Maxwell’s equations is an integral part of electrodynamics. Otherwise it is just maths.

    I have it on good authority that the many-worlds interpretation is the current front-runner when it comes to coherence and consistency in interpretations of quantum mechanics. Philosophers of science that I know (Oxford, Cambridge, Maryland) who were not particularly charmed by the MWI are now slowly coming around to it.

As to whether “these things buy you much”, you should be aware that David Deutsch and Peter Shor came to their insights in quantum computing because they adhere to the MWI. The “correct” interpretation of a physical theory matters!

  • Sad

    Doesn’t MWI imply the QTI? And isn’t that a very BAD thing, almost nightmarish? Not to mention almost impossible to believe?

    Isn’t that why David Lewis said something about “you should shake in your boots” if MWI is true?

    Thinking about quantum “stuff” already boggles my mind; the MWI just adds to my headache.

    Of course, my own personal feelings have nothing to do with whether something is true or not, but still……..

  • mathematician

    I think that MWI comes from laziness, and taking models too seriously.

  • collin237

    How do you consider “information” a scientific or physical quantity in the first place?

  • Count Iblis

Sad, I don’t think that the MWI implies QTI. I think I tried to explain that some time ago on this blog. Basically the idea is that in the MWI you have a static wavefunction, the branches of which contain the sectors in which you live. A time-evolved version of you is simply just another branch of that static wavefunction.

    The branching you experience as time goes by is just what seems to happen relative to you at some time, it is not the case that the entire wavefunction of the multiverse continues to split. So, all the possible versions of you just exist a priori in the multiverse, each with their subjective notion of time. There are only a finite number of possible versions of you.

So, the whole idea of QTI, that at each moment the entire wavefunction splits, and that if you are old and sick and most branches end in you being dead then you must always end up in the branch where you miraculously survive, is simply false. What happens in the MWI is simply that you are always one of the finite number of versions of you that exist with some a priori probability.

  • http://www.pieter-kok.staff.shef.ac.uk Pieter Kok

    collin237, think of it as a degree of freedom.

  • Lawrence B. Crowell

Quantum interpretations can play a role, so long as you are temperate. Bohm’s approach to QM has a pilot wave with analogues to the Navier-Stokes equation, which might allow one to examine certain types of problems. Also the particle or “beable” might be a good model tool for quantum chaos. Does this mean that I think there really is this inner classical reality to QM? Not on your life! MWI has become popular to a degree, and as pointed out has become a tool in modelling quantum information. As a tool it might be fine, but the problem is that anything beyond that seems to become quantum religion.

I might also put the Anthropic principle, or the strong AP, in the domain of quantum religion. I am not sure if there is a cosmological principle which requires the existence of conscious observers. It does appear very evident, however, that in the case of conscious observers here, our biggest function is to tear down this planet’s biological systems and to replace them with garbage and toxic wastes. We appear to be less of some cosmic mindscape and more some sort of planetary bio-dysfunction or terminator species.

    Quantum interpretations do have some utility, but I think they are in a loose sense an aspect of quantum complementarity, or that QM has multiple ways in which it can be examined or modelled. At best quantum interpretations are like cards in your hand — you play those which are needed at the right time.

    Lawrence B. Crowell

  • http://www.pieter-kok.staff.shef.ac.uk Pieter Kok

    Anyway… back to the book: I just finished it, and I thought it was great fun to read. His style is very engaging, with lots of interesting anecdotes. I also like it that he does not use the worn-out analogies to explain quantum mechanics, general relativity, etc., but comes up with new ones.

  • collin237

    collin237, think of it as a degree of freedom.

    Do you mean a number of degrees of freedom?

  • collin237

    It does appear very evident however that in the case of conscious observers here that our biggest function is to tear down this planet’s biological systems and to replace them with garbage and toxic wastes.

    If anything, this only affirms our role as “collapsers”. :)

  • Lawrence B. Crowell

    MWI appears convenient in information theory because a q-bit of the form

$latex
|\psi\rangle = \sin\theta\,|0\rangle + e^{i\phi}\cos\theta\,|1\rangle
$

will reduce to the 0 state in one world and the 1 state in the other. It is a convenient way of looking at things, in particular with teleportation, where a state must be communicated by a classical signal. Yet we could well consider a state reduction as an interaction with an environment. Say there is some detector “needle state” which gives

$latex
|\psi\rangle \rightarrow \sin\theta\,|0\rangle|-\rangle + e^{i\phi}\cos\theta\,|1\rangle|+\rangle,
$

    for the + and – states for a detector. The reduction of the state is when the off diagonal terms in the density matrix are attenuated away. The entanglement phase is lost to the environment — it still exists, but is scrambled up. The outcome is then reduced to a classical “collapse” in one world, rather than single results in two worlds. It is interesting that in both MWI and decoherence we are still left with a dualism — two possible results in this world, or single results in two possible worlds. Take your choice!
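The reduction described here can be checked numerically. A small sketch (my own, using the state and needle states from the comment): tracing the detector out of the density matrix leaves only the diagonal terms, i.e. the entanglement phase is no longer visible in the qubit alone.

```python
import numpy as np

# The qubit from the comment: |psi> = sin(t)|0> + e^{i phi} cos(t)|1>.
t, phi = 0.6, 1.1
ket0, ket1 = np.array([1.0, 0]), np.array([0, 1.0])
psi = np.sin(t) * ket0 + np.exp(1j * phi) * np.cos(t) * ket1

rho = np.outer(psi, psi.conj())
print(rho[0, 1])  # off-diagonal term sin(t)cos(t)e^{-i phi}: nonzero

# Couple to orthogonal detector "needle" states |-> and |+>:
# |psi> -> sin(t)|0>|-> + e^{i phi} cos(t)|1>|+>.
minus, plus = np.array([1.0, 0]), np.array([0, 1.0])
joint = np.sin(t) * np.kron(ket0, minus) + \
        np.exp(1j * phi) * np.cos(t) * np.kron(ket1, plus)

# Reduced density matrix of the qubit after tracing out the detector:
rho_red = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_red)  # diagonal: the off-diagonal (phase) terms are gone
```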

    MWI can well enough be used for certain applications. Some people appear to regard it with near religious fervor.

    One of my crazier questions is whether there is some sort of categorical system or what might be called a quantum “functor,” which can demonstrate an equivalency between quantum interpretations.

    Lawrence B. Crowell

  • Lawrence B. Crowell

    collin237: If anything, this only affirms our role as “collapsers”.

    Before anything — sorry for not normalizing the above states.

We might almost think of this as a dual principle. Maybe cosmic observers in some Karma sense come to know the universe in its final principles, and then die. Maybe it is a cosmic orgasm, but like a black widow male spider who dies after the process.

Our situation here is frankly very bad, and it is not just global warming. The oceans are in a rapid die-off, which in the last decade has raced forward. Imagine if astronauts on a spaceship started to tear up their craft in order to make entertainment systems.

    Lawrence B. Crowell

  • Lawrence B. Crowell

    Oops, I apologize for the apology. The states are normalized! The quantum functor calls forth: Bohm, Everett-DeWitt, Zurek, Deutsch, … all have the same quantum muse! But I have no idea how to show this! :-)

    L. C.

  • collin237

    The entanglement phase is lost to the environment — it still exists, but is scrambled up.

Not necessarily. In a quantum computer, one can assume that the states |0> and |1> in whatever device physically stores a q-bit are stable enough that it’s meaningful to speak of the phase e^{i phi} between them.

    However, the states |+> and |-> are not that simple. If there’s any way to define a phase between them, it would require that they both refer to an intact needle. In some interpretations, the unobserved state actually “dissolves”, rather than just “leaving us”. From that standpoint, the phase would actually drop out of the universal information.

  • Sad

    Thank you Count Iblis.

    I’m no scientist, and much of the time I have no idea what y’all are talking about on this blog, but that QTI stuff (insanely nutty) on top of the MWI (a little bit nutty), had me chewing my fingernails.

  • Lawrence B. Crowell

To collin237: The complexity of the detector states comes in because there may be a whole gemisch of states between the system states and some quantum state for the “needle” of the detector. What you say amounts to some great trace-out of the density matrix to get rid of the complexity. This works FAPP, but there is still a troubling aspect to this. We are effectively destroying a quantum bit in the process. Even if the phase of our system is well specified to start, the stuff we trace out represents reservoir states with lots of various phases, which in the Argand plane point everywhere (FAPP), and we are really just saying our nicely prepared phase is being buried away in that noise.

Something similar happens to an observer falling into a black hole. When the observer approaches the horizon, the notion of a well-defined particle number loses its meaning at the wavelengths of interest in the Hawking radiation; the observer is ‘inside’ the particle-producing region. As such the observer does not encounter an infinite quantity of particles. On the other hand, energy does have a local significance. In this case, however, although the Hawking flux does diverge as the horizon is approached, so does the static vacuum polarization, and the latter is negative relative to regions removed from the horizon. The infalling observer cannot distinguish operationally between the energy flux due to the oncoming Hawking radiation and that due to the fact that he is sweeping through the cloud of vacuum polarization.

These vacuum polarizations are regions where distinct points near the horizon can hold entangled vacua related approximately by unitary transformations. However, as the distance between two vacua in a polarization increases, this breaks down and one has “vacua plus particles,” and there is a randomization of phases. As the outgoing modes approach I^+, they ultimately acquire the Hawking temperature at infinity. The radiation is completely decoherent, thermal, and indistinguishable from any other black body source observed at a great distance.
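For scale, the Hawking temperature seen at infinity can be evaluated directly; a quick sketch for an assumed solar-mass black hole (my own added example):

```python
import math

# Hawking temperature at infinity: T = hbar c^3 / (8 pi G M k_B),
# evaluated for a solar-mass black hole with standard constants.
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg

T_H = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(T_H)  # ~6.2e-8 K: far colder than the CMB, so truly black-body but tiny
```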

    Lawrence B. Crowell

  • http://www.pieter-kok.staff.shef.ac.uk Pieter Kok

collin237, one bit is a single degree of freedom of a system with two possible values. When the system is quantum mechanical the bit is called a qubit.

  • collin237

    The important question is how can the universe be said to be a well-defined number of qubits? To make such a count, wouldn’t you have to dig deeper than sensible physics would allow?

  • Lawrence B. Crowell

    One of the strange things about q-bits is they can in certain entanglements be associated with negative entropy. This is a bit subtle to go into here. Yet it does suggest that lots of positive entropy in our local region of the universe is due to how EPR pairs become inaccessible across cosmological horizons, or e-foldings. We might then suspect that the total number of actual qubits in the universe is very small — maybe zero. This might go somewhere in telling us why the universe started out with a particularly low initial (initial wrt. inflation) entropy.
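The negative entropy mentioned here is usually made precise as negative conditional entropy, S(A|B) = S(AB) - S(B); a small sketch (my own illustration) showing it is -1 bit for a maximally entangled Bell pair:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], in bits, via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    return -sum(p * np.log2(p) for p in evals if p > 1e-12)

# Bell pair (|00> + |11>)/sqrt(2): a maximally entangled two-qubit state.
bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(bell, bell)

# Reduced state of qubit B alone (trace out qubit A):
rho_B = rho_AB.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

S_AB = von_neumann_entropy(rho_AB)  # 0: the joint state is pure
S_B = von_neumann_entropy(rho_B)    # 1: each half alone looks maximally mixed
print(S_AB - S_B)  # conditional entropy S(A|B) = -1 bit, negative only for entanglement
```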

    Lawrence B. Crowell

  • collin237

    Interesting way of looking at it. But the horizon isn’t necessarily a boundary in actual space. It could also be a boundary between what can and cannot be known. If the universe contains more information than can be stored in its qubits, it seems to me it might be stored in your dreaded “inner classical”. Just how I see it anyway.

  • Lawrence B. Crowell

    Event horizons occur when there is some region or congruence of geodesics which have zero length. This of course occurs where light rays exist. An event horizon is different from a light cone, which is also a type of null surface, where a light cone is a projective blow-up of a point. An event horizon is not a system of projective rays in quite that sense, but is a congruence of null geodesics.

    Further, an event horizon has a different structure for different spacetimes. A black hole event horizon is one where energy is conserved. The timelike Killing vector K_t defines K_t*K_t = g_{tt} = (1 – 2M/r) and maintains a system of isometries which conserve energy in the Schwarzschild metric. For cosmologies there is no such system of Killing vectors or isometries. The cosmological event horizon at

    $latex
    L~=~\sqrt{3/\Lambda},~~\Lambda~=~{\rm cosmological~constant}
    $

    defines regions of spacetime beyond which we can’t communicate. In a spacetime diagram this involves a system of null rays which connect to the big bang or the initial quantum event (so we can see all the way back!) but where these null rays spread out and then converge on our past light cone. This is because the light cones for different frames point in different directions. In other words, a spatial surface in the universe may be flat, but it is not embedded in a flat spacetime. As a result, from a qubit perspective there may be EPR pairs in one observer’s frame which are entangled with another region, where any teleportation fidelity is limited by the cosmological horizon.

    Lawrence B. Crowell
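    Plugging in an assumed present-day value of the cosmological constant (Lambda ~ 1.1e-52 m^-2, a measured figure not quoted in the comment) gives a feel for the scale of this horizon:

    ```python
    import math

    # Assumed observed value of the cosmological constant (not from the comment):
    LAMBDA = 1.1e-52            # m^-2
    LIGHT_YEAR = 9.4607e15      # m

    L = math.sqrt(3 / LAMBDA)   # de Sitter horizon scale, in meters
    L_gly = L / LIGHT_YEAR / 1e9
    # L comes out around 1.65e26 m, roughly 17 billion light years.
    ```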

  • collin237

    If there are no Killing Vectors, how can there be a preferred definition of L?

    And what about the “homogeneous distribution” ansatz of FRW? Isn’t that an isometry?

  • Lawrence B. Crowell

    The cosmological event horizon is frame dependent. A galaxy “way over there” sees a different bubble than we do. The cosmological horizon exists in a way similar to how the Rindler horizon exists for an accelerated frame.

    Ned Wright’s website has some diagrams of what I am about to describe, in particular on page 3. Draw a vertical line and draw little light cones along it. Now draw two slanted lines connected to the vertical line at the starting point, with little light cones which are skewed away from the vertical line. Keep doing this with more and more lines which diverge away, closer to the horizontal and with light cones skewed further away.

    Now choose a point on the vertical line; that is your “now,” or time = 0. Draw a null line from your now which is tangent to all the skewed light cones on the other lines. This will define a football-shaped curve which connects up with the origin = big bang. This defines the radial distance to which you can observe the past universe. Anything outside this teardrop- or football-shaped null ray is outside of your capability to observe. What is interesting is that you can observe virtually all the way to the beginning, where this includes the earliest regions of the universe that are fantastically distant. The CMB region is about 80 billion light years out on our current Hubble frame! This is because the spatial points have been comoved outwards, which is why we skew those light cones on the off-vertical lines! If we could observe neutrinos or gravitational waves from even earlier regions of the universe, we would be seeing regions that on the Hubble frame are now vastly distant from us.

    Now with the diagram draw future-directed null rays, which will define an open cone. The region in the cone defines the future region of the universe we can send a signal to. It is clear there is a whole lot of universe we can never send a signal to, which includes regions that we can see in the past. So a distant galaxy we see out there could be outside our domain of future causal contact.

    The cosmological horizon is a distance which defines this past and future domain in any local observer’s domain of observation or signal sending. This distance

    $latex
    L~=~\sqrt{3/\Lambda}
    $

    defines loosely the region where the comoving of points on a spatial manifold begins to shift these points away fast enough to prevent future contact. It is similar in a way to the Rindler horizon where an inertial observer is no longer able to send a message to an accelerated observer. Of course there are differences here, for here the local horizon is induced by the comoving of points on a Hubble frame.

    Lawrence B. Crowell

  • collin237

    I was going to ask you what would happen if an entangled pair of qubits was sent from an emitter to a pair of receivers that had moved out of the range of ever being able to compare their results, but it occurred to me that by then the Hubble expansion might have stretched the qubits so widely that they wouldn’t be detectable anymore. Is that actually a mitigating issue, or is there something about decoherence that I’m missing?

  • Lawrence B. Crowell

    Entangled pairs can exist across event horizons. However, there is a loss of fidelity in teleportation. Alice and Bob may share an entangled Bell state |B), and with it Alice may teleport another state |Y) to Bob. So the two states are coupled to each other, e.g. ~ |Y)|B) (I am using “)” because the caret symbols have problems). Alice may then pass the |Y) state through a CNOT gate and pass the output state |Y) through the four possible projections. Bob’s state will also be similarly rotated, but he needs the classical signal from Alice to ascertain the value of this output. In an ideal case this can be accomplished and the teleported state to Bob is then correctly deduced. Instead of there being just 4 possible projections there can in principle be 2^N projections, and the teleportation process can parallel-process many states.

    BTW, in a quantum computer a CNOT can be arranged simply with ions in a linear laser trap. If an ion is vibrating due to trapping-phonon interactions with neighbors, then no pi-pulse rotation is performed, but if the ion is not vibrating then a pi rotation (change of qubit state) is performed. This was outlined by Cirac and Zoller over 10 years ago.
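    The conditional-flip logic is easy to see in matrix form. This is a generic CNOT sketch, not a simulation of the Cirac-Zoller ion-trap scheme itself:

    ```python
    import numpy as np

    # CNOT: flip the target qubit if and only if the control qubit is |1>,
    # mirroring the "pi pulse only when the condition holds" logic of the trap.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    def ket(c, t):
        """Basis state |c, t> in the order |00>, |01>, |10>, |11>."""
        return np.eye(4, dtype=complex)[2 * c + t]

    assert np.allclose(CNOT @ ket(1, 0), ket(1, 1))  # control set: target flips
    assert np.allclose(CNOT @ ket(0, 0), ket(0, 0))  # control clear: no change

    # A superposed control produces entanglement: (|00> + |11>)/sqrt(2).
    bell = CNOT @ ((ket(0, 0) + ket(1, 0)) / np.sqrt(2))
    ```

    That last line is the standard way a CNOT manufactures the Bell states used in teleportation.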

    Now if we introduce horizons into this picture things become more difficult. A classical case with the Rindler wedge is where an inertial observer is unable to transmit data to an accelerated observer in region II outside region I bounded by the particle horizon. In this case if Alice is the inertial observer she can’t send the classical result to Bob on the accelerated frame. So even though Bob has the teleported state he is unable to read it without completely adulterating it, absent Alice’s classical key.

    You might win the lottery, but if you can’t find the ticket you can’t claim the winnings.

    In the case of cosmology we can observe regions as far back as the initial event (quantum tunnelling, bounce, D3-brane collision, etc.). The further back you look, the further out that stuff is on the “current” Hubble frame. In fact it is so far out that we can’t observe it in its later state. We can only observe galaxies up to about 0.7 billion years after the big bang. So at a later stage of evolution this stuff is not causally connected to us. Similarly we might see some galaxy cluster way out there with z ~ 7, but if it is beyond the cosmological horizon distance these bodies are being comoved away such that we could never send a signal back to them. With inflation, and now the accelerated evolution of the universe, local regions are being more rapidly isolated from each other in this way. So suppose that in the above galaxy we (Alice) happen to share an entangled pair with an observer there (Bob). We make our projections on the state, and we try to send our signal to Bob. Bob never receives the information and so can never decipher the teleported state correctly according to this prescription.

    Bob is like Napoleon: “Josephine (Alice), why don’t you ever write me?”

    Things also become complicated due to Hawking-Gibbons-Unruh radiation from horizons. This can further introduce noise into the communication channel, which with appropriate quantum error correction codes can probably be managed if the Hamming distance in any interval of time (sampling time) is not terribly large. This gets into a whole different kettle of fish for later.

    Lawrence B. Crowell
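    As a toy illustration of how redundancy with a large enough Hamming distance manages channel noise, here is the classical 3-bit repetition code (distance 3 between codewords, so any single flip is correctable); it is a deliberately classical stand-in for a real quantum error correction code:

    ```python
    # Classical 3-bit repetition code: Hamming distance 3 between the two
    # codewords 000 and 111, so any single bit flip is corrected by majority vote.

    def encode(bit):
        return [bit, bit, bit]

    def correct(word):
        return max(set(word), key=word.count)   # majority vote

    noisy = encode(1)
    noisy[0] ^= 1                               # one bit-flip error in the channel
    assert correct(noisy) == 1                  # decoded correctly despite noise
    ```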

  • collin237

    Suppose an emitter sends out a signal
    cos a |B=0)|Y=0)+sin a |B=1)|Y=1)

    in two opposite directions. Each branch of the signal arrives at a detector. I would think the state would become
    cos a ( |B=0)|Y=0)|exists)+|B=1)|Y=1)|does not exist) )
    +e^iz sin a ( |B=0)|Y=0)|does not exist)+|B=1)|Y=1)|exists) )

    for some phase angle z.

    Then an observer at each detector would record a result, and they would meet in person to compare them. I would think they would find B=Y=0 with probability (cos a)^2 or B=Y=1 with probability (sin a)^2. The important point here is that the observers are the classical keys.

    I then consider a similar experiment in which you wait so long before sending the first signal that the detectors have receded from causal contact with each other.

    If the detectors actually do generate results, then in the first experiment, the results would have to be both a classical 0 or both a classical 1. And if those results were never brought together (neither by the observers meeting nor by any other means), there would be no evidence for the “anomalous” event of classical results appearing without being observed.

    In the second experiment, however, each observer knows that, if the other observer is at his detector, he must see the same result. However, it is possible that one or both of the observers might not be watching the detector. The only necessary difference between the two experiments is that in the second the detectors themselves are out of causal connection.

    If free will exists, then a strictly Bohmian interpretation is untenable. All other interpretations, as far as I know, agree that if there are any hidden variables, at least until detection they are mixed in the same proportion as the physical states they accompany, so that nothing more definite than the probabilities exists before detection. So the detection of an eigenstate is the beginning of that eigenstate’s existence.

    Therefore, in the second experiment, the detection of an eigenstate would violate either causality (both the same without knowing what each other are) or the Born Rule (results B ≠ Y possible, even though both have zero amplitude). The only conclusion is that in this case the detector does not display an eigenstate. Instead, it displays a non-ensemble mixture of the states. For example, a needle would break in half, or a screen would display two messages overlapping.

    It’s interesting to consider what the analogous result in terms of Hawking radiation would mean. Such a mixture could be described algebraically as a violation of the Completeness Principle, or semantically as a violation of the Axiom of Restricted Comprehension. If this exists, it would be an obvious candidate for Dark Matter.

  • Lawrence B. Crowell

    If I understand you (Collin237) right you appear to be confusing the classical information with some sort of “inner causal” connection.

    Suppose that you and I share a pair of quantum states which are entangled. Say these states are due to the decay of a spin-0 boson into two fermions. You and I are separated by some considerable distance, so we both then share this Bell state. Now suppose that I have another state |Y) = a|+) + b|-) that I want to teleport to you. I then entangle this state with the Bell state, so I have |Y)|B). By such an entanglement, since you hold part of the EPR pair, your state is similarly entangled. I then make a measurement of the state I want to teleport and find one of the four outcomes

    |+)|-) – |-)|+)

    |+)|-) + |-)|+)

    |+)|+) – |-)|-)

    |+)|+) + |-)|-)

    What I do is then communicate each of these possible outcomes with the numbers 1, 2, 3, 4 as classical information. If you then take your Bell state and perform the following operations:

    1: 0 rotation

    2: pi rotation about the z axis

    3: pi rotation about x axis

    4: pi rotation about y axis

    you will reconstruct the state I am intending to transmit because of its entanglement with your part of the EPR pair.

    The point of going through all of this is that without the classical information you will not be able to reconstruct the state properly. Further, the total information communicated involves 2 bits, but from that the state you find exists in the whole H^4 state space, or points on the Bloch sphere.

    Lawrence B. Crowell
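    The four-outcome protocol described above can be checked end-to-end with a small numpy simulation. The amplitudes, qubit ordering, and gate conventions below are illustrative choices, not taken from the comment:

    ```python
    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    def kron(*ops):
        out = np.array([[1.0 + 0j]])
        for op in ops:
            out = np.kron(out, op)
        return out

    # State to teleport; the amplitudes are made up for illustration.
    psi = np.array([0.6, 0.8], dtype=complex)

    # Qubit order: 0 = Alice's unknown state, 1 = Alice's Bell half, 2 = Bob's.
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell)            # 8-component joint state

    # Alice entangles her qubits and rotates them into the Bell basis.
    state = kron(CNOT, I2) @ state        # CNOT: qubit 0 controls qubit 1
    state = kron(H, I2, I2) @ state       # Hadamard on qubit 0

    # Project on each of Alice's four possible outcomes (her 2 classical bits),
    # then apply Bob's conditional Pauli correction.
    for m0 in (0, 1):
        for m1 in (0, 1):
            proj = kron(np.outer(I2[m0], I2[m0]),
                        np.outer(I2[m1], I2[m1]), I2)
            bob = (proj @ state).reshape(2, 2, 2)[m0, m1]
            bob = bob / np.linalg.norm(bob)
            if m1:                        # second classical bit -> X correction
                bob = X @ bob
            if m0:                        # first classical bit -> Z correction
                bob = Z @ bob
            assert np.allclose(bob, psi)  # state recovered in all four branches
    ```

    Without the two classical bits (m0, m1), Bob cannot pick the right correction, which is exactly the point about the classical channel made above.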

  • mathematician

    What is the definition of “classical information” in a quantum universe?

  • Lawrence B. Crowell

    We do have a bit of a classical/quantum split. Classical information consists of bits which are not defined according to quantum states, nor do they have any fundamental unit of action (hbar). One of the big questions underlying this is “why the classical world?” We all have a sense that the classical trajectory or orbit of even the biggest object is built up from quantum paths.

    Lawrence B. Crowell

  • collin237

    I then make a measurement of the state I want to teleport and find one of the four outcomes

    1. Why would you find mixtures like that?
    2. I assume “rotation” refers to an SU2xSU2 group. Is that correct?
    3. Are the x, y, and z axes only an isospin basis, or do they have actual directions in a physical frame?
    4. What is a zero rotation?

  • collin237

    It occurred to me that Sean’s “horror” at the title of Susskind’s book may not be that far off the mark. Not only does a black hole have a horizon that hides things from observers; it also has a force that destroys any observers that get too close to it. So the universe is made “safe” for the laws of physics, because the violations a black hole commits cannot be known about.

    Any attempt to deduce the structure of a black hole from the laws of physics is a total waste of time.

  • Lawrence B. Crowell

    To transmit information about the state you are teleporting you must communicate on the classical channel information about the entanglement of the shared Bell state and the state to be transmitted. You then communicate the outcomes. The rotations performed by the recipient will rotate their state into a form which recovers the teleported state. This can be shown with rotations by Pauli matrices. It is not excessively difficult to do, but does require a page or two of calculation.

    Lawrence B. Crowell

  • Lawrence B. Crowell

    PS, by rotations with Pauli matrices I mean generators of the form

    $latex
    U(\theta)~=~\exp(i\sigma_a\theta_a)
    $

    where the index a labels the axis of rotation.
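    In this convention U(theta) = exp(i sigma_a theta_a), a theta of pi/2 about the x axis already flips a qubit up to a global phase (conventions differ by this factor of two). A quick numerical check, using sigma^2 = I to expand the exponential:

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)

    def U(sigma, theta):
        """exp(i*theta*sigma) via sigma^2 = I: cos(theta) I + i sin(theta) sigma."""
        return np.cos(theta) * np.eye(2, dtype=complex) + 1j * np.sin(theta) * sigma

    # theta = pi/2 gives i*sigma_x: a bit flip, up to an overall phase.
    ket0 = np.array([1, 0], dtype=complex)
    out = U(X, np.pi / 2) @ ket0
    assert np.allclose(out, 1j * np.array([0, 1]))
    ```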

    Susskind’s black hole complementarity and holographic principle indicate that states which are identifiable on a membrane which wraps the black hole horizon are dual to states inside the black hole. This does suggest some deep connections with quantum gravity, for if the black hole becomes very small the horizon might become “blurred” by quantum fluctuations, and the fields associated with the horizon and those in the interior will exhibit new physics.

    Lawrence B. Crowell

  • collin237

    Is the Holographic Principle a higher-dimensional version of Cauchy’s Integral from complex calculus?

  • mathematician

    I think the Holographic Principle is just a way for people to pretend they’re developing a profoundly deep understanding, when really they’re just superficially scratching the surface. 😉

  • Lawrence B. Crowell

    The holographic principle has connections with the S^5 ~ AdS duality in superstring theory. In lower dimensions it tells us that field amplitudes in three-dimensional space at a given “time” are projections from fields pinned to event horizons.

    Lawrence B. Crowell

  • collin237

    Isn’t that just replacing the “inner causal” with an “outer causal”?

    And what validity does it have for a non-stringist like me?

  • Lawrence B. Crowell

    I am not sure what is meant by inner or outer causality.

    If the holographic theory is correct then metric fluctuations of spacetime should manifest themselves on a scale larger than the Planck length. Y. Jack Ng has shown that the fluctuations in a three dimensional volume are equated to those on a two dimensional bounding region by

    $latex
    \Big(\frac{\delta L}{L}\Big)^3~\ge~\Big(\frac{L_p}{L}\Big)^2,~~L_p~=~\sqrt{G\hbar/c^3}
    $

    and so by solving for the delta L fluctuation these occur on a scale considerably larger than the tiny Planck length L_p.

    The AdS/CFT is largely a stringy result. However, aspects of string theory show up in a number of forms, such as sphere packing and quantum codes. It also sneaks its way into Jordan exceptional algebras in loop quantum variables. So physics probably has both stringy and loopy aspects to it.

    Lawrence B. Crowell

  • Lawrence B. Crowell

    It looks like I messed up the tex on that equation. Ng’s fluctuation equation for a volume of length scale L bounded by an area is

    $latex
    \Big(\frac{\delta L}{L}\Big)^3~\ge~\Big(\frac{L_p}{L}\Big)^2
    $

    L. C.
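    Solving Ng's bound for the minimum fluctuation gives delta L ~ (L_p^2 L)^(1/3). A quick numerical check for a 1-meter region (the Planck length value is assumed, not from the comment):

    ```python
    # Ng's bound (delta L / L)^3 >= (L_p / L)^2 rearranges to a minimum
    # fluctuation delta L >= (L_p^2 * L)^(1/3).

    PLANCK_LENGTH = 1.616e-35          # meters (assumed value)

    def min_fluctuation(L):
        return (PLANCK_LENGTH ** 2 * L) ** (1 / 3)

    dL = min_fluctuation(1.0)          # for a 1-meter region
    # dL is around 6.4e-24 m: about eleven orders of magnitude above L_p itself.
    ```

    Tiny, but enormously larger than the Planck length, which is what makes the holographic prediction at least conceivably testable.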

  • collin237

    I mean how can an interior field be controlled from a surface it can’t classically communicate with?

    If the surface is the cosmological horizon, this is clearly not a moot point. Without a non-relativistic causality, it would mean everything we observe has already been decided billions of years ago.

  • Lawrence B. Crowell

    The black hole duality indicates that an exterior observer, at “infinity,” will see fields which have entered a black hole as harmonic oscillator modes (or string vibrations) on a membrane a Planck length above the event horizon. The old Russian term for a black hole was a “frozen star,” since because of time dilation nothing is ever seen to cross r = 2M. Then as the black hole decays away these modes end up being radiated into three-space (or 4-dim spacetime). Conversely, for an observer who enters the black hole their quantum information is not seen pinned to the event horizon, but taken in by the singularity, or some type of quantum maw in the interior.

    This is the matter of black hole complementarity, which is that the observers outside and inside a black hole observe the same field amplitudes, but in complementary forms. Loosely put, the exterior viewpoint has fields on this membrane above the horizon, similar to a type of D2-brane, equivalent to fields in space removed from the black hole. So there is a 3 + 2 dimensional perspective on fields as measured outside the black hole. The interior is given by a D5-brane, and the duality says that the amplitudes on the two are equivalent.

    Lawrence B. Crowell

  • collin237

    By “3+2 dimensional” do you mean that the two interpretations (internal and external) are interpolated by a parameter that behaves like an extra time-like dimension? (As if the internal and external observers are looking at the surfaces of bread surrounding a 5-dimensional sandwich?)

  • Lawrence B. Crowell

    It gets a little deeper than this. The fields on a three-dimensional space, or space plus time in four dimensions, are determined by fields on a slice within a two-dimensional null congruence. This relationship involves some subtle matters of Lagrangians on a full space and the Chern-Simons Lagrangian on a subchain or cycle. This can lead to ways of constructing a “mass” for black holes from gauge charges, similar to BPS black holes. In this way it leads to 3 + 2 dimensions which are dual to the CFT on S^5.

    Lawrence B. Crowell

  • Pingback: Where Does the Entropy Go? | Cosmic Variance | Discover Magazine()

  • Pingback: String Theory and the Multiverse - Christian Forums()

  • Darrel Lancaster

    Theoretical arguments are great, and allow for the completely improbable to become possible. The argument over how information could be bombarded at the event horizon, and the supposition that nothing can travel faster than the speed of light…what if it can, and that is why we see a black hole, as there is nothing being reflected.

Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Here are some of his favorite blog posts, home page, and email: carroll [at] cosmicvariance.com .
