Guest Post: David Wallace on the Physicality of the Quantum State

By Sean Carroll | November 18, 2011 9:46 am

The question of the day seems to be, “Is the wave function real/physical, or is it merely a way to calculate probabilities?” This issue plays a big role in Tom Banks’s guest post (he’s on the “useful but not real” side), and there is an interesting new paper by Pusey, Barrett, and Rudolph that claims to demonstrate that you can’t simply treat the quantum state as a probability calculator. I haven’t gone through the paper yet, but it’s getting positive reviews. I’m a “realist” myself, as I think the best definition of “real” is “plays a crucial role in a successful model of reality,” and the quantum wave function certainly qualifies.

To help understand the lay of the land, we’re very happy to host this guest post by David Wallace, a philosopher of science at Oxford. David has been one of the leaders in trying to make sense of the many-worlds interpretation of quantum mechanics, in particular the knotty problem of how to get the Born rule (“the wave function squared is the probability”) out of this formalism. He was also a participant at our recent time conference, and the co-star of one of the videos I posted. He’s a very clear writer, and I think interested parties will get a lot out of reading this.

———————————-

Why the quantum state isn’t (straightforwardly) probabilistic

In quantum mechanics, we routinely talk about so-called “superposition states” – both at the microscopic level (“the state of the electron is a superposition of spin-up and spin-down”) and, at least in foundations of physics, at the macroscopic level (“the state of Schrodinger’s cat is a superposition of alive and dead”). Rather a large fraction of the “problem of measurement” is the problem of making sense of these superposition states, and there are basically two views. On the first (“state as physical”), the state of a physical system tells us what that system is actually, physically, like, and from that point of view, Schrodinger’s cat is seriously weird. What does it even mean to say that the cat is both alive and dead? And, if cats can be alive and dead at the same time, how come when we look at them we only see definitely-alive cats or definitely-dead cats? We can try to answer the second question by invoking some mysterious new dynamical process – a “collapse of the wave function” whereby the act of looking at half-alive, half-dead cats magically causes them to jump into alive-cat or dead-cat states – but a physical process which depends for its action on “observations”, “measurements”, even “consciousness”, doesn’t seem scientifically reputable. So people who accept the “state-as-physical” view are generally led either to try to make sense of quantum theory without collapses (that leads you to something like Everett’s many-worlds theory), or to modify or augment quantum theory so as to replace it with something scientifically less problematic.

On the second view (“state as probability”), Schrodinger’s cat is totally unmysterious. When we say “the state of the cat is half alive, half dead”, on this view we just mean “it has a 50% probability of being alive and a 50% probability of being dead”. And the so-called collapse of the wavefunction just corresponds to us looking and finding out which it is. From this point of view, to say that the cat is in a superposition of alive and dead is no more mysterious than to say that Sean is 50% likely to be in his office and 50% likely to be at a conference.

Now, to be sure, probability is a bit philosophically mysterious. It’s not uncontroversial what it means to say that something is 50% likely to be the case. But we have a number of ways of making sense of it, and for all of them, the cat stays unmysterious. For instance, perhaps we mean that if we run the experiment many times (good luck getting that one past PETA), we’ll find that half the cats live, and half of them die. (This is the frequentist view.) Or perhaps we mean that we, personally, know that the cat is alive or dead but we don’t know which, and the 50% is a way of quantifying our lack of knowledge. (This is the Bayesian view.) But on either view, the weirdness of the cat still goes away.

So, it’s awfully tempting to say that we should just adopt the “state-as-probability” view, and thus get rid of the quantum weirdness. But this doesn’t work, for just as the “state-as-physical” view struggles to make sense of macroscopic superpositions, so the “state-as-probability” view founders on microscopic superpositions.

Consider, for instance, a very simple interference experiment. We split a laser beam into two beams (Beam 1 and Beam 2, say) with a half-silvered mirror. We bring the beams back together at another such mirror and allow them to interfere. The resultant light ends up being split between (say) Output Path A and Output Path B, and we see how much light ends up at each. It’s well known that we can tune the two beams to get any result we like – all the light at A, all of it at B, or anything in between. It’s also well known that if we block one of the beams, we always get the same result – half the light at A, half the light at B. And finally, it’s well known that these results persist even if we turn the laser so far down that only one photon passes through at a time.

According to quantum mechanics, we should represent the state of each photon, as it passes through the system, as a superposition of “photon in Beam 1” and “photon in Beam 2”. According to the “state as physical” view, this is just a strange kind of non-local state for the photon to be in. But on the “state as probability” view, it seems to be shorthand for “the photon is either in Beam 1 or Beam 2, with equal probability of each”. And that can’t be correct. For if the photon is in Beam 1 (and so, according to quantum physics, described by a non-superposition state, or at least not by a superposition of beam states) we know we get result A half the time, result B half the time. And if the photon is in Beam 2, we also know that we get result A half the time, result B half the time. So whichever beam it’s in, we should get result A half the time and result B half the time. And of course, we don’t. So, just by elementary reasoning – I haven’t even had to talk about probabilities – we seem to rule out the “state-as-probability” rule.

Indeed, we seem to be able to see, pretty directly, that something goes down each beam. If I insert an appropriate phase factor into one of the beams – either one of the beams – I can change things from “every photon ends up at A” to “every photon ends up at B”. In other words, things happening to either beam affect physical outcomes. It’s hard at best to see how to make sense of this unless both beams are being probed by physical “stuff” on every run of the experiment. That seems pretty definitively to support the idea that the superposition is somehow physical.
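
To see the arithmetic behind this argument, here is a minimal numerical sketch (Python, using one standard convention for an idealized lossless 50/50 beamsplitter; the specific conventions and numbers are illustrative assumptions, not anything forced by the experiment). Read as a physical superposition, a phase inserted into one beam swings the outcome between “all the light at A” and “all the light at B”; read as “really Beam 1 or really Beam 2, with 50% probability each”, the prediction is stuck at half-and-half no matter what.

    import numpy as np

    # One common convention for a lossless 50/50 beamsplitter (an assumption
    # of this sketch, not a unique choice).
    BS = np.array([[1, 1j],
                   [1j, 1]]) / np.sqrt(2)

    def output_probs(a1, a2, phase=0.0):
        """Photon with amplitudes (a1, a2) in Beam 1 / Beam 2, an optional extra
        phase inserted into Beam 2, recombined at the second beamsplitter.
        Returns the probabilities of arriving at Output Paths A and B."""
        out = BS @ np.array([a1, a2 * np.exp(1j * phase)], dtype=complex)
        return np.abs(out) ** 2

    # The superposition produced by the first beamsplitter from a photon in one port:
    sup = BS @ np.array([1.0, 0.0])   # amplitudes (1, i)/sqrt(2) for Beams 1 and 2

    for phase in (0.0, np.pi):
        # "State as physical": the two beam amplitudes interfere at the second mirror.
        p_super = output_probs(sup[0], sup[1], phase)
        # "State as probability": the photon is really in one beam or the other,
        # 50% each, so we average the two single-beam predictions.
        p_mix = 0.5 * output_probs(1, 0, phase) + 0.5 * output_probs(0, 1, phase)
        print(f"phase = {phase:.2f}:  superposition ->", np.round(p_super, 3),
              "  mixture ->", np.round(p_mix, 3))

The “mixture” line is just the elementary reasoning above turned into arithmetic: whichever beam the photon is “really” in, each single-beam input gives a 50/50 split, so the average is 50/50 and the inserted phase can make no difference.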

There’s an interesting way of getting around the problem. We could just say that my “elementary reasoning” doesn’t actually apply to quantum theory – it’s a holdover of old, bad, classical ways of thinking about the world. We might, for instance, say that the kind of either-this-thing-happens-or-that-thing-does reasoning I was using above isn’t applicable to quantum systems. (Tom Banks, in his post, says pretty much exactly this.)

There are various ways of saying what’s problematic with this, but here’s a simple one. To make this kind of claim is to say that the “probabilities” of quantum theory don’t obey all of the rules of probability. But in that case, what makes us think that they are probabilities? They can’t be relative frequencies, for instance: it can’t be that 50% of the photons go down Beam 1 and 50% go down Beam 2. Nor can they quantify our ignorance of which beam the photon goes down – because we don’t need to know which beam it goes down to know what it will do next. So to call the numbers in the superposition “probabilities” is question-begging. Better to give them their own name, and fortunately, quantum mechanics has already given us a name: amplitudes.

But once we make this move, we’ve lost everything distinctive about the “state-as-probability” view. Everyone agrees that according to quantum theory, the photon has some amplitude of being in Beam 1 and some amplitude of being in Beam 2 (and, indeed, that the cat has some amplitude of being alive and some amplitude of being dead); the question is, what does that mean? The “state-as-probability” view was supposed to answer, simply: it means that we don’t know everything about the photon’s (or the cat’s) state; but that now seems to have been lost. And the earlier argument that something goes down both beams remains unscathed.

Now, I’ve considered only the most straightforward kind of state-as-probability view you can think of – a view which I think is pretty decisively refuted by the facts. It’s possible to imagine subtler probabilistic theories – maybe the quantum state isn’t about the probabilities of each term in the superposition, but it’s still about the probabilities of something. But people’s expectations have generally been that the ubiquity of interference effects makes that hard to sustain, and a succession of mathematical results – from classic results like the Bell-Kochen-Specker theorem, to cutting-edge results like the recent theorem by Pusey, Barrett and Rudolph – have supported that expectation.

In fact, only one currently-discussed state-as-probability theory seems even half-way viable: the probabilities aren’t the probability of anything objective, they’re just the probabilities of measurement outcomes. Quantum theory, in other words, isn’t a theory that tells us about the world: it’s just a tool to predict the results of experiment. Views like this – which philosophers call instrumentalist – are often adopted as fall-back positions by physicists defending state-as-probability takes on quantum mechanics: Tom Banks, for instance, does exactly this in the last paragraph of his blog entry.

There’s nothing particularly quantum-mechanical about instrumentalism. It has a long and rather sorry philosophical history: most contemporary philosophers of science regard it as fairly conclusively refuted. But I think it’s easier to see what’s wrong with it just by noticing that real science just isn’t like this. According to instrumentalism, palaeontologists talk about dinosaurs so they can understand fossils, astrophysicists talk about stars so they can understand photographic plates, virologists talk about viruses so they can understand NMR instruments, and particle physicists talk about the Higgs boson so they can understand the LHC. In each case, it’s quite clear that instrumentalism is the wrong way around. Science is not “about” experiments; science is about the world, and experiments are part of its toolkit.

  • Frank Martin DiMeglio

    Our very thoughts are ultimately, to a limited extent of course, integrated and interactive with sensory experience (including gravity, inertia, and electromagnetism). Any complete and accurate explanation/description of physical phenomena/sensory experience — including vision — (at bottom) has to address this; and with this, of course, the completeness of thoughtful description, as it is related thereto, follows/pertains. The experience of the body cannot be avoided in any final/true/fundamental explanation, description, or examination of physics/sensory experience and thought/genius.

  • Ellipsis

    There was a nice Nature article about this a few months ago, with actual experimental evidence for the realist perspective:

    http://www.nature.com/nature/journal/v474/n7350/full/nature10120.html

  • Doubter

    Thank you! Thank you! Thank you! You have crystallized the vague discomfort that I had with Tom Banks’ essay. The difference between amplitude and probability distribution is at the root of why all the classical analogies don’t work.

    I’ll note that anyone who has bet on horse racing has an idea of probability rescaling, and that is not the mystery. The mystery arises because quantum horse races are represented by amplitudes.

  • Ernest

    Science is not about experiments? Well, science is a method, and one of the steps in the method is experiment, isn’t it? So experiment is an important part of it.

    Not sure what to think about this wavefunction thing. For instance, this morning I was working from home when I suddenly started hearing a noise in the house; it turned out to be an old alarm set up to ring at noon. Since I am usually at work, I had never turned it off. Had I gone to the office, the alarm would have kept going on and on. But then I started thinking: if nobody had been home, would the alarm really have gone off, or was it my presence that triggered it? According to quantum mechanics, the alarm would be in a superposition state, triggering and not triggering at the same time. However, I happened to hear the alarm, and the thing is I had no intention of listening to it (aka measuring it).

    It looks like the alarm’s superposition state is independent, exists on its own, and is always both triggering and not triggering, and it is only my sensory limitation that makes me see it one way or the other. In other words, it seems the probability applied to me, not to the alarm itself.

  • anon

    The way you describe instrumentalism, there is no such thing as a theorist.

  • Kevin

    Please forgive me if I offend with my naiveness (naivete?):

    “So, just by elementary reasoning – I haven’t even had to talk about probabilities – we seem to rule out the ‘state-as-probability’ rule.”

    Can one rule in (or rule out) such a thing based on words alone? I think I understand the appeal of using ‘elementary reasoning’ to properly understand certain things. But my understanding is that QM can be understood properly only in the language of mathematics, and so ruling something in or out regarding QM must make use of the math.

    Kevin

  • CSCO

    My thought when reading this is to think of what we mean when we say photon. We have to think of photons in terms of events.

    A change of color at a point on some photographic paper is an event we might associate with a photon. We call it a photon because we have observed that under certain circumstances, we can finitely categorize certain types of events and associate them with things we call particles. For instance, we associate certain tracks in a bubble chamber with certain types of particles. We can do this because we recognize very distinct types of patterns.

    In practical terms, this is no different from one’s ability to identify and classify certain types of moves performed by an ice skater or dancer, or even certain plays executed by a football team on the field. None of these descriptions have meaning outside the context of human interpretation. Whether the patterns exist and obey some abstract relational ruleset is interesting, but our choice of description is completely arbitrary.

    In any case, in physics we can observe direct cause and effect relationships between certain events that lead to certain patterns. The experimentalist view plays on this notion that what is evolving over time is not a physical thing, but rather some ability to predict the outcome of a causal chain. We sometimes might define something like the scattering matrix to keep all of the possible outcomes organized, and we might place some amplitude into each block in order to identify the relative likelihood of a particular pattern emerging after a particular event. However, we have to keep in mind that it is ultimately ourselves that have constructed such convenient mnemonics for our own purposes and not nature for its purposes.

    If we want to get to what is “real” we have to think of things abstractly, much in the way that we think of groups as abstract entities, where it is the relationship between elements that define the group.

    So our determination of amplitude is a determination of the strength of the relationship between elements of what we think of as reality. Those elements have certain finite quantities that determine the shape of our more continuous view of the world. It’s the relationship between possible outcomes that we want to keep track of. So when we think of the cat, we want to think of how the relationship between the outcomes evolves, just as we want to think about how the relationship of two entangled photons evolves as they go off to distant corners, and not about how the photons evolve.

    So in this sense, the experimentalist view might seem weak, but it actually is quite strong, because it recognizes that what we are interested in is not the physical object but the relationship between physical objects. That relationship is the hallmark of reality and not vice versa.

  • David Brown

    Is quantum theory with the infinite nature hypothesis actually the most fundamental theory of nature? Consider some of Edward Fredkin’s ideas on nature:
    (1) At some scale, space, time, and energy are discrete.
    (2) The number of possible states for every unit volume of space-time is finite.
    (3) There are no infinities, infinitesimals, or locally generated random variables.
    (4) The fundamental process of nature must be a simple deterministic digital process.
    http://en.wikipedia.org/wiki/Edward_Fredkin
    Consider 3 fundamental hypotheses:
    (1) Nature is finite and digital with Fredkin-Wolfram information underlying quantum information.
    (2) There are a finite number of alternate universes, and this number can be calculated using modified M-theory with Wolfram’s mobile automaton.
    (3) The maximum physical wavelength equals the Planck length times the Fredkin-Wolfram constant.
    http://en.wikipedia.org/wiki/Stephen_Wolfram

  • pranjal borthakur

    Sir, I have never understood the Schrodinger wave equation.

  • Low Math, Meekly Interacting

    But photons aren’t anything like dinosaurs, so I fail to see why that or any of the other examples given are even relevant to the instrumentalist interpretation of quantum mechanics. I fail to see why agnosticism about processes or states (e.g. superpositions) that cannot be observed in the quantum realm (as opposed to the paleontological realm) is so untenable.

  • Chris

    #11: Dinosaurs are exactly like (several) photons. That’s the whole point of physics. If you just impose a distinction between dinosaurs and photons you’re back to the Copenhagen interpretation. Which, for all its flaws, is at least honest about putting in a sharp cutoff in this way.

  • John Merryman

    What if light really just expands out from its source and particles and waves are effects of disturbances to this light? Quanta seem to be the amount absorbed by atoms, i.e. the basis of mass, so when encountering mass, light collapses to the point of being absorbed by individual atoms. On the other hand, when passing around things, such as going through those slits, it is disturbed, causing fluctuations and ripples, thus creating the impression of waves, but it isn’t that light is waves in some medium, but that light is the medium and waves are the disturbance of this medium that are necessary to our observing it.
    As for probabilities, when we think of the passage of time, it is from past events to future ones, but the process creating time is of continual change and action, such that it is the events which go from being in the future to being in the past. A future cat has the probability of being either dead or alive, but it is the actual event of its potential demise that determines its health. Just as the actual event of our observing its state determines our level of knowledge. Consider the horse race: prior to its occurrence, there are multiple possible winners, but it is the actual events of the race which determine just which one really does win. It’s not that we travel the fourth dimension from yesterday to tomorrow, but that tomorrow becomes yesterday, because lots of things occur, most especially the rotation of the planet.

  • Low Math, Meekly Interacting

    Part of the problem is that dinosaurs don’t appear to behave at all like photons in a double-slit experiment, and we will never, ever be able to observe a dinosaur interfering with itself. Isn’t that fact part of the interpretive debate? That we never perceive superpositions, only infer their existence, and then almost exclusively from results gleaned from exquisitely sensitive apparatuses harboring isolated microscopic objects? Doesn’t that lead us right to the measurement problem?

    I actually find decoherence (to the extent I understand it) the most satisfying, but anyone who claims such a preference is eventually confronted by someone who insists you acknowledge the existence of all the other possibilities out there somewhere, and maybe asks you to take a bet that involves some version of you on a decoherent branch of the wavefunction winding up deceased. I guess I prefer agnosticism to all that.

  • Matt

    I’m a practicing theoretical physicist, and I don’t understand all the confusion — please someone explain it to me.

    We already have a natural object in QM with a statistical interpretation, namely, the density matrix. And density matrices are the natural generalization of classical probability distributions. In classical mechanics, the probability distribution is over classical states, and in quantum mechanics, the density matrix’s eigenvalues form a probability distribution over quantum state vectors.

    If we take the view that a density matrix’s eigenvalues are a probability distribution over its eigenvectors, and regard those eigenvectors (which are state vectors) as real, physical possible states (just like we treat classical states underlying a classical probability distribution as real, physical possible states), then we never run into contradiction with observation. So what’s stopping us from taking that point of view?
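
    To make that structural analogy concrete, here is a small numerical sketch (NumPy, with a completely made-up state): any density matrix is Hermitian, positive semidefinite, and has unit trace, so its eigenvalues are non-negative numbers summing to one, i.e. a probability distribution over its orthonormal eigenvectors.

        import numpy as np

        # An arbitrary illustrative density matrix: a 70/30 classical mixture of
        # two (non-orthogonal) qubit state vectors. Purely an invented example.
        psi1 = np.array([1.0, 0.0], dtype=complex)                 # |0>
        psi2 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)    # (|0> + |1>)/sqrt(2)
        rho = 0.7 * np.outer(psi1, psi1.conj()) + 0.3 * np.outer(psi2, psi2.conj())

        # The eigenvalues are non-negative and sum to one: they behave exactly like
        # a classical probability distribution over the (orthonormal) eigenvectors.
        probs, vecs = np.linalg.eigh(rho)
        print("eigenvalues (probabilities):", np.round(probs, 4),
              " sum:", round(float(probs.sum()), 10))
        print("eigenvectors (columns):")
        print(np.round(vecs, 4))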

    To say that state vectors themselves are statistical objects is to say that there are two levels of probability in quantum mechanics. But why give up the parsimony of having only one level of probability in QM if it’s not needed? And it’s not!

    When you make use of decoherence properly, you see that all probabilities after measurements always end up arising through density matrix eigenvalues automatically. And you automatically find that for macroscopic objects in contact with a realistic environment, the density-matrix eigenbasis is essentially always highly-classical-looking with approximately-well-defined properties for all classical observables — all this comes out automatically.
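
    And here is a toy version of that mechanism (an invented two-level “cat” plus a two-level environment; nothing here is meant as a realistic model): tracing out an environment whose two states overlap by an adjustable amount eps leaves the cat with a reduced density matrix whose off-diagonal terms are proportional to eps, so as the environment states become distinguishable the eigenbasis becomes the alive/dead basis and the eigenvalues become the Born-rule weights.

        import numpy as np

        # Invented "cat" amplitudes: alive with amplitude a, dead with amplitude b.
        a, b = np.sqrt(0.3), np.sqrt(0.7)

        def reduced_cat_density_matrix(eps):
            """Joint state a|alive>|E_alive> + b|dead>|E_dead>, with environment
            overlap <E_alive|E_dead> = eps. Returns the cat's reduced density
            matrix, i.e. the partial trace over the environment."""
            E_alive = np.array([1.0, 0.0])
            E_dead = np.array([eps, np.sqrt(1.0 - eps**2)])
            # Joint amplitudes as a 2x2 array: first index = cat, second = environment.
            joint = a * np.outer([1.0, 0.0], E_alive) + b * np.outer([0.0, 1.0], E_dead)
            # rho_cat[i, k] = sum_j joint[i, j] * conj(joint[k, j])
            return joint @ joint.conj().T

        for eps in (1.0, 0.1, 0.0):   # 1.0: no decoherence yet; 0.0: full decoherence
            print(f"environment overlap eps = {eps}:")
            print(np.round(reduced_cat_density_matrix(eps), 3))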

    So there’s no reason to insist on regarding state vectors as statistical objects. We can regard them as being as real and physical as classical states, even though for isolated, microscopic systems, they don’t always have well-defined properties for all naive classical observables — but why should weirdness for microscopic systems be viewed as at all contradictory? And again, in particular, the probabilities end up as density-matrix eigenvalues anyway, so why bother insisting on having a second level of probability at all?

    As for Schrodinger’s cat, the fact is that any realistic cat inside a reasonable-size non-vacuum environment is never going to stay in a weird macroscopic superposition of alive and dead for more than a sub-nanosecond, if that — its density matrix will rapidly decohere to classicality. The only way to maintain a cat (with its exponentially huge Hilbert space) in an alive/dead superposition for an observable amount of time is to place the cat in a near-vacuum at near-absolute zero, but then you can be sure it’s going to be dead.

    What about putting it in a perfectly-sealed box in outer space? Well, even in intergalactic space, the CMB causes a dust particle to decohere to classicality in far less than a microsecond. So it just doesn’t happen in everyday life — and if that’s the case, then why are we worried that it should seem counterintuitive?

    So the whole Schrodinger-cat paradox is a complete unphysical fiction and a red herring, unless you do it with an atom-sized Schrodinger-kitten — and that’s been done experimentally!

  • Tom Banks

    There are several interesting things to note about David Wallace’s post, but let me first deal with his contention that QM probabilities should not be thought of as probabilities. Let me begin by talking about observations at a fixed time.

    For every quantum state and every Hermitian operator, the math of QM allows you to calculate tr D A^n where D is the density matrix corresponding to the state, A is the operator and n is an arbitrary integer. From these quantities you can extract a bunch of positive numbers, summing to one, which QM claims are the probabilities for this operator to take on each of its possible values.
    The interpretation of this math is that if you prepare the system repeatedly, in the state D, and then measure A by coupling to a macroscopic system whose pointer points to a different value for each of the different eigenstates of A, then the frequency of observation of a particular value will be equal to the positive number extracted from the calculation. These predictions have, of course, been tested extensively, and work.
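
    As a concrete illustration of that recipe (a made-up 2x2 example, not anything taken from the post): pick a density matrix D and a Hermitian operator A, extract the probabilities as tr(D P_i) with P_i the projectors onto A’s eigenvectors, and check that they are positive, sum to one, and reproduce every moment tr(D A^n).

        import numpy as np

        # Made-up 2x2 example: a density matrix D (Hermitian, positive, trace one)
        # and a Hermitian operator A.
        D = np.array([[0.85, 0.15],
                      [0.15, 0.15]])
        A = np.array([[1.0, 2.0],
                      [2.0, -1.0]])

        eigvals, eigvecs = np.linalg.eigh(A)

        # Probabilities extracted from the state: p_i = tr(D P_i), where P_i projects
        # onto the i-th eigenvector of A.
        probs = np.array([np.real(v.conj() @ D @ v) for v in eigvecs.T])
        print("eigenvalues of A :", np.round(eigvals, 4))
        print("probabilities    :", np.round(probs, 4),
              " sum:", round(float(probs.sum()), 10))

        # Consistency check: tr(D A^n) equals the n-th moment sum_i p_i * a_i^n.
        for n in range(1, 4):
            lhs = np.trace(D @ np.linalg.matrix_power(A, n))
            rhs = np.sum(probs * eigvals**n)
            print(f"n = {n}:  tr(D A^n) = {lhs:.4f},   sum_i p_i a_i^n = {rhs:.4f}")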

    Many of the things one normally talks about in these discussions involve probabilities of HISTORIES rather than of observations at a fixed time. I can’t go into the details but Gell-Mann and Hartle, in a series of beautiful papers, have shown that a similar sort of interpretation of mathematical quantities in QM in terms of probabilities of histories is valid when “the histories decohere”. The reason histories are more complicated is that they involve measurements of quantities at different times, which don’t commute with each other.
    However, it’s important to note that Gell-Mann and Hartle use only the standard math of QM (I’m not talking about their attempt to generalize the formalism) and that their interpretation follows from the probability interpretation at fixed time by rigorous mathematical reasoning.

    Given that the math suggests a probabilistic interpretation of fixed-time expectation values, which actually reproduces experimental frequencies, I can’t understand a statement that I shouldn’t interpret these things as probabilities just because they don’t satisfy some a priori rule that a philosopher derives from “pure thought” or “elementary reasoning”. The whole point of my post is that “elementary reasoning” is flawed because our brains are not subtle enough. The rigorous mathematical formulation of “elementary reasoning” is mathematical logic, and I think it’s quite interesting (and IMHO the most interesting thing in this rather stale discussion) that that formalism contains the seeds of its own destruction.

    The other interesting thing about David’s post is that it points up the drastic difference between the modes of thought of theoretical physicists and philosophers.

    Theoretical physics has three parts. It begins with the assumption that there’s a REAL WORLD out there, not just activity going on in our consciousness, and that the only way to access that real world is by doing experiments. I mean experiment in the most general sense of the term. A macroscopic trace left on a distant asteroid in the Andromeda galaxy is an experiment for an observer on Earth, if it is in principle possible for some advanced civilization in the distant future to send out a spacecraft or bounce a beam of light off the asteroid to bring back information of that trace.

    The second part of theoretical physics is mathematics.
    We build a mathematical model of the world and compare it to experiment according to some well defined rules. In QM these rules are: calculate the probabilities of different events using the density matrix formula, and compare those probabilities to frequencies in repeated experiments on the same quantum state.
    In quantum cosmology, a subject which is still under construction, we’ll never be able to repeat the experiment so we have to use the more subjective meaning of probability.

    Finally, there’s the story we tell about what these results mean and how they relate to our intuition. It’s a very important part of the whole game, but we’ve learned something very interesting over the years. The story can change drastically into another story that seems inconsistent with the first one, even when the math and the experiments change in a controlled way, whose essence is contained in the word “approximation”. In math an exponential function can be approximated by a linear one when its argument is small enough. In experiment an exponential behavior can look linear when we don’t measure large values of the control parameter. That is, these two features of our framework can change and they change together in a controlled way, with a quantitative measure of the error of approximation.

    Stories, however, change in a drastic manner. Newton’s absolute space, and absolute time, Galileo’s velocity addition rule, etc., if they’re taken as a priori intuitively obvious laws, simply do not admit the possibility of relativity. When I try to explain relativity to laymen, many of them have a hard time understanding how it could be possible that you could run to catch up with something and still have it recede at the same speed.
    It’s easy for someone with a little math background to understand that the correct law of addition of rapidities for parallel velocities becomes the velocity addition law when an exponential of rapidity is replaced by a linear approximation.
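
    A tiny numerical version of that remark (an illustration, in units with c = 1; the input velocities are arbitrary): for parallel velocities the rapidities w = arctanh(v) simply add, and for small velocities the exact rule is indistinguishable from the naive sum v1 + v2.

        import numpy as np

        def add_velocities(v1, v2):
            """Relativistic addition of parallel velocities (units with c = 1):
            convert to rapidities, add them, convert back."""
            return np.tanh(np.arctanh(v1) + np.arctanh(v2))

        # The same thing in closed form is (v1 + v2) / (1 + v1 * v2).
        for v1, v2 in [(1e-4, 2e-4), (0.5, 0.5), (0.9, 0.9)]:
            print(f"v1 = {v1}, v2 = {v2}:  exact = {add_velocities(v1, v2):.6f},"
                  f"  naive sum = {v1 + v2:.6f}")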

    Philosophers are committed to understanding everything in terms of the story, in terms of words. This approach can work even for relativity, whenever classical logic works. Whatever philosophers call it (“elementary reasoning”, “common sense”), they’re committed to a world view based on a denial of the essence of QM, because QM is precisely the abandonment of classical logic in favor of the more general, inevitably probabilistic, formalism, which becomes evident when one formulates logic in a mathematical way. Just as we use the math of the exponential function to explain the reason that the velocity addition law looks so obvious, we use the math of decoherence theory to explain why it is that using “elementary reasoning” is a flawed strategy.

    Let me end by recommending to you some words of Francis Bacon, which were quoted in an article by Stanley Fish in the NYT some years ago. Unfortunately, I’ve misplaced the computer file with my copy. Bacon, writing at the dawn of experimental science, complained about the ability of men to twist the meaning of words, which made getting the truth by argument impossible. He argued that the only way to get to actual truth was to do reproducible experiments, which could be done independently by different researchers, in order to assess their validity. Modern theoretical physics has another arrow in its quiver, which is mathematical rigor.
    Words, the Story of theoretical physics, are still important, but they should not be allowed to trump the solid foundations of the subject and create unjustified confusion and suspicion of error where none exists.
    It’s sad, but the architecture of our consciousness probably will not allow us to come up with an intuitive understanding of microscopic quantum processes. But mathematics gives us a very efficient way of describing it with incredible precision. To me, there’s every indication that QM is a precise and exact theory of everything, and will survive the incorporation of gravitational interactions, which has so far eluded us apart from certain mathematical models (the theory formerly known as String) with only a passing resemblance to the Real World. Attempts to force it back into the straitjacket of “simple reasoning” are misguided and have not led (and IMHO will not lead) to advances in real physics.

    There are a number of other posts I’d like to respond to, particularly in reference to the Bohm de Broglie Vigier theory and the GRW theory, but I have to admit I’m blogged out. A lot of people have written really incisive things (mostly, of course, supporting my point of view :))
    and a few others have written silly things that I’m tempted to respond to (I certainly don’t consider the things I’ve actually responded to to be silly), but I have to get back to my real research and my life. Does anyone know how to fix LaTeX files so the new arXiv.org robots will accept them?

  • Roman

    For me as an outsider, there is something circular in the position of the “probabilistic” view:
    – we can’t make sense of QM based on our intuitions of “reality” because our brains are not equipped for (or not trained by evolution for) this
    – we should believe what math is saying and not try to interpret that through our intuitions of “reality”
    – math was invented (discovered?) by the human brain so it seems that our brains are equipped for it or were trained enough for it. We come up with something because we can. If math is platonic and we are only discovering it bit by bit (so we cannot say that it is comprehensible for us – maybe there are parts of it out there that are not) there is a part of “reality” that is non-probabilistic (math) after all.
    – so it seems that we can comprehend math but not what it is saying in terms of our intuition of “reality”. Shouldn’t math be part of this intuition?

  • Doubter

    According to instrumentalism, palaeontologists talk about dinosaurs so they can understand fossils

    Perhaps David Wallace can kindly spend a few words explaining what would be different if we understood dinosaurs and earth’s long geological history to be concepts created to understand fossils, instead of being as real as what we remember in personal experience. Apart from psychological discomfort, that is.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Let me comment a bit on what David seems to be saying. I’m sure he can defend himself quite capably, but a lot of people seem to be misreading and missing the point pretty badly.

    Everyone in the room more or less agrees on the experimental predictions of quantum mechanics, and how to obtain them; the only issue in dispute is the philosophical one about whether it’s better to think of the quantum state as real/ontic/physical or simply a tool for calculating probabilities. It’s certainly legitimate to not care about this issue, but I think it’s important for physics as well as philosophy, as it helps guide us as we try to push beyond our current understanding.

    Tom’s complaints about “telling stories” and “elementary reasoning” seem to be very much beside the point. The point is that David gave us a particular piece of elementary reasoning, meant to illustrate why it is useful to think of the wave function as really existing, rather than just a crutch for calculating probabilities. It’s okay to take issue with that bit of reasoning, but not with the very idea of using reasoning to understand your theory. Neither David nor anyone else is making appeals to “common sense” or “pure reason,” so that’s a pretty transparent straw man.

    Of course quantum probabilities are probabilities. But the quantum wave function is not a probability — it’s an amplitude. David’s point (or my understanding thereof) is that the wave function serves the same role in explaining where the photon hits the plate as dinosaurs serve in explaining where fossils come from — namely, you can’t do without it. It’s a crucial part of our best explanation, and therefore deserves to be called “real” (or “physical,” if you want to be a bit more precise) by any sensible criterion. You can’t make sense of the outcome of an experiment without believing that something really goes down the different paths of the interferometer.

    There’s no important difference here in the purposes or methods of scientists and philosophers — we’re all trying to understand nature by fitting sensible models/theories/stories (whatever you want to call them) to the empirical data. Again, it’s okay not to care, but the empirical data seem to indicate that people do care, since they can’t stop talking about it (including repeatedly insisting that they don’t care).

    I completely agree that the more interesting part of Tom’s original post was the claim about the inevitability of QM (whether one agrees with that point or not). But if you start discussing that and end by saying that the wave function isn’t real and appreciating this fact answers all the interpretational puzzles of quantum mechanics, you can’t be surprised that the former discussion generates little response. :)

  • Moshe

    As I wrote in the other thread, I wish people who advocate the “realist” stance could be a little more precise on what they think this term might mean in this context. It is clear from the exchanges here that there are different versions of that term, and consequently the wave function might or might not “exist” depending on precisely what you mean by that term. Until this notion is formalized, or at least clarified, including an organized discussion on whether or not it assumes classicality, I tend to agree that this seems like an attempt to convey precise and properly formulated statements in a less precise language. That this would lead to confusion is not all that surprising.

  • Jens

    Philosophical questions of this sort are very interesting. Still, direct physical questions are perhaps more important. I have such a question, though it is somewhat unrelated: Can photons attain any frequency?

    My question isn’t really about lower and upper bounds, but let’s start there.

    Upper bound: Since the energy of a photon is E = hν and since the Universe presumably doesn’t have infinite energy, there clearly must be an upper bound on a photon’s frequency. Do we know what this bound is?

    Lower bound: I have no information to guide me here except to say that a photon with a frequency of 0 probably can’t be considered a photon. Do we have any better lower bound?

    And now to my real question. A photon can be created by an electron in an atom falling from a high energy state to a low energy state. But these states and their energy levels are discrete and therefore no matter how many types of atoms or molecules you have, the photons thus released can only cover a finite, discrete range of frequencies.

    OK, but there are other ways to make photons, such as nuclear reactions. I’m not too familiar with these reactions, so can you help me here as to whether they can produce a photon of any desired frequency?

    Lastly we have the expansion of the Universe. Photons sent off when the Universe was young have had their frequency reduced due to spacetime expansion. Does this mean these photons’ frequencies, as received here on Earth, now cover an infinite range of frequencies? I personally don’t see how.

    Here’s hoping you will answer my question! :-)

  • chemicalscum

    As a chemist I was pleased to see Tom Banks in his guest post use a chemical QM example. Working in industry sometimes I use QM calculations to try to understand chemical experiments. However I think everyone in this discussion is trying to avoid the elephant in the room:

    Tom Banks Says:
    “Many of the things one normally talks about in these discussions involve probabilities of HISTORIES rather than of observations at a fixed time. I can’t go into the details but Gell-Mann and Hartle, in a series of beautiful papers, have shown that a similar sort of interpretation of mathematical quantities in QM in terms of probabilities of histories is valid when “the histories decohere”.

    Matt Says:
    “If we take the view that a density matrice’s eigenvalues are a probability distribution over its eigenvectors, and regard those eigenvectors (which are state vectors) as real, physical possible states (just like we treat classical states underlying a classical probability distribution as real, physical possible states), then we never run into contradiction with observation.”

    Matthew F. Pusey et al. (arXiv:1111.3328v1) Say:
    “In some versions of quantum theory, on the other hand, there is no collapse of the quantum state. In this case, after a measurement takes place, the joint quantum state of the system and measuring apparatus will contain a component corresponding to each possible macroscopic measurement outcome. This is unproblematic if the quantum state merely reflects a lack of information about which outcome occurred. But if the quantum state is a physical property of the system and apparatus, it is hard to avoid the conclusion that each macroscopically different component has a direct counterpart in reality.”

    Sean Says:
    “But the quantum wave function is not a probability — it’s an amplitude. David’s point (or my understanding thereof) is that the wave function serves the same role in explaining where the photon hits the plate as dinosaurs serve in explaining where fossils come from — namely, you can’t do without it. It’s a crucial part of our best explanation, and therefore deserves to be called “real” (or “physical,” if you want to be a bit more precise) by any sensible criterion. You can’t make sense of the outcome of an experiment without believing that something really goes down the different paths of the interferometer.”

    The elephant in the room is of course that if the state vector is real then the state vector + decoherence gives rise to a set of HISTORIES (Tom Banks’ capitalization) that are all equally real or as Pusey puts it “each macroscopically different component has a direct counterpart in reality.” The Everett many-worlds interpretation is alive and kicking even if Tom Banks doesn’t like it.

  • Nei1 13ates

    I think Tim Maudlin’s comment #6 at the related “Tom Banks” thread was very apt, that loss of interference does not really explain why we don’t see superpositions of both states (I would say, messy superpositions then, not “one outcome”). IOW, to me, the degree of interference determines what *kind* of statistics we see, with their very existence being derived elsewhere. Interference does not typically determine *whether* we find statistics and single outcomes of one of the superpositions. Note the irony that preserved interference, as in the double slit, *does* produce statistics that model the interfering waves; it does not lead to literal continuing wave amplitudes or our finding one photon multiply realized all over the screen! (BTW, note also that in MWI we really are violating conservation laws. No matter the excuse why we “can’t see” (to quote that sloppy phrase) the other instantiations of a single (!) particle, MWI deviates (despite the enthusiasts’ sloppy protestations) from genuine Schroedinger evolution as soon as the sum total of mass-energy claimed by all “observers” exceeds the original value.)
    http://tyrannogenious.blogspot.com

  • Doubter

    Moshe, here’s my try. In the two-slit experiment with photons, we have no difficulty in saying that the two slits exist. Does the photon wave function exist in the same sense as whatever is providing the two slits? Or are both the photon wavefunction and whatever provides the two slits merely computational devices? If the photon wavefunction is a computational device and the two slits – ultimately composed of quantum objects – are real, how does the transition from computational device to reality take place?

  • Chris

    Matt #15: but decoherence DOESN’T turn a superposed cat into a dead or alive cat with certain probabilities, that’s wavefunction collapse you’re talking about. Decoherence turns a superposed cat into a superposed cat that we can no longer do interference experiments with.
    Without any additional ingredients the linearity of the wave equation guarantees that everything stays equally superposed forever.

    This way you end up at the many-worlds interpretation, which has been terribly mis-served by its marketing. I never liked it because of the wasteful proliferation of universes, and the arbitrary rules about when you do and don’t branch. Except, those are concerns brought on by terrible popular descriptions (not helped by its name). There’s only one universe, with all the different possibilities superposed but non-interfering (use of decoherence explains why that is). I’m still not sure I’m comfortable with it (more of an objective collapse guy) but at least, when described properly, it’s minimal and self-consistent.

  • Nei1 13ates

    Chris, are you aware of the critiques of MWI etc. such as I gave above? It disappoints me that you started off IMHO on the right track in your first paragraph, then accepted the decoherence explanation of “why we don’t see” the different possibilities together. Well, you already said it: decoherence just produces a superposed cat you can’t do interference experiments with. So what? I should still “find” both, just not the sort of pattern that constitutes evident “interference.”

    Amplitudes still combine and add up, they just wouldn’t make certain pretty distributions that prove interference in the particular case. Without some “intervention” the waves just “are” and no more diverted into statistics or isolation of components than in classical EM. And if they “aren’t waves” – then what are they in transit, making the interference?

    Either we can literally detect the amplitudes themselves, in which case we just find messier distributions after loss of interference, or: there is some breakdown by something (I think, maybe atoms “grab” energy or particles just do what they do in a probabilistic manner, once interactions get going) to get the “statistics.” Remember, the “statistics” have to be explained, they can’t go in by circular argument/definition as per the density matrix and the confused idea @15 that things are just fine as is.

  • Chris

    What would such a cat look like? You’re thinking of, like, two partially transparent cats overlaid?
    What actually happens (in this picture) is that you end up with two states of your brain, one that’s seen a live cat and one dead, superposed.
    Perhaps the mystery is supposed to be why you can’t feel that. But, on the basis that consciousness isn’t magic (no matter how much it currently seems like it), and is somehow constituted out of physical stuff, don’t you just expect to have two different thought processes/feelings overlapped but non-interacting? There’s no way to prove to yourself that you’re thinking two contradictory things, because you can’t do any interference experiment.

    Criticism of conservation laws: there’s still only one wavefunction, still correctly normalized. As more and more things happen, the weight of any one particular combination gets diluted further and further, but it’s not like this actually causes any problems.
    Besides, what if there was violation? We just made the conservation laws up, they don’t have to be fundamental.

  • Ray Gedaly

    If the concept of a holographic universe actually describes the real world, then all matter would exist as two-dimensional spread-out overlapping and interfering wave patterns. Matter would appear to be three-dimensional localized particles only when we observe it. Interesting how much this sounds like quantum behavior.

    Is there a higher dimensional analog? Could reality exist as a three-spatial-dimensional hologram, but we perceive it as four dimensional spacetime? Perhaps what we perceive as time is collapse of the multi-dimensional wave function.

    Would the problem of non-locality and spooky action at a distance also go away if matter were a spread-out wave pattern? The smaller a particle, the larger and more spread out its wave pattern would be.

    Of course I could be completely wrong.

  • Phil

    Tom Banks,

    I’m disappointed to read that you think that “elementary reasoning” is a flawed strategy, to be replaced by mathematics. As you correctly write, mathematics is just a tool. For mathematics to build a world-picture like the one described by quantum theory, you need more than just the ability to solve equations. You need to know which equations, why, and how they relate to the world you see. For that, you really have nothing BUT “elementary reasoning” to go on.

    If, however, you mean common sense or intuition, I completely agree. There is very little intuitive about quantum mechanics. But that is exactly why there is simply no way that we could have developed it without some rather uncommon reasoning about elementary notions.

  • Tom

    The recent paper by Pusey et al. is interesting, but I am afraid it is based on one more assumption than just realism. I don’t know if philosophers have a name for it but the assumption is: states exist at any given time and for any given time, there is a complete description of the state at that instant determining all future outcomes. Let me try to spell it out a bit. You specify any time of your choice t. Then, the probabilities of any outcome to the future of t can be determined based solely upon the complete set of information describing the state at t, and the probabilities are uniquely determined.

    There are retrocausal interpretations where this assumption breaks down. Some of them can also be realist. Any outcome is determined by an alternating web of forward causality and retrocausality, and history has to be considered holistically in time. In such interpretations, the wave function need not exist.

  • Matt

    Chris #25– That’s a common but erroneous assumption based on classical intuition. Sure, when a measurement apparatus (and unavoidably the larger environment) comes into contact with the cat, it all “joins” the superposition.

    But the density matrix for the cat is now diagonal in its own classical basis, with eigenvectors corresponding to alive or dead. The cat itself can be said to have one of those two states, with probabilities encoded as the eigenvalues of the density matrix, and not both.

    And the density matrix for the measurement apparatus is likewise diagonal in its classical basis, with eigenvectors corresponding to having observed a live cat or a dead cat, and magically with the same probability eigenvalues as the cat.

    And the density matrix for the local environment is likewise diagonal in its classical basis, etc.

    So who’s not diagonal in the classical basis? A very large “bubble” expanding outward at essentially the speed of light due to thermally (and thermodynamically-irreversibly) radiated photons.

    At any given instant, we can define a bubble system that has not yet decohered to a classical diagonalizing basis for its own density matrix. The state of that instantaneous bubble system is simply its own state vector, full stop. What’s the meaning of that state vector? I don’t know. It simply doesn’t have certain definite classical properties yet, until it decoheres itself a fraction of an instant later. But to say that it describes multiple well-defined classical “many worlds” is a huge and unfounded additional assumption.

    But all of this is beside the point: The cat is not the bubble! Neither is the measurement device. The cat is the cat, and the measurement device is the measurement device, and the local environment is the local environment, and when you ask what’s the state of these systems, their density matrices give you classical probability distributions over classical states.
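
    A compact toy version of this “parts versus whole” point (an invented two-qubit analogue of cat plus apparatus, nothing more): the joint state can be perfectly pure, with purity tr(rho^2) = 1, while each subsystem on its own has a diagonal, genuinely mixed density matrix whose eigenvalues are the classical-looking probabilities.

        import numpy as np

        # Invented correlated pure state:
        #   sqrt(0.3)|alive, saw-alive> + sqrt(0.7)|dead, saw-dead>,
        # stored as a 2x2 array of amplitudes: first index = cat, second = apparatus.
        joint = np.diag([np.sqrt(0.3), np.sqrt(0.7)])

        rho_whole = np.outer(joint.ravel(), joint.ravel())   # 4x4 density matrix of the whole
        rho_cat = joint @ joint.T                            # partial trace over the apparatus
        rho_app = joint.T @ joint                            # partial trace over the cat

        def purity(rho):
            return float(np.trace(rho @ rho))

        print("purity of the whole      :", round(purity(rho_whole), 4))  # 1.0 -> pure
        print("cat's density matrix     :")
        print(np.round(rho_cat, 3))                                       # diag(0.3, 0.7) -> mixed
        print("apparatus density matrix :")
        print(np.round(rho_app, 3))                                       # same eigenvalues
        print("purity of the cat alone  :", round(purity(rho_cat), 4))    # 0.58 -> mixed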

    The confusion arises when we forget which system we’re talking about. In quantum mechanics, you have to be specific. Just as space and time go from being absolute and universally-agreed-upon to relative and observer-dependent when we replace Newtonian physics with relativity, so too does ontology go from being globally-defined to being locally-defined when we go from classical mechanics to quantum mechanics.

    If you ask for the state of some other system, namely, the bubble, then you may not find that it likewise looks classical yet. But that’s not the system we’re talking about — we’re talking about the cat, or the measurement device. The bubble is the wrong system.

    When I referred to naive classical intuition, I was speaking about precisely this point. Classically, a system is just the sum of its individual parts. But in quantum mechanics, what’s true for the parts — namely, that they each have well-defined classical probability distributions — need not be true for the whole. To naively assume otherwise is to commit the classic fallacy of composition/division.

    That fallacious assumption is based on classical logic (itself based on what’s useful for our Darwinian-evolved human brains to understand), and there’s no reason whatsoever why it must be true in a quantum world. And giving up that often-unexamined assumption gives rise to no contradictions with observation or experiment. It’s just another piece of classical intuition that doesn’t reflect the fundamental nature of reality.

    When you give it up, the troubles go away, and you don’t need many-worlds. So that’s why I’m confused that people are still arguing over all of this.

  • JCM

    “It has been asserted that metaphysical speculation is a thing of the past, and that physical science has extirpated it. The discussion of the categories of existence, however, does not appear to be in danger of coming to an end in our time, and the exercise of speculation continues as fascinating to every fresh mind as it was in the days of Thales.”

    James Clerk Maxwell 1871

  • Chris

    @Matt: where on earth are you getting probabilities from? The cat+universe system _is_ still in a superposition of two states, the linearity of the equations guarantees it.
    I’m not an expert on these fancy density matrix thingumies, but it looks like you put in probabilities by hand at the start, to account for initial conditions that are only uncertain in a classical sense. The only true probabilities you can get out at the end must derive from those, the rest is just ratios of squared amplitudes, the interpretation of which is the whole point at issue.

  • Matt

    Chris #33– Where do we get probabilities from in classical mechanics? In classical mechanics, the basic objects are classical probability distributions over classical states, the latter of which we can regard as elements of a classical configuration space. That’s the axiomatic structure of classical mechanics, and everything else follows from that plus equations of motion.

    In quantum mechanics, classical probability distributions are replaced by density matrices, whose eigenvalues are now the probabilities over quantum states, the latter of which are elements of a complex vector space. That’s the axiomatic assumption of quantum mechanics, and everything else follows from that plus equations of motion.

    Now, turning to your next point, the cat+universe system is difficult to define in general — we don’t even know if there’s a well-defined, closed “universe” system in the first place. (Eternal inflation in particular makes things tricky, although Tom Banks has views on that.) But at any given instant, there is indeed a huge “bubble” system containing the cat that still has a definite state vector with all the superpositions.

    But who cares? That’s not the cat. That’s some other system, and its state is just what it is, some weird, nonclassical-looking state — with no reason to assume its elements in a classical basis are to be regarded as classical, reified “many worlds” — at least briefly until the bubble system, too, decoheres due to irreversible thermal radiation into the even-larger environment.

    The idea that the cat’s status and the bubble’s status have to have some sort of naive classical agreement is just the classical fallacy of division/composition I mentioned before. The axiomatic structure of quantum mechanics I mentioned before — a structure with a natural analogy to the axiomatic structure of classical mechanics — implies that the cat has a classical probability distribution over its two classical alive and dead states. And there are absolutely no contradictions with experiment or observation in taking that view, so why not?

    The cat is not the universe, my friend.

  • Chris

    I think I’d dispute your characterization of classical mechanics. Usually one says that the classical state follows a single sharp path through phase space. Uncertainties in the initial conditions and the introduction of probability distributions is something you can layer on top if you enjoy complications.

    I think I might understand your ontology better now (though I still don’t think I like it). It seems like it has to be fundamentally based on baking probability distributions in at the start. Layering them on top, like I would have described classical mechanics, isn’t going to work.

    We can solve the ill-definedness of the cat+universe system easily enough by climbing inside another, bigger box before we open the cat’s. Any interpretation that requires that the universe is finite/infinite, open/closed, or eternally inflating/heading for a big crunch definitely needs to mention that up-front.

    All this talk of bubbles, and my box-within-a-box sounds like Wigner’s friend. How would you describe that experiment? Does the description depend on who you ask and when?

  • Matt

    Chris #35– I think you do indeed understand my ontology better now, and yes, you don’t have to like it if you don’t want to.

    And I agree that it is fundamentally based on baking probability distributions in at the start — by design. After all, probability is going to be in quantum mechanics whether we like it or not, unlike in classical mechanics (unless we work with chaotic classical systems, or classical systems with Langevin dynamics). And deriving probability without putting it in at the start is staggeringly difficult, in large part because there presently exists no agreed-upon, rigorous definition for what probability means.

    Indeed, many disputes over the meaning of quantum mechanics are actually disputes in disguise over the meaning of probability.

    So rather than demanding that quantum mechanics somehow solve all our debates and problems with defining the meaning of probability by generating a notion of probability from scratch, it’s wiser to bake probability in from the start. And, amazingly, it works!

    The present interpretation doesn’t require the universe to be open or eternally inflating. The point is that it is agnostic about the global nature of the universe — it doesn’t depend on the universe needing to be a closed, well-defined system.

    Wigner’s friend is a great way to understand the ontology of this interpretation. Suppose Wigner is really far away from Earth — Alpha Centauri, say, which is about four light years away. His friend on Earth does the experiment on the cat. Almost instantaneously the whole Earth has decohered to the definite alive or dead result — the speed of light is really fast, after all, and it’s essentially impossible to prevent thermally-radiated photons from triggering decoherence.

    But poor Wigner, meanwhile, will have to wait at least four years before he can join the decoherence party. If he were to compute the cat’s density matrix right away, or his friend’s density matrix, or even Earth’s density matrix, then he’d find a classical mixed density matrix over classical alive-or-dead states. (We’re assuming the friend established in advance when the experiment would take place, since Wigner is four light-years away and can’t see the experiment happen in real time.)

    But if Wigner computes the density matrix of the giant instantaneous bubble system whose radius is larger than the number of light-seconds since the cat decohered, then that bubble system is still a pure state as far as the cat experiment is concerned. But so what? The bubble is not the cat. And either way, the bubble is still outside Wigner’s light cone, so it makes no observable difference what its state is. The moment the bubble is large enough to enter his own light cone (after four years have passed), he decoheres with it.

    Is it a problem that before this moment, some big bubble system outside Wigner’s light cone is still not in a classical state? Why should we care? It’s only troubling if we insist that systems and their subsystems must have a classically consistent ontology.

    The whole time Wigner can assign a definite classical ontology to the cat and even the Earth, just not to this large bubble system, at least until it reaches him. So, in that sense, the description of the cat doesn’t depend on who you ask and when.

  • Carl

    This post seems to share much of the common confusion about QM, and it all starts with one bad assumption: “the wave function collapses, and as a consequence we see classical macro states. Therefore we need to explain what “collapse” means, and what the wave function means before and after.”

    I’m not a practicing physicist, but as far as I can tell this whole line of reasoning is nonsense, because it falls at the first hurdle. Wave functions do not collapse. Ever. There’s nothing in the equation that corresponds to “collapse”, which is why we run into trouble trying to find a “measurement” that causes a “collapse”: none of these concepts are anywhere defined!

    So the question arises: why do we always find classical macro states rather than dead/alive cats? The answer is, we don’t. This question is itself founded in a bad assumption that follows from the “collapse” error. That error is the assumption that if the initial system, a decaying nucleus, is in a superposition of very distant states (Decayed or Not Decayed), then as that superposition propagates to larger and larger systems, those systems are also in equally distant states until you get to a dead/alive cat. At this point you need to invent all the philosophical baggage of measurement, collapse, probability, etc.

    What really [for some value of “real”] happens is this: You start with a system of one particle whose states are widely separated (decayed or not, left slit or right slit, …). As that particle interacts with more particles and their states become entangled, you get a system that is still a superposition of states, but less distant — the all-or-nothing of the initial state does not “infect” the larger system. That system in turn entangles with more particles, and so on until you get to something we call macroscopic, such as a cat. The cat is still in a superposition of states, but we don’t see that because it is in many states, all of which are so similar it would take many times the lifetime of the universe to distinguish them. And of course our brains are in the same situation since they are within the systems they observe.

    Incidentally, I *think* this is a lay and informal explanation (or “story”, to borrow Tom’s term) of what Tom and Chris are saying more rigorously… but I hesitate to impute my words to their opinions, so if the above is wrong, it’s my fault, not theirs!

  • Doubter

    Iodine molecules (I2) show interference in a “two-slit” experiment. That is a rather large mass (molecular mass 254) about which to not be able to say it is either here or there.

  • Chris

    @Carl: but all the equations are linear. Which means the cat you get from a superposed nucleus is exactly the cat you get from a decaying nucleus (dead) plus the cat you get from a non-decaying nucleus (alive) in some ratio (and I guess probably with a phase factor, whatever that means at macroscopic scale).

    Nonlinear equations can have the states get closer as you go up in scale, or appear to pick (mostly) one. Linear equations can’t do that, they maintain the superposition forever. Decoherence lets you prevent them ever interfering again, but the superposition is still there.

    In defending many-worlds I’m accepting the reality of that superposition. (While secretly hoping we can instead figure out how to smuggle in a non-linear component that doesn’t damage anything we know, but achieves the right appearance of collapse).

    I really don’t know what Tom is arguing. If it’s the same as Matt, then I think it starts from being much less certain that the initial superposed nucleus state “really exists”. After all, if we were to go and look at it it would be one or the other. And we know how to calculate the chances, in an admittedly slightly counterintuitive way, so just define that to be how to world works, and stop worrying about it. (Apologies for the likely gross oversimplification.)

  • steven johnson

    Prof. Banks seems to be talking about retrocausality with a straight face. Doesn’t that tell you something about the real complexities of his interpretation? Could they possibly come from trying to insist that many worlds is really different from de Broglie/Bohm from Copenhagen? It seems to me that if the mathematics gives the same results then the theories are identical, and distinguishing Everett and Bohm is as arbitrary as distinguishing Heisenberg, Schroedinger and Feynman.

    Prof. Banks seems to have some sort of fetish about the inability of the human mind to grasp the subtleties of true Reality. It seems his commitment to instrumentalism comes from the belief that the true Reality is forever hidden from apprehension of the intellect. Isn’t this good old Kantianism? It seems to me sufficient to define the Ding by what it does in its interactions and by its development in a changing system, and let the an sich and für sich fend for themselves. This may be quasi-Hegelian in approach, but reviving Kantianism seems to have a great deal to do with philosophical prejudices rather than science. Isn’t the more novel approach more likely to be fruitful?

    If an electron is fired at a slit, there is a minuscule but calculable probability that a virtual electron will take a different path. Since all electrons are identical, there is no distinguishing this virtual electron from “the” electron that was fired. The virtual electron can go through one slit and another indistinguishably virtual electron goes through another slit. Except of course there are many, many virtual electrons which will for minuscule but calculable moments affect each other by their electric charges, i.e., interfere. After a period of time, the sampling of the wave of virtual electrons by a fluorescent screen reveals the pattern. The real question is why any virtual electron’s interaction with the screen must take the form of a particle interaction rather than a wave. This is because of conservation of energy and angular momentum, I should think. Prof. Banks can say that the wave of virtual electrons is not real, more or less by definition, if he wishes, as there is no acorn-like real electron. But the whole wave of virtual electrons seems to be thoroughly described by the wave function.

    Declaring the law of the excluded middle to be invalid doesn’t seem to be classical logic at all. Even ignoring the questions of philosophical precedent, it seems that if you posit static metaphysical entities, then the law of the excluded middle is prerequisite for any rationally intelligible universe or for discourse about any conceivable universe. Aren’t we talking dialectics here? Except without any nasty materialism?

  • Matt

    Chris #39 — I very much appreciate your gracious apology for oversimplifying.

    I’m certainly not saying that I doubt the “initial superposed nucleus state ‘really exists’.” The nucleus’s initial quantum state really exists, and could be described in the decayed/undecayed basis as a superposition of decayed and undecayed states. That doesn’t in any way conflict with what I’ve been saying. When the nucleus decays and starts interacting with the cat, and then eventually the measurement device and the rest of the environment, then the nucleus’s density matrix evolves from a trivial pure state (the linear superposition of decayed and undecayed) to a nontrivial mixed density matrix with 50/50 probability eigenvalues over the decayed state and the undecayed state.

    If you want to call that “collapse”, then fine: The state of the nucleus did indeed change — it was initially in a pure-state superposition, and its final state is one of two definite states with classical probabilities 50/50.

    Indeed, if you consider the density matrix of the nucleus from the initial time to the final time, and smoothly track how it evolves in time, you can see how it smoothly evolves from the original pure-state superposition to the mixture of two definite decayed and undecayed states — the “collapse” isn’t instantaneous, and you can even predict how long the collapse should take. (You’ll see that it’s very fast — indeed, exponentially fast in the number of degrees of freedom — but, again, not instantaneous.)
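
    (A toy numerical illustration of that smooth pure-to-mixed evolution: pure dephasing with a made-up decoherence time tau, not a model of any particular experiment.)

    import numpy as np

    # Start in an equal superposition of "decayed" and "undecayed": a pure state,
    # so its density matrix has eigenvalues {1, 0}.
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    rho0 = np.outer(psi, psi.conj())

    tau = 1.0  # made-up decoherence timescale
    for t in [0.0, 0.5, 2.0, 10.0]:
        rho = rho0.copy()
        # Pure dephasing: the off-diagonal elements decay as exp(-t/tau);
        # the diagonal (the eventual 50/50 weights) is untouched.
        rho[0, 1] *= np.exp(-t / tau)
        rho[1, 0] *= np.exp(-t / tau)
        print(t, np.round(np.linalg.eigvalsh(rho), 4))  # slides smoothly from (0, 1) to (0.5, 0.5)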

    How did the nucleus go from pure to mixed under linear time evolution? Well, it wasn’t linear time evolution, because the system is open — an open system evolves according to a (generally) nonlinear equation known as the Lindblad equation (http://en.wikipedia.org/wiki/Lindblad_equation). Systems that are measured are not closed, so there’s your nonlinearity.

    And if you stop and think about it, you realize that no systems ever really evolve linearly at all, because there are no perfectly closed systems in nature — certainly no macroscopic systems, except maybe the whole universe (if it’s both well-defined and closed, which are both far from obvious). All systems really evolve according to a nonlinear Lindblad equation, with the linear Schrodinger equation just a useful approximation when the system can be approximated as closed.
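
    (For reference, the standard form of the Lindblad equation mentioned above, in units with hbar = 1, is

    \[ \frac{d\rho}{dt} = -i[H,\rho] + \sum_k \Big( L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k,\, \rho\} \Big), \]

    where the L_k operators encode the system's coupling to its environment; for example, a single L proportional to sigma_z gives the pure dephasing used in the toy sketch above.)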

    Now, again, you might argue that some huge “bubble” system (say, the whole universe if you like) enclosing everything still somehow exhibits the original nucleus (or cat) superposition, but who cares? One would only care if one demanded that the nucleus’s ontology (or the cat’s) must have a classical notion of agreement with the ontology of the bubble. But giving up that subtle assumption gives rise to no conflicts with experiment, so why keep it?

  • Chris

    But, we’re talking matters of principle, not practicality.
    If it helps, I am considering the state of the entire, well-defined, closed universe, sufficiently long after the cat experiment that everything has been in causal contact with it. I reserve the right to loosen those assumptions later :) but right now they give the clearest indication of where the problem is.

    By dividing the universe into the quantum system, which does include superpositions, and the environment, which is where you obtain the probabilities from, it seems like you’re just back to the old arbitrary Copenhagen divide between the quantum microworld and the classical macroworld.
    When I first heard about decoherence, I thought that was the mistake it was making. But without that separation it still manages to tell you why you lose the interference effects; it just can no longer reproduce a discrete probabilistic choice between two alternatives.

    It seems pretty likely we’re not going to end up agreeing (or even maybe understanding each other)…

    I would very much like someone to do Penrose’s space-based laser cantilever experiment (which I don’t have a reference for, sorry) at undoubtedly extortionate cost. If one can calculate a rate for the decoherence/”collapse” of a macroscopic oscillator, it’s certainly different from his prediction (which involves G) and probably from the predictions of other theories.


  • Baby Bones

    We do not have appropriate mental pictures of what things are really like.

    For instance, although it may make sense to think about point-like particles and individual waves in themselves, all the phenomena that we can experience for ourselves are collective. A grain of sand is point-like because we imagine it to be so and we can make it behave in a point-like way. That picture, however, breaks down on a finer scale. A water wave appears to be a collective long-range phenomenon despite it being composed of molecules whose motion is not so wavy. Whatever the components of reality are, they resolve themselves into known “collective” behaviors when we measure them. If you are not happy with me calling “point-like” collective, I could say “clumpy” instead. A particle is stuff that is really really clumpy, whereas a plane wave is the least clumpy thing we can imagine except for nothing at all. A wave packet is moderately clumpy, and it resolves itself into either form depending on the clumpiness of the measuring device.

    You could say that it is an error to draw an analogy to, on the one hand, how one picture breaks down at some scale and has to be replaced by another picture, and on the other hand, how logic itself seems to break down when it faces the measurement problem.

    But our imaginations are subject to clumpiness as well. Since humans are very clumpy things, we can’t really imagine ourselves taking both paths when the road forks. We can only fancifully imagine what it would be like to time travel or have multiple mes and the like. But acknowledging that this is a failing of our imagination doesn’t mean that it does not happen. We are way too clumpy to imagine properly what it would be like to be a wave packet.

    You might think that our restricted view on things means that we can’t get all of the qualities of the information. That may or may not be true. What it does mean is that the categories that we have for sorting phenomena are never quite satisfactory. It could be that 1) the boxes are collectively too small and we will never know what we are missing. Or it could be that 2) they are collectively too large because the idea of a wave and the idea of a particle are too complicated to apply in every sense to something like an electron.

    So in which of my pictures is the wave function real? In which is it imaginary?

  • Matt

    Chris #42– I think we’re indeed talking past each other. Let me make this all simpler.

    Ask me the state of a particular system. I compute its density matrix, and interpret the eigenvalues as probabilities over the definite eigen-state-vectors.

    That’s my basic axiom for quantum mechanics — I am not attempting to derive this axiom from anything else! If there are any other axioms, I throw those out. This is my axiom, and I want to know if it conflicts with observation or experiment in any way.

    I can then regard the actual state of the system as being on one of those eigen-state-vectors, with probability given by the corresponding eigenvalue. All of this is in perfect analogy with the classical case of a classical probability distribution over classical states.

    Now, in quantum mechanics, there may be a bigger, enclosing “super-system” whose own density matrix looks very different from that of our smaller, original system contained therein — maybe the original system has a classical density matrix over classical-looking states (alive or dead cat, for example), and the super-system is in some bizarre pure superposition state. But I don’t care, because that super-system is not the cat — indeed, the supersystem is humongous, being R light-seconds in radius after R seconds.

    Where’s the confusion? So what if the super-system is in a weird state? It’s not the cat. It’s not even Earth anymore, since Earth is a tiny fraction of a light-second in size.

    Your confusion seems to be all about the super-system being still in a superposition. I don’t see why that matters unless you decide to make it matter. In particular, I do not try to interpret in any classical sense what the still-superposed quantum state of the super-system means. I don’t say it’s a many-worlds set of classical copies of anything. It’s just what it is, some weird state that doesn’t have well-defined values of certain classical properties. But since it’s not the cat, or even Earth — both of whose density matrices describe a classical ontology with respect to the life or death of the cat — I don’t really care what the state of the super-system is.

    This is not Copenhagen. There’s no fixed Heisenberg cut dividing “small” quantum systems on one side from “big” classical systems on the other. Quantum mechanics applies to all systems here. Every system has a density matrix that we can compute, evolving according to a particular Lindblad equation, and that tells us what we need to know about that particular system’s ontology, but not about the ontology of other systems, even if those other systems enclose the system we’re focusing on.

    One thinks of quantum mechanics in this picture as a huge number of systems, all with their own local density matrices and their own resulting local ontologies, interacting with one another and decohering against one another.

    You can fix your sights on a particular system, compute its density matrix (which describes the local ontology of just that system), and then follow the evolution of that density matrix through time according to its Lindblad equation. If the system is very tiny and well-isolated, the density matrix will often look very quantum-mechanical, in the sense that its eigenvectors are often weird quantum superpositions without sharp labels corresponding to classical features, whereas if the system is fairly big and in frequent contact with a larger environment, it will almost always look very classical, in the sense that its eigenvectors remain sharp Gaussians in position and momentum space.

  • gregorylent

    “science is about the world” is wrong, and why science has become a hubris-filled fundamentalist field …

    it is merely a method, and NOT the only one, for exploring existence.

  • AI

    “Is the wave function real/physical, or is it merely a way to calculate probabilities?”

    The map is not the territory.

    The wave function is a tool used to calculate probabilities. It is real only in the sense that, as long as those probabilities correctly reproduce the results of experiments, some relevant aspect of physical reality governing those experiments is successfully captured by this abstract construct.

    The wave function is an abstract mathematical description of reality, but the fact that this description is successful doesn’t mean that physical reality is made of wave functions (just as the fact that words can be used to describe reality doesn’t mean reality is made of words).

    It also doesn’t mean that better descriptions of reality are not possible although it does constrain them.

  • Chris

    Probably my failure to see how any of this can be self-consistent stems from the absence of any definition of what the probabilities mean. There may well be philosophical dragons there, but when your world is based quite so fundamentally on them they probably need addressing.
    I thought I could describe how this works in many worlds. But I can’t convince myself the algebra is going to work to get me amplitude squared.

    What interpretation do you put on the partway decohered state? It’s not completely a probability yet. Probably none: you’ll just tell me to wait for it to sort itself out, and then it will be a probability. But as a matter of principle, it never quite exactly is, no matter how many exponentials become involved.
    In classical mechanics I can stop the experiment whenever I want, take the probabilities as they stand, split out some cases separately, continue the calculation, and combine at the end. With these half-decohered objects that won’t be true. So the eigenvalues are only the “probability” for “things” to “happen” when you wait long enough before you ask. With classical probabilities you can slice as finely as you want.

    In the EPR experiment, I’m guessing that the two observers don’t get to have definite corresponding results (they have a superposition of all the corresponding results) until their future light cones touch? And then, only for people in that overlap? Which of course, includes them by the time they meet up to compare results…

  • Matt

    Chris #48– You write “Probably my failure to see how any of this can be self-consistent stems from the absence of any definition of what the probabilities mean. There may well be philosophical dragons there, but when your world is based quite so fundamentally on them they probably need addressing.”

    I don’t know what probabilities mean, at least rigorously. Nobody does. So I simply take them for granted as primitive, fundamental entities. If there are any lingering troubles with the formulation of quantum mechanics that results, then that’s not the fault of quantum mechanics, but of our limited understanding of the meaning of probability itself.

    So in the axiomatic structure of the formulation of quantum mechanics I’ve been describing, avoiding the question of what probabilities actually mean is a feature, not a bug. It explicitly separates questions about the meaning of probability from questions about quantum mechanics.

    Without a rigorous definition of probability, it’s a fool’s errand to go looking for it in a particular formulation of quantum mechanics. The approach I’ve been describing avoids asking quantum mechanics to somehow generate a rigorous concept of probability from scratch — that’s the problem many-worlders unavoidably have. Just put probability into the theory from the beginning, and leave it for someone to come along later (a philosopher?) and define what probability means.

    You also write “What interpretation do you put on the partway decohered state? It’s not completely a probability yet. Probably none: you’ll just tell me to wait for it to sort itself out, and then it will be a probability. But as a matter of principle, it never quite exactly is, no matter how many exponentials become involved.”

    That’s not quite true. At every instant of time, the density matrix is always a Hermitian, positive semi-definite, trace-one matrix, so you can always diagonalize it, and its eigenvalues are always real, non-negative, and sum to one. So the eigenvalues always have the interpretation of probabilities, both before, during, and after decoherence. The question now is what states those are probabilities for.

    Before decoherence, of course, the system being studied has a trivial density matrix with a single, unit eigenvalue, so the system has probability one associated to that single superposition-state. But during and after decoherence, the system’s density matrix becomes nontrivial, and develops a nontrivial set of probabilities.

    In addition to the actual eigenvalues smoothly (but exponentially-quickly) changing, the eigen-state-vectors are also smoothly (but exponentially-quickly) changing. In the pre-decoherence state, the single nontrivial eigen-state-vector is a weird quantum-like superposition-state. During the brief decoherence period, the new eigen-state-vectors look less and less quantum and more and more classical, until at the end, they look exponentially close to being classical states.

    But that’s good enough, unless you want to claim that classical states are measure-zero elements of the Hilbert space. A classical state is one that looks approximately Gaussian in both position- and momentum-space, and there’s always going to be some wiggle room in that definition — plenty to accommodate the above description of the decoherence process.

  • Frank Martin DiMeglio

    Our growth and becoming other than we are — in conjunction with instantaneity — is fundamental to any truly unified or complete explanation, description, or understanding of physics. This is certainly an ONGOING/CONNECTED AND FUNDAMENTAL EXPERIENCE, as we typically continue living and growing at/from the time of our birth. So, for example, how is it that we are born “ready in advance”?

    Strange how modern physics has basically or entirely neglected this.

    It is impossible to fully and properly understand physics AND thought apart from ALL DIRECT (pertinent, related, and significant) bodily/sensory experience.

    Would anyone seriously state that the shifting and variable nature of thought is entirely separate from the forces/energy of physics (including quantum mechanical)?

  • http://users.ox.ac.uk/~mert0130 David Wallace

    Sean has said most of what I’d have said had I kept up with the discussion over the weekend. But let me make a couple of points:

    (1) On Tom Banks’ distinction between mathematics and words. Firstly, I think Sean is right that the distinction cross-classifies physics and philosophy. (And, as it happens, my professional training is mostly in theoretical physics.)

    Secondly, if you’re interested in answering questions as to what’s going on in physics, of course you’re going to use words. Tom’s original post is written largely in English, and that’s not just as a courtesy to less-mathematically-inclined readers: it’s because it isn’t possible to establish the points he wants to make without some verbal reasoning. (It can’t be, after all, because Tom, and Sean, and I all agree about all of the maths.)

    Thirdly, it’s of course possible, to a very large extent, to make progress in physics by mostly doing maths, and just appealing from time to time to tacitly-understood, pretty-reliable ideas about how the maths relates to observational reality. “Shut up and calculate” is the usual, apposite name for this approach, and it’s a perfectly sensible approach for many purposes, but to do it consistently you have to do both parts. If you stop shutting up and try to explain, in words, why other people are wrong, you can’t consistently criticise them for using verbal reasoning in their discussion.

    (2) On decoherence and density operators. Various people point out, quite correctly (though see below), that the interference phenomena that spoil a probability-based approach to quantum mechanics become negligible at the macroscopic scale due to decoherence processes. That means that you can (near enough) get away with treating macroscopic superpositions as probabilistic. But that doesn’t give you a consistent way of thinking about quantum mechanics unless you can (a) treat microscopic superpositions as probabilistic (which, as I argue in the main post, you can’t), or (b) explain why even a macroscopic-level physical state can look like a probabilistic state (which leads you to the Everett interpretation), or (c) explain just when the transition between the two occurs (which means accepting a collapse of the wavefunction).

    For what it’s worth, I agree with Carl @37 that, since there’s nothing in the equations that corresponds to collapse, we should try to avoid introducing it; so, like Sean, I’m led to (b), which (assuming we don’t add hidden variables or suchlike) basically is the Everett interpretation.

    (As a point of interest, that’s not the majority opinion in philosophy of physics: the Everett interpretation is far more popular among physicists than philosophers. The majority of philosophers do want to introduce collapse, or else do want to add hidden variables or suchlike.)

    (3) Even technically speaking, it’s not really true that we can interpret macroscopic states probabilistically and still regard the dynamics as given by the Schrodinger equation. You can do that if you evolve the system forwards in time, but you’ll get the answer wildly wrong if you evolve it backwards in time. (The decoherent-histories framework is explicitly time-directed, if you want to put it that way.) So I don’t think a consistently probabilistic reading of the quantum state of even a macroscopic system is compatible with the idea that the Schrodinger equation is fundamental, or more generally, with the idea that the fundamental equations of physics are time- (or CPT-) invariant.

  • Matt

    David Wallace (#51)–

    Another option is not to ascribe a probabilistic interpretation directly to state vectors at all — treating them instead as fundamental, irreducible states — and only ascribe a probabilistic interpretation to density matrices. It’s to say that when a system’s density matrix is nontrivial, then its eigenvalues are your probabilities.

    And, rather miraculously, that’s precisely what happens when you actually model a physical measurement process, with a measurement device and an environment. There’s no need to ascribe a second, additional layer of probabilistic interpretation directly to state vectors — at best, it’s redundant to do so. Indeed, when you attempt to do so, you run into all the problems and contradictions brought up by, among others, Pusey, Barrett, and Rudolph.

    The only remaining question is whether we need to worry that some super-big system enclosing our experiment (and enclosing the whole Earth, actually) is still carrying the un-decohered superposition, and the answer is that there’s no deep reason to care, except for classical prejudice and a version of the fallacy of composition/division. If you take a system-centric approach to ontology in quantum mechanics — what is each particular system’s density matrix telling you? — then you never need to worry about what any huge all-encompassing system looks like, let alone ascribe it a “many worlds” interpretation.

    What I don’t understand is why people aren’t turning to the extremely straightforward aforementioned interpretation: One layer of probability, manifested as density matrices, full stop. Could you please enlighten us?

  • Aaron F.

    David Wallace asks:

    To make this kind of claim is to say that the “probabilities” of quantum theory don’t obey all of the rules of probability. But in that case, what makes us think that they are probabilities?

    Matt Leifer (unintentionally) replies:

    An argument that I personally find motivating is that quantum theory can be viewed as a noncommutative generalization of classical probability theory, as was first pointed out by von Neumann. My own exposition of this idea is contained in this paper. Even if we don’t always realize it, we are always using this idea whenever we generalize a result from classical to quantum information theory. The idea is so useful, i.e. it has such great explanatory power, that it would be very puzzling if it were a mere accident, but it does appear to be just an accident in most psi-ontic interpretations of quantum theory. For example, try to think about why quantum theory should be formally a generalization of probability theory from a many-worlds point of view.

  • Mark

    This exchange is a near-perfect example of why we use mathematics to explain the world: because natural language is a piss-poor alternative.

  • http://users.ox.ac.uk/~mert0130 David Wallace

    Matt @52: in a word, entanglement.

    The actual mathematical process by which we’re obtaining density operators from pure states is by letting system A get entangled with system B and then tracing out system B. If systems A and B are spin-half particles and we prepare them in a singlet state, the probabilistic interpretation of the density operators of the separate systems isn’t available because (a) it’s indeterminate what the mixture is supposed to be; (b) more importantly, we know that measurement outcomes on the whole system can’t be represented by classical correlations.
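
    (A small numerical sketch of that tracing-out step, purely illustrative: prepare two spin-half particles in the singlet state and trace out particle B. Particle A's density operator comes out maximally mixed, with degenerate eigenvalues, so no unique mixture of pure states is singled out.)

    import numpy as np

    up = np.array([1.0, 0.0])
    down = np.array([0.0, 1.0])

    # Singlet state (|up,down> - |down,up>) / sqrt(2) of particles A and B
    singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
    rho_AB = np.outer(singlet, singlet)              # pure state of the pair

    # Partial trace over B: rho_A[i, j] = sum_k rho_AB[(i, k), (j, k)]
    rho_A = np.einsum('ikjk->ij', rho_AB.reshape(2, 2, 2, 2))

    print(rho_A)                      # 0.5 * identity: maximally mixed
    print(np.linalg.eigvalsh(rho_A))  # [0.5, 0.5]: degenerate, so "which mixture?" is underdetermined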

    Now, of course, if system A is Schrodinger’s cat, and system B is its environment, the sort of experiments that show up this problem are (to put it mildly) technically impossible to perform. So we can get away with applying a probability reading there (modulo my worry about time asymmetry previously). But now we’re back to the issue of needing to interpret the state differently at macro- and micro-scales, only this time the problem’s at the level of density operators rather than pure states.

    Aaron F@53: Matt Leifer’s argument is very interesting and elegant and I can’t properly do it justice here; the soundbite answer would be, “why think that a non-commutative generalisation of probability is still probability?” It certainly doesn’t seem to be probability if probability=relative frequency, or probability=Bayesian degree of confidence. (Matt also raises a nice philosophical problem for many-worlds: if the world isn’t really probabilistic, how come there are so many analogies between the mathematical structure of quantum theory and probability theory? I’m going to pass on that question to avoid derailing the discussion.)

    Mark@54: false dichotomy. Even papers in pure maths and theoretical physics advance their argument for the most part in natural language, albeit using plenty of words that don’t turn up in everyday usage; that’s because communication and reasoning take place in language and maths isn’t a language. (Try saying “natural language is a piss-poor alternative to mathematics” in mathematics.) However, much (most?) of what theoretical physicists use language to talk about is abstract mathematics. If the level of maths on this thread is low, it’s mostly because I (and I think others) are trying to keep the discussion accessible to a wide audience.

  • Doubter

    David Wallace wrote: Various people point out, quite correctly (though see below), that the interference phenomena that spoil a probability-based approach to quantum mechanics become negligible at the macroscopic scale due to decoherence processes.

    I believe the FM radio wave from the 93.9 FM WNYC radio station is macroscopic and thus “real” without any decoherence of its underlying photons. I believe this wave can still show interference phenomena.

  • Matt

    David Wallace #55– Thanks for responding. Some comments:

    For a pair of spin-1/2 particles, the spin-singlet state is a measure-zero phenomenon. More generally, density matrices with degenerate eigenvalues are always measure-zero in the space of all density matrices. If there’s even the slightest deviation, the degeneracy is broken and the problem goes away. Since measure-zero phenomena are essentially unrealizable in practical experiments — there is absolutely no way to get a perfect spin-singlet in any realistic experiment — there’s really no problem here.

    One can show by direct calculation that if the experimental apparatus has many degrees of freedom, then even tiny degeneracy breaking in the subject system becomes highly robust. (This is a point that David Albert misses in an appendix to his book on interpretations of quantum mechanics.)

    If accommodating measure-zero possibilities is nonetheless demanded, then the interpretation is still tenable: The density matrix doesn’t pick out any one preferred diagonalizing ontic basis, but we can just interpret that to mean that the system’s underlying ontic state can be anything in the degenerate eigenspace.
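
    (To illustrate the degeneracy-breaking point numerically, with a made-up imperfection: even a slightly impure singlet preparation hands particle A a density matrix with split eigenvalues and a definite diagonalizing basis.)

    import numpy as np

    up = np.array([1.0, 0.0])
    down = np.array([0.0, 1.0])

    # A slightly imperfect "singlet": a made-up ~1% admixture of |up,up>,
    # standing in for the unavoidable imperfection of any real preparation.
    psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2) + 0.1 * np.kron(up, up)
    psi /= np.linalg.norm(psi)

    rho_A = np.einsum('ikjk->ij', np.outer(psi, psi).reshape(2, 2, 2, 2))  # trace out particle B

    probs, basis = np.linalg.eigh(rho_A)
    print(probs)   # no longer exactly (0.5, 0.5): the degeneracy is broken
    print(basis)   # and a definite diagonalizing basis is singled out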

    Your second point, that “we know that measurement outcomes on the whole system can’t be represented by classical correlations,” is less clear to me. I think you mean to say that the apparatus+subject composite system is still in a pure state after the experiment.

    But in any realistic experiment, the environment rapidly decoheres the whole apparatus+subject composite system so that its correlations become classical. Then there’s some super-big system (including the whole Earth after a fraction of a second) that still isn’t classical with respect to the correlations, but again, as I’ve been saying, that’s not the subject system (the cat, or the electron), or the apparatus, or even the Earth.

    Asserting that the super-system must have an ontology that agrees with the ontology of all its subsystems in a classical sense is an unexamined assumption — essentially a form of the fallacy of composition/division — but dropping it doesn’t conflict with any observations or experiments. All observers would agree that the subject system and the apparatus and even the Earth are now well-described by nontrivial density matrices consistent with the experiment, but that the super-system is not.

    Certainly none of this is any weirder than pilot waves or many worlds.

    You’ve been very gracious — would you mind explaining my misunderstanding here?

    Thanks!

  • http://users.ox.ac.uk/~mert0130 David Wallace

    Doubter @56: Not all interference phenomena spoil probabilistic interpretations. The radio wave is indeed macroscopic, and indeed you can do interference with it, but that interference consists of macroscopically many identical photons each individually in a superposition, but not entangled with each other. Effectively, the result isn’t a macroscopic superposition: it’s a macroscopically determinate state, no more philosophically problematic than a wave on the lake. The kind of macroscopic superposition that would potentially cause problems for the probability interpretation would be something like a superposition of (all the macroscopically many photons over here) with (all the macroscopically many photons over there). That’s much harder to prepare without decoherence spoiling it.

    Matt @57: Thanks for an interesting response. Here’s an attempt to answer.

    I was probably a bit quick regarding indeterminacy of the density operator as a probabilistic mixture. You’re quite right about degeneracy being a non-issue, of course. Having said that, if a density operator is to be interpreted probabilistically then we can perfectly well consider probability distributions over non-orthogonal states, in which case the problem of indeterminacy recurs.

    Regarding the larger point, look at it this way: the density operators of microscopic systems (like components of singlet states) can’t in general be consistently treated as probabilistic mixtures of pure states, because we can do interference experiments to rule out that interpretation. The density operators of macroscopic systems, agreed, can be. But conceptually, it still seems that a physically-interpreted density operator (the micro sort) is a different kind of thing from a probabilistically-interpreted density operator. So how did the one turn into another? Or put another way, if the physical goings on are determined by the pure state, how did we move from a pure state at the micro level to a probabilistic mixture of pure states at the macro level? Appeal to decoherence (so goes the usual argument) won’t do here, because ultimately we can always include the environment in the process, in which case the dynamics is still unitary.

    I think you’re rejecting that last sentence. Effectively (again: I think!) you’re wanting to draw a principled distinction between open and closed systems, so that when a system (like Earth) really is open, the evolution really should be taken as giving us a probabilistically-interpretable density operator.

    Bell (and most philosophers) would reject that basically on the grounds that it’s imprecise. It relies on a transition from closed to open dynamics that is by its nature not exactly definable. If we really are seeing a change in the fundamental nature of dynamics – from deterministic to indeterministic – we’d better have a precise criterion for it. I suspect you’re not much moved by that objection.

    Let me raise some slightly different-flavoured objections (none are intended to be decisive):

    (1) it’s potentially going to cause trouble for quantum cosmology.
    (2) it’s noteworthy that (at least in the examples I’m aware of) open-system quantum equations are fairly reliably derived by embedding the system in a larger environment (an oscillator or spin bath, say), applying unitary dynamics, and then tracing the environment back out again. (Often we’ll apply the unitary process only for some infinitesimal time, but still.) So the natural reading of the maths does seem to be that the reason the open-system equation applies is that on a larger scale we’ve still got closed-system dynamics.
    (3) It gets in the way of any attempt to understand, rather than postulate, the direction of time. If all dynamics is Schrodinger dynamics, then the underlying dynamics are time-symmetric, and we can ask what additional information (e.g., a low-entropy boundary condition) breaks the symmetry. But the Lindblad equation (say) is explicitly time-irreversible.

    Finally, a quick comment on your last. You say “none of this is any weirder than pilot waves or many worlds”. I actually regard those two possibilities as in very different categories. The pilot wave theory is a modification of the mathematical structure of quantum physics. I think that’s a very bad idea, not because it’s weird but because it’s unmotivated (and because it’s very unclear how to extend it to relativistic field theory, and very clear that any such extension would violate Lorentz covariance). The many-worlds theory, despite the name (cf Chris @25) doesn’t make any alterations to the mathematics; it just takes the “states are physical” assumption and applies it to the Universe as a whole.

  • http://climatesense-norpag.comcast.com Norman

    I think Dave has drifted from science to philosophy, which probably makes for solemn discussion at Oxford high tables but has never contributed much to our empirical knowledge of the world.
    Francis Bacon says “There are also Idols formed by the intercourse and association of men with each other, which I call Idols of the Market-place, on account of the commerce and consort of men there. For it is by discourse that men associate; and words are imposed according to the apprehension of the vulgar. And therefore the ill and unfit choice of words wonderfully obstructs the understanding. Nor do the definitions or explanations wherewith in some things learned men are wont to guard and defend themselves, by any means set the matter right. But words plainly force and overrule the understanding, and throw all into confusion, and lead men away into numberless empty controversies and idle fancies.”

  • steven johnson

    ” The pilot wave theory is a modification of the mathematical structure of quantum physics. I think that’s a very bad idea, not because it’s weird but because it’s unmotivated (and because it’s very unclear how to extend it to relativistic field theory, and very clear that any such extension would violate Lorentz covariance). The many-worlds theory, despite the name (cf Chris @25) doesn’t make any alterations to the mathematics; it just takes the “states are physical” assumption and applies it to the Universe as a whole.”

    I’m pretty sure Prof. Valentini could cite some motivations for pilot wave theory. Anyhow, given that de Broglie/Bohm reproduces so many results of QM, is it truly well established that pilot wave theory is genuinely different mathematics, rather than a different formulation? Matrix mechanics, wave mechanics, path integrals are all equivalent. What theorem(s) show that pilot waves aren’t? Not to be a bug on a plate, but this proof would be interesting to know about.

    As for the objection that the extension of pilot waves would violate Lorentz invariance, it seems that in one respect the division between microscopic and macroscopic could reasonably be interpreted as a transition between different inertial frames. In this sense, is it not possible that orthodox quantum physics also is not Lorentz invariant under all conditions? The whole discussion is about how quantum physics, with its superpositions and unidirectional time, doesn’t seem to apply in the everyday world. It might be an odd way of thinking, but reading the loss of Lorentz covariance as the reason uniquely quantum phenomena disappear on a macroscopic level could be useful, possibly?

    Despite the enormous experimental evidence for QM and the consistency of the mathematics, quantum physics cannot model the things we see. This may not technically be a paradox but it certainly provokes these endless discussions. Certainly it is why everyday speech calls quantum physics paradoxical.

    Many worlds seems to scoff at the conservation of energy. I tended to think that was in the math too. Where did it go, and how was its disappearance motivated? Perhaps the MWI means that the observable universe may be defined as those spacetime events that obey this symmetry and all the others are in principle unobservable. But where does that come from in the math?

    It’s not that I can really judge between pilot wave and many worlds. But it seems clear that quantum weirdness just does not get abolished in any framework. Pilot wave lumps it all into the pilot wave, while many worlds lumps it into the multiverse. The mathematics are consistent. But if we are going to incorporate quantum physics into our model of the universe, an interpretation, even if its weirdness requires years to get used to, needs to be consistent with the everyday world. Or it’s not a scientific explanation. Or so I think. Relativistic weirdness was gradually assimilated, just like fields or the distinction between momentum and KE. If we can figure out the right kind of quantum weirdness we can get used to it.

  • SupremeFunky

    Before an electron is observed, what is it that exists?

  • Matt

    David Wallace #58– Once again, I appreciate your thoughtful response. Let me address your points in turn.

    You write “if a density operator is to be interpreted probabilistically then we can perfectly well consider probability distributions over non-orthogonal states, in which case the problem of indeterminacy recurs.”

    That seems like a significant logical leap, and an unexamined assumption. Why can we perfectly well consider probability distributions over non-orthogonal states? Even classically, a probability distribution is only sensible when it’s defined over a set of mutually exclusive possibilities — it’s nonsense to say that you’re considering a probability distribution for a coin of 50%=heads, 50%=metal. Analogously, the inability to define a density matrix whose eigenvectors are spin-z and spin-x is just telling you that spin-z and spin-x are not mutually exclusive in quantum mechanics.

    Indeed, suppose you forget this and try to define a classical probability distribution over non-orthogonal spin-z and spin-x states, respectively |up> and |right>. If you forget about density matrices and just try to demand that they have classical respective probabilities p and (1-p), then if you try to compute any expectation values by manually weighting the matrix elements by p and (1-p), you find that the answer is equivalent to tracing the corresponding observable against the density matrix ρ obtained by naively adding p|up><up| + (1-p)|right><right|. So all predictions and all observed experimental results are equivalent to assuming that this particular ρ is the density matrix of the system, but if we diagonalize ρ we see that its true ontic basis is something else entirely.
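
    (A quick numerical check of that claim, with toy states and an arbitrary observable: weighting expectation values by p and (1-p) over the non-orthogonal states reproduces Tr(rho O) for the naively summed rho, and diagonalizing that rho returns an orthogonal basis that is neither |up> nor |right>.)

    import numpy as np

    up = np.array([1.0, 0.0])
    right = np.array([1.0, 1.0]) / np.sqrt(2)   # spin-x eigenstate, not orthogonal to |up>
    p = 0.3

    # The "naive" mixture over non-orthogonal states
    rho = p * np.outer(up, up) + (1 - p) * np.outer(right, right)

    # Any observable O gives the same predictions either way:
    O = np.array([[0.2, 0.7], [0.7, -0.4]])     # arbitrary Hermitian observable
    manual = p * (up @ O @ up) + (1 - p) * (right @ O @ right)
    print(np.isclose(manual, np.trace(rho @ O)))  # True

    # But rho's own eigen-decomposition picks out a different, orthogonal basis:
    probs, basis = np.linalg.eigh(rho)
    print(probs)   # not (0.3, 0.7)
    print(basis)   # columns are orthogonal states, neither |up> nor |right>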

    So quantum mechanics is telling you that certain probability distributions are as senseless as, like I said, 50%=heads, 50%=metal, because the possibilities aren't mutually exclusive. When you try to go ahead anyway, quantum mechanics calls you out for picking an invalid probability scheme and forces a different probability distribution on you — talk about a helpful, responsible theory!

    Next you write "the density operators of microscopic systems (like components of singlet states) can’t in general be consistently treated as probabilistic mixtures of pure states, because we can do interference experiments to rule out that interpretation." That's only true when we enlarge our scope beyond the system whose density matrix we're considering, but then we're cheating because we're really supposed to use that larger system's density matrix.

    In other words, say I have a couple of entangled particles. If I compute one of their density matrices, then that tells me about the ontology at that moment for that one particle. Its dynamics will be nonlinear, because the system is open to the interactions of the other particles, but if I write down the appropriate Lindblad equation, then I can in principle compute the particle's density matrix at all times, and then I can predict its ontology and all observations about it at all times.

    But if I enlarge my scope to include the other particles, then the one particle's density matrix isn't enough, but it's obvious why: Now I'm asking about a different system — a larger system — and so I need to use that larger system's density matrix. The larger system's density matrix may look quite different, and have a different-looking ontology, but it's a different system, so that's okay — insisting that the ontologies be classically agreeable is just a naive classical assumption.

    For macroscopic systems, that weirdness is much less likely — decoherence propagates outward at essentially the speed of light due to irreversible thermal radiation, so systems and their slightly-larger supersystems have a common ontology.

    But in all situations, we can consistently say that the ontology of a specific system is determined by that system's — and only that system's — own density matrix, with epistemic probabilities dictated by the density matrix's eigenvalues. The weirdness (that slightly enlarging the system can make the ontology look suddenly very different) occurs more often for tiny systems than for big ones, but it doesn't alter the axiomatic structure laid out in this paragraph.

    You write that "the natural reading of the maths does seem to be that the reason the open-system equation applies is that on a larger scale we’ve still got closed-system dynamics." But we never have a truly closed system, at least in a universe that's open (in the Einstein sense) — there's always irreversible thermal radiation outward. By localizing questions of ontology — look at a particular system's density matrix to determine its ontology — we eliminate the need to worry about making this interpretation depend on the question of whether the whole universe is an open or closed system.

    Indeed, because there aren't really any actually, perfectly closed systems in Nature, there are only Lindblad equations — the Schrodinger equation is always just an approximation. But that doesn't mean that a unique direction of time is automatically picked out for the universe — there's still the question of why all actual Lindblad equations for all large physical systems in our universe seem to line up, describing an increase in entropy in the same time direction. So the arrow of time question is not trivially resolved at all — it's just as open and interesting a question as when you assume that the Schrodinger equation governs everything.

    I'm not a Bohmian, but I don't agree with your statement that Bohmian mechanics necessarily violates Lorentz invariance. If you take the wave-functional of a quantum field, Psi(phi(x)), and write down its "Schrodinger equation", then taking the polar decomposition of Psi yields a pair of Bohm dynamical equations as before, and it's all perfectly relativistically invariant. Now the degrees of freedom aren't particle positions, of course, but field values, but it all works fine, at least for bosons.
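
    (For concreteness, the familiar nonrelativistic single-particle version of that polar decomposition; the field-theory case described above is structurally analogous. Writing psi = R exp(iS/hbar) in the Schrodinger equation and separating real and imaginary parts gives

    \[ \frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} = 0, \qquad \frac{\partial R^2}{\partial t} + \nabla\cdot\!\left(R^2\,\frac{\nabla S}{m}\right) = 0, \]

    a Hamilton-Jacobi-like equation with an extra "quantum potential" term, plus a continuity equation for R^2: the pair of Bohm dynamical equations referred to above.)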

    The trouble is fermions — a fermionic field doesn't have ordinary-number classical values, but Grassmann anticommuting values. I've never seen a convincing way around this problem.

    Your comments are greatly appreciated.


  • Charon

    As a working scientist who is only slightly educated about philosophy… that seems like a piss-poor definition of instrumentalism. We don’t introduce the idea of an electron to understand one experiment, we introduce the idea of an electron to understand all relevant experiments – those relating to atomic structure (spectroscopy), those relating to electricity, those relating to synchrotron radiation, those relating to chemical reactions, etc. From Occam to Newton to now, no one has ever thought it was a good idea to introduce a concept to explain one thing.

    You can wonder if a concept that explains all observations is “real” or not, but at that point you’re debating definitions of words that have no impact, as far as I can tell. In addition, most physicists understand these things are just approximations, so the answer to the question “is an electron real” is 1) probably not, and 2) who cares, anyway?

    There are a host of relevant issues I’m ignorant of, I’m sure. And I’m glad someone is working on the philosophy of QM – scientists are indeed generally content to just ignore such thorny problems. But your presenting a ridiculous straw man of instrumentalism doesn’t make me trust you…

  • TimG

    @Charon, one experiment or many isn’t relevant to Wallace’s criticism of instrumentalism. He’s saying the proposed elements of reality (be they electrons or wave functions or whatever) aren’t just a convenient formalism for predicting the results of [any number of] experiments. Rather, the whole *point* of the experiments is to probe the nature of reality.

    The point of scientific experimentation is to learn about the world, not just to learn what the result of a particular experiment will be. To illustrate this: If you had a magic oracle that could tell you what the result of any experiment would be, but you had no underlying theoretical understanding of why those experiments gave those results, you wouldn’t just declare science complete. (At least, I should hope not.)

    If the wave function isn’t real, but is just a calculational tool, then it’s no better than an oracle. Sure, you know how the inputs were manipulated to get the outputs, but if it doesn’t describe the underlying elements of reality, then the question of why its predictions hold true is an open one.

    Regarding electrons being an approximation, sure they may be, but an approximation to what? Likewise, if the wave function isn’t real, what is? You can’t tell me scientists don’t care what the underlying nature of reality is. Until somebody mentions quantum mechanics, everyone admits they care very much. As Wallace said, particle physicists want to know if there’s a Higgs boson, cosmologists want to know what dark energy is, etc. They don’t just want a systematic but inexplicable procedure for predicting what happens when you smash protons together at high speed, or point telescopes at distant galaxies, or perform whatever other relevant experiments there may be.

    “Shut up and calculate” is a cop-out. It’s the equivalent of a paleontologist saying, “As long as we know that pretending dinosaurs existed and extrapolating from there will lead us to correct conclusions about the fossil record (or whatever else paleontologists study), then we don’t care if dinosaurs actually existed or not. That’s a question for *philosophers*!”

  • TimG

    In short, “What is the nature of the physical world?” is a scientific question. In fact, it’s the essential scientific question, of which all other such questions are just refinements. Answering that question is the fundamental motivation for scientific experimentation.

    Of course any proposed description of the physical world should be useful for predicting the results of experiments (otherwise it’s untestable), but that doesn’t mean it’s *just* a tool for predicting the results of experiments.

  • WTW

    With all the talk about decoherence and expanding bubbles of decoherence due to thermal radiation, aren’t we forgetting the crucial point about the von Neumann/Dirac definitions of QM? At the instant of a “measurement” (or observation, whatever that is) the quantum state instantaneously jumps to an Eigenstate – non-locally and super-luminally. Experiment after experiment has confirmed this uncomfortable fact. We immediately go from the nice linear, local progression of amplitudes into a non-linear, non-local realm and then back again, whereupon the linear evolution of states resumes. This isn’t “consistent mathematics”. It’s an inconsistent step in an algorithm that we follow that turns out to give correct answers, an internal inconsistency within the basic framework of QM. And it’s a major reason why relativity and QM are inconsistent at their core. (If Wigner’s friend on Alpha Centauri had a system that was still entangled with the system on Earth, it wouldn’t take 4+ light-years for a measurement of one to affect the other.)

    We’ve been just “shutting up and calculating” for about 80 years now, just executing the algorithm without being able to explain how or why it works, even as the math was expanded and extrapolated to our current QM field theories. There is no deep “mathematical logic” behind it; it’s just a procedure we’ve learned to follow. So let’s not forget that this is still what we’re doing — just following the algorithm without understanding it. And keep trying to make the huge intellectual leap this understanding seems to require.

    On a related note: see the papers by David Malament (1996) and Hans Halvorson & Rob Clifton, among others, showing that it is impossible to have a relativistic field theory of localizable “particles”. So, we should also be careful about what we ascribe to the “wave functions” that describe the events we describe as “particle” detections, since we don’t really know what those “particles” are, either.

  • WTW

    (Comment Cont’d)
    Having said that, many recent experiments (such as precise timings of electron state transitions in molecules, and the incremental manipulation of states in super-cold atoms) seem to invalidate the Copenhagen “interpretation” of the QM realm as being unknowable (The Copenhagen interpretation was actually multiple interpretations that changed over time, and was intentionally never very clear). Perhaps as our technological prowess continues to deepen, we can discover some keys to the puzzle that have so far eluded us. Whether our carefully developed mathematical logic (and intuitions) will continue to apply to the physical world remains to be seen. Past appeals to an underlying order based on circles or pentagrams never quite worked out, despite their inherent logic and beauty, but we managed to find other approaches. We may be at another of those transition points — uncomfortable and frustrating while it’s happening, but necessary to be able to move on. With the flood of new data now coming to us, it would seem premature to state that it is beyond our ability to comprehend and to correct our past misconceptions. But to do so, we need to be honest about just how fundamental those misconceptions really are. Learning how to accurately state the problems will be a crucial step in resolving them.

    P.S. – Link to Malament paper: http://www.socsci.uci.edu/~dmalamen/bio/papers/InDefenseofDogma.pdf

  • Matt

    WTW #67, #68

    You write “With all the talk about decoherence and expanding bubbles of decoherence due to thermal radiation, aren’t we forgetting the crucial point about the von Neumann/Dirac definitions of QM? At the instant of a ‘measurement’ (or observation, whatever that is) the quantum state instantaneously jumps to an Eigenstate – non-locally and super-luminally. Experiment after experiment has confirmed this uncomfortable fact. We immediately go from the nice linear, local progression of amplitudes into a non-linear, non-local realm and then back again, whereupon the linear evolution of states resumes. This isn’t ‘consistent mathematics’. It’s an inconsistent step in an algorithm that we follow that turns out to give correct answers, an internal inconsistency within the basic framework of QM.”

    Your statements indicate some misunderstandings about decoherence and density matrices — there’s no more “inconsistent mathematics” here. The system is open during a measurement, and open systems evolve nonlinearly according to the appropriate Lindblad equation; you can show that this causes precisely the sort of hyper-fast nonlinear evolution that we expect from a measurement. You can even compute how long the “collapse” takes to happen.
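
    As a rough illustration of that last claim, here is a toy estimate of my own (the rate is a made-up number, not anything from the papers under discussion): for pure dephasing at rate gamma, the off-diagonal "coherence" terms of the density matrix decay like exp(-2*gamma*t), so the effective "collapse" time is of order 1/gamma; for macroscopic superpositions the environmentally induced rate is enormous, which is why the jump looks instantaneous.

        # Toy "collapse time" estimate: damp a qubit's off-diagonal element under
        # pure dephasing and report when it falls below 1% of its initial value.
        # gamma is an illustrative made-up figure.
        import numpy as np

        gamma = 1.0e9                                   # dephasing rate, 1/s
        rho = np.array([[0.5, 0.5],
                        [0.5, 0.5]], dtype=complex)     # pure |+> state

        dt = 1.0e-12                                    # time step, s
        t = 0.0
        threshold = 0.01 * abs(rho[0, 1])
        while abs(rho[0, 1]) > threshold:
            # pure dephasing: d(rho_01)/dt = -2*gamma*rho_01, diagonal untouched
            rho[0, 1] += -2.0 * gamma * rho[0, 1] * dt
            rho[1, 0] = np.conj(rho[0, 1])
            t += dt

        print(f"coherence below 1% after t ~ {t:.2e} s "
              f"(analytic estimate ln(100)/(2*gamma) ~ {np.log(100) / (2 * gamma):.2e} s)")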

    Citing the “von Neumann/Dirac” definitions as evidence that quantum mechanics is inconsistent is like citing Aristotle as evidence that biology is inconsistent — the subject has changed over the intervening eons.

  • ed

    “If you had a magic oracle that could tell you what the result of any experiment would be, but you had no underlying theoretical understanding of why those experiments gave those results, you wouldn’t just declare science complete.”

    No, but not because of some fetish for a “theoretical understanding”, but because such a magic oracle would be of little help for designing novel experiments/instruments/gadgets/whatnot.

  • Sam

    I’m a physicist working on quantum computing, and I think this paper is garbage. I’ll remind everyone that this is not a peer-reviewed paper. It seems that they are trying to put forth another test for hidden variable theories. I’m very surprised that these people don’t mention anything about John Bell in the paper. The Bell inequalities (circa 1964) give conditions that will tell you definitively whether or not these hidden variables exist, and they have been experimentally confirmed in the 1970s. Then the 1980s. Then the 1990s. Bell’s inequalities are basically the first proof any group must show that they created an entangled state; they are accepted as fact and put the EPR paradox to rest: quantum mechanics is a complete theory, and no, you can’t think of quantum mechanical objects as classical objects. Sure, wavefunctions are not just statistical tools; I don’t think anyone who has done any substantial quantum mechanics thinks that anymore, especially not any lab rats in the AMO field. But they sure aren’t ‘physical’ in the sense that you can pick up a wavefunction and look at it. You need a projection basis, a measurement. The eigenvalues that describe the state describe the physical readouts that we can perform. This paper tries to create a non-entangled state and read out in an entangled basis. It all sounds like a sleight-of-hand trick to me; I wouldn’t give the paper too much attention. It actually looks pretty similar to this paper: http://prl.aps.org/abstract/PRL/v107/i17/e170404 -> two uncoupled states (one pure, one a superposition) that look like an entangled state.
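
    For readers who haven’t seen it spelled out, here is a small numerical check of the CHSH form of Bell’s inequality (a standard textbook calculation, nothing to do with the PBR construction itself): local hidden-variable models keep the CHSH combination at or below 2, while the singlet state reaches 2*sqrt(2).

        # CHSH value for the two-qubit singlet, with measurements in the x-z plane.
        # Local hidden-variable models satisfy |S| <= 2; the singlet gives 2*sqrt(2).
        import numpy as np

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        def spin(theta):
            # spin observable along a direction at angle theta in the x-z plane
            return np.cos(theta) * sz + np.sin(theta) * sx

        psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

        def corr(a, b):
            # correlation <psi| spin(a) (x) spin(b) |psi>; equals -cos(a - b) for the singlet
            op = np.kron(spin(a), spin(b))
            return float(np.real(psi.conj() @ op @ psi))

        a1, a2 = 0.0, np.pi / 2            # Alice's two settings
        b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

        S = abs(corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2))
        print(f"CHSH |S| = {S:.4f}  (classical bound 2, quantum maximum {2 * np.sqrt(2):.4f})")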

  • Bill Clack

    Think of light as an oscillating soap bubble of EM radiation which can “pop” once on one detector only. Think of a photon as the smallest piece of the bubble allowed using the Planck constant and frequency. The concept of photon is useful to simplify maths, but can lead to a misunderstanding of experimental results. The photon has no meaning except as a mathematical tool of QED. The soap bubble model resolves the paradoxes found in polarization experiments, laser beam splitting and all the rest. Bell’s theorem cannot be applied to a single 3D object to rule out hidden variables. One final observation. Oscillating EM bubbles have a small amount of mass. The so-called dark matter is light itself.
