Quantum Hyperion

By Sean Carroll | October 23, 2008 7:51 pm

One of the annoying/fascinating things about quantum mechanics is the fact that the world doesn’t seem to be quantum-mechanical. When you look at something, it seems to have a location, not a superposition of all possible locations; when it travels from one place to another, it seems to take a path, not a sum over all paths. This frustration was expressed by no less a person than Albert Einstein, quoted by Abraham Pais, quoted in turn by David Mermin in a lovely article entitled “Is the Moon There When Nobody Looks?”:

I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I looked at it.

The conventional quantum-mechanical answer would be “Sure, the moon exists when you’re not looking at it. But there is no such thing as ‘the position of the moon’ when you are not looking at it.”

Nevertheless, astronomers over the centuries have done a pretty good job predicting eclipses as if there really were something called ‘the position of the moon,’ even when nobody (as far as we know) was looking at it. There is a conventional quantum-mechanical explanation for this, as well: the correspondence principle, which states that the predictions of quantum mechanics in the limit of a very large number of particles (a macroscopic body) approach those of classical Newtonian mechanics. This is one of those vague but invaluable rules of thumb that was formulated by Niels Bohr back in the salad days of quantum mechanics. If it sounds a little hand-wavy, that’s because it is.

The vagueness of the correspondence principle prods a careful physicist into formulating a more precise version, or perhaps coming up with counterexamples. And indeed, counterexamples exist: namely, when the classical predictions for the system in question are chaotic. In chaotic systems, tiny differences in initial conditions grow into substantial differences in the ultimate evolution. It shouldn’t come as any surprise, then, that it is hard to map the predictions for classically chaotic systems onto average values of predictions for quantum observables. Essentially, tiny quantum uncertainties in the state of a chaotic system grow into large quantum uncertainties before too long, and the system is no longer accurately described by a classical limit, even if there are large numbers of particles.
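This sensitive dependence on initial conditions is easy to see numerically. A minimal sketch (my own toy example, not anything from the papers discussed here): iterate the chaotic logistic map from two starting points that differ by one part in a billion and watch the trajectories part ways.

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map x -> r*x*(1-x) is chaotic at r = 4.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)   # initial conditions differ by 1e-9

gaps = [abs(x - y) for x, y in zip(a, b)]
print(gaps[5])     # still tiny: the trajectories agree early on
print(max(gaps))   # order one: within a few dozen steps they are unrelated
```

The gap grows roughly exponentially until it saturates at the size of the attractor itself, which is exactly the behavior that frustrates any attempt to match long-time classical predictions against quantum expectation values.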

Some years ago, Wojciech Zurek and Juan Pablo Paz described a particularly interesting real-world example of such a system: Hyperion, a moon of Saturn that features an irregular shape and a spongy surface texture.

The orbit of Hyperion around Saturn is fairly predictable; happily, even for lumpy moons, the center of mass follows a smooth path. But the orientation of Hyperion, it turns out, is chaotic — the moon tumbles unpredictably as it orbits, as measured by Voyager 2 as well as Earth-based telescopes. Its orbit is highly elliptical, and resonates with the orbit of Titan, which exerts a torque on its axis. If you knew Hyperion’s orientation fairly precisely at some time, it would be completely unpredictable within a month or so (the Lyapunov exponent is about 40 days). More poetically, if you lived there, you wouldn’t be able to predict when the Sun would next rise.

So — is Hyperion oriented when nobody looks? Zurek and Paz calculate (not recently — this is fun, not breaking news) that if Hyperion were isolated from the rest of the universe, it would evolve into a non-localized quantum state over a period of about 20 years. It’s an impressive example of quantum uncertainty on a macroscopic scale.

Except that Hyperion is not isolated from the rest of the universe. If nothing else, it’s constantly bombarded by photons from the Sun, as well as from the rest of the universe. And those photons have their own quantum states, and when they bounce off Hyperion the states become entangled. But there’s no way to keep track of the states of all those photons after they interact and go their merry way. So when you speak about “the quantum state of Hyperion,” you really mean the state we would get by averaging over all the possible states of the photons we didn’t keep track of. And that averaging process — considering the state of a certain quantum system when we haven’t kept track of the states of the many other systems with which it is entangled — leads to decoherence. Roughly speaking, the photons bouncing off of Hyperion act like a series of many little “observations of the wavefunction,” collapsing it into a state of definite orientation.

So, in the real world, not only does this particular moon (of Saturn) exist when we’re not looking, it’s also in a pretty well-defined orientation — even if, in a simple model that excludes the rest of the universe, its wave function would be all spread out after only 20 years of evolution. As Zurek and Paz conclude, “Decoherence caused by the environment … is not a subterfuge of a theorist, but a fact of life.” (As if one could sensibly distinguish between the two.)

Update: Scientific American has been nice enough to publicly post a feature by Martin Gutzwiller on quantum chaos. Thanks due to George Musser.

CATEGORIZED UNDER: Science
  • http://stereosoup.blogspot.com Harrison

    “One of the annoying/fascinating things about quantum mechanics is the fact the world doesn’t seem to be quantum-mechanical. When you look at something, it seems to have a location, not a superposition of all possible locations; when it travels from one place to another, it seems to take a path, not a sum over all paths.”

    So, a fun idea to toy around with is that how things “seem” and “look” are just constructs of the clump of neurons in your head. We take sensory input from the world around us and interpret it in a way that allows us to avoid predators and seek prey. The mind is a great self-deceiver, however, so much so that the interpretation is incredibly difficult to separate from reality for most people. Consider the simple example of color. Nothing in this world has “color”; rather, our brains assign colors to surfaces that reflect certain wavelengths of light so that we can differentiate objects in our environment.

    So to get back to your point, it matters little if nothing “seems” to operate according to quantum mechanics on a macro scale. Just because we perceive an object to have a discrete location and orientation does not make it so, it’s just our brain’s cunningly deceptive interpretation of quantum mechanics.

  • http://www.dorianallworthy.com daisy rose

    We see color through atmosphere – there we see perspective and gradations in tone – medley – warm and cool color – everything is always changing – even your feelings – your perceptions !

  • George Musser

    Hi Sean,

    I think I’m missing something. How is classical chaos a potential counterexample to the correspondence principle? The principle holds that quantum mechanics -> classical mechanics in a particular limit, which seems to be separate from the question of whether CM -> chaos.

    Wouldn’t a potential counterexample be some strongly quantum but still large-N system, such as a Bose-Einstein condensate or the universe during the Planck epoch?

    George

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    The correspondence principle says that a large-N system, prepared in a quantum state localized around some classical values, should obey the classical equations of motion corresponding to those values. But a classically chaotic system does not; the necessary quantum uncertainty implies that you are very likely to find the system far away from the classical point you would have predicted.

  • Alex

    Why?

  • chemicalscum

    This commentary by Todd Brun elaborates this problem:

    http://almaak.usc.edu/~tbrun/Data/topics.html

    The field of quantum chaos, as the term is usually applied, is mainly limited to the case of Hamiltonian systems: systems which conserve energy and are chaotic in the classical limit. Dissipative chaos, which is equally important classically and displays rather different behavior, is comparatively little studied quantum mechanically.

    I have looked at a class of chaotic models using the formalisms of Decoherent Histories and Quantum Trajectories. Classically, a dissipative chaotic system tends towards a strange attractor with a fractal structure; that is, the attractor exhibits substructure at all length scales.

    Quantum mechanically this is impossible, as structure on a scale smaller than Planck’s constant has no meaning. As one approaches the classical limit, more and more layers of structure appear; but there is always a limiting scale, at which the existence of quantum uncertainty and noise from the dissipative environment blurs out the fractal.

    Because of this phenomenon, quantities used to characterize classical chaos (such as Lyapunov exponents and fractal dimension) are not well-defined for the equivalent quantum systems. It is possible, however, that extensions of these concepts may still be useful. One useful proposal, due to Schack and Caves, is hypersensitivity to perturbations. I am currently collaborating on studies of this in quantum computers.

  • andy.s

    John Baez also has a discussion of this in week 223 of This Week’s Finds.

    We live in such a weird f’ing universe.

  • kletter

    The atmosphere is quantum-mechanical, as is the sunlight, and the combination creates beautiful sunsets and similar natural spectacles. Or, you could say that solar photons of given wavelengths interact with molecules and atoms that will be excited to a different quantum state, with the short wavelengths tending to interact more strongly with small molecules, which is why the sunsets consist of brilliant reds, yellows, and oranges… which we are only able to sense due to quantum excitations of sheets of photo-sensitive molecules on the backs of our eyeballs.

  • http://atdotde.blogspot.com Robert

    Alex, if you are looking for a more technical answer: in a chaotic system, different paths leading to very different orientations do in fact have very similar actions (the integral of the Lagrangian) measured in multiples of h-bar. But this is what appears in the exponent in the path integral, and therefore both configurations contribute comparable amounts.

    However, let me give you one warning: Imagine that we could actually isolate Hyperion from all the photons for quite some time so that no decoherence happens and then you were allowed to look at it as the first person to observe the macroscopic quantum state. What would you expect to see? If you expect a blurry image of a “superposition of many orientations” then I have bad news for you: It would look as classical as always. You couldn’t tell the difference!

    You will always observe some eigenstate of the orientation operator. It’s just that if you repeated this experiment you would observe some interference pattern in the distribution of observed orientations. This is just like electrons in the double-slit experiment: each individual electron hits the screen at one specific position. It’s only their number distribution that shows the interference.

    If you like, what happens is that you entangle the state of your brain and the orientation of Hyperion. Considering then only your brain, its state is a classical probability distribution over many possible orientations.

  • ObsessiveMathsFreak

    Chaotic systems have nothing to do with quantum effects. Purely classical chaotic systems are not uncertain. They are deterministic. The chaos in these systems refers to the long-term sensitivity of the system to the initial conditions, but given a set of fixed initial conditions, there is no uncertainty (aside, of course, from rounding and other numerical errors, which, by the nature of the chaotic system, cause solutions to diverge).

  • Jason Dick

    ObsessiveMathsFreak,

    Chaotic systems have nothing to do with quantum effects. Purely classical chaotic systems are not uncertain. They are deterministic. The chaos in these systems refers to the long-term sensitivity of the system to the initial conditions, but given a set of fixed initial conditions, there is no uncertainty (aside, of course, from rounding and other numerical errors, which, by the nature of the chaotic system, cause solutions to diverge).

    The problem is that the quantum effects prevent those initial conditions from being purely classical ones. The very slight deviations from classicality, when combined with the chaotic nature of the system, can produce macroscopic effects if the system doesn’t interact with other systems in the interim.

  • James

    This does seem rather to hark back to the “blogging heads” discussion of QM foundations that Sean and David Albert posted not so long ago. In my judgement, there was no consensus of opinion either within the debate or the subsequent comments. I do not think this problem is anywhere close to being solved.

    We seem to experience a classical illusion on top of a fundamentally quantum world. The trendy word for this is “decoherence,” but I have never heard a coherent explanation of what this is.

    Many have tried – be it “many-worlds” interpretations, gravitational involvement, “Copenhagen”-style pseudo-philosophy, “hidden variables”, or many others, but I’m not convinced by any of them.

  • jason green

    Isn’t 40 days the Lyapunov time? Lyapunov exponents have units of 1/time.

    The counterexample only holds if the thermodynamic limit (large-N) exists for the distribution of Lyapunov exponents.

  • George Musser

    Re #4, #8, #10: From Sean’s, Robert’s, and Jason’s remarks, I take it that the problem is the quantum fluctuations. I can’t just take an expectation value and plug it into the classical equations because those equations are so sensitive to initial conditions. If so, wouldn’t thermal fluctuations have the same effect?

    Also, could I still define a restricted version of the correspondence principle if I look at a restricted span of time? That is, I can increase the value of N to make the fluctuations proportionately less important and still have classical predictable behavior for a certain period until the sensitive dependence on initial conditions manifests itself.

    George

  • Lawrence B. Crowell

    Why? Chaotic dynamics concerns systems where minor changes in the initial conditions will grow exponentially, almost without bound. This is the touted butterfly effect, where the flapping of a butterfly’s wing in Papua New Guinea causes a hurricane here (Katrina?). So suppose you have two systems with initial conditions separated by some small amount (δq, δp) in phase space. This small deviation can be due to a number of things, such as the truncation error introduced by a numerical computation. These systems are deterministic, but integrating them explicitly would require a computer with infinite floating-point precision.

    For linear systems these errors will grow in a linear or maybe polynomial manner, so it will take a very long time for them to manifest themselves. However, chaotic systems, such as Hamiltonian chaotic systems, will exhibit a radical divergence of these errors or differences in initial conditions:

    $latex
    \Delta q ~=~ e^{\Lambda t}\,\Delta q’, \qquad \Delta p ~=~ e^{\Lambda t}\,\Delta p’
    $

    so these errors or differences in initial conditions grow exponentially according to the Lyapunov exponent Λ.

    Now introduce quantum mechanics. The Fubini–Study metric, which gives the fibration over a projective Hilbert space, is

    $latex
    s ~=~ \sqrt{\langle O^2 \rangle ~-~ \langle O \rangle^2}\,\Delta x
    $

    where x is the conjugate variable to the observable O. The astute might see that this is the Heisenberg uncertainty principle, and s defines an invariant number of units of action, n·hbar. A quantum dynamical system in a zero-temperature situation will have all its possible quantum paths diverge (as in a Feynman path integral), but for every interval δt in which a photon carries away entanglement phase from the system there is a stochastic error δE. This system is a large-N-hbar example of the quantum Zeno effect, but with the added issue of chaos. These tiny errors then sum together in a way similar to a drunkard’s walk: each tiny error is exponentially amplified, and the amplified errors in turn sum together.

    In this way the errors which are introduced by these quantum fluctuations result in a large scale or classical trajectory that differs from the expectation from the quantum path integral of the system. These little decoherent jiggles of Hyperion result in a stochastic change in its rotational position that can’t be predicted.

    Lawrence B. Crowell
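A toy numerical version of this amplified drunkard’s walk (my sketch, with made-up numbers, just to show the mechanism): deliver a tiny random kick at each step, let the chaotic dynamics amplify each kick exponentially thereafter, and compare the accumulated drift to the size of any single kick.

```python
import math
import random

# Toy model (made-up numbers): tiny random decoherence "kicks", each
# exponentially amplified by the chaotic dynamics after it is delivered.
random.seed(1)
lyapunov = 0.1   # amplification rate per step (arbitrary)
kick = 1e-12     # size of one decoherence kick (arbitrary)
steps = 300

drift = 0.0
for n in range(steps):
    # a kick delivered at step n has been amplified for (steps - n) steps
    drift += random.gauss(0.0, kick) * math.exp(lyapunov * (steps - n))

print(abs(drift) / kick)   # the accumulated drift dwarfs any single kick
```

The drift is dominated by the earliest kicks, which have been amplified the longest; that is why the final trajectory is a random walk whose scale has nothing to do with the microscopic kick size.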

  • Robert Blandford

    Isn’t the multiverse another resolution of this paradox?

  • Jerome

    On the arXiv: in the ’96 paper Zurek claims

    Motivated by Hyperion, we review salient features of “quantum chaos” and show that decoherence is the essential ingredient of the classical limit, as it enables one to solve the apparent paradox caused by the breakdown of the correspondence principle for classically chaotic systems.

    In a 2005 paper Wiebe claims

    We conclude that decoherence is not essential to explain the classical behavior of macroscopic bodies.

    In a 2008 paper Schlosshauer claims

    We show that the controversy is resolved once the very different assumptions underlying these claims are recognized. In doing so, we emphasize the distinct notions of the problem of classicality in the ensemble interpretation of quantum mechanics and in decoherence-based approaches that are aimed at addressing the measurement problem.

    can someone sort this out for me? Why are they disagreeing? what are the different underlying assumptions?

  • Thomas Larsson

    When I look at the moon, I detect a bunch of photons interacting with my retina. This seems compatible with the moon existing a second ago, but who knows what has happened since then. Maybe the moon has been eaten by a Boltzmann brain and reappeared on a different brane in the meantime.

  • Xenophage

    More poetically, if you lived there, you wouldn’t be able to predict when the Sun would next rise.

    Hyperion would then be perfectly described by economics (the ultimate parameterized interpolative curve fit, all fits each proving the others wrong) plus heteroskedasticity (because analytic economics is fundamentally crap for prediction). Hyperion looks like a job for string theory! More studies are needed.

  • Alex

    I was a little rushed and unclear in my question, and while your detailed responses are helpful and appreciated, I was just wondering why Hyperion specifically is so unpredictable – why its quantum fluctuations evolve into macroscopic ones while most bodies follow the correspondence principle.

  • Pingback: Friday 24 October 2008 « blueollie

  • http://whenindoubtdo.blogspot.com/ Eugene

    That’s a really cool post.

  • Count Iblis

    If we don’t detect all these photons that have bounced off Hyperion, then we shouldn’t say that Hyperion is in a definite orientation when we don’t look.

    If it really is the case that the orientation of Hyperion is fixed before we observe it, then the observer should also be part of that entangled superposition.

    So, the question should really be if for two terms of this entangled quantum state of Hyperion and the rest of the universe corresponding to two different orientations of Hyperion, the state of the brains of people on Earth in these terms are orthogonal.

    One could argue that even this is not enough, because unless we can tell the two brain states apart, our consciousness is the same in the different brain states. If it were different, we could notice the difference, and then we could “feel” what state Hyperion is in without observing it.

    But, we can only be aware of a limited amount of information, so the orientation of Hyperion would be swamped by all other external influences we are exposed to.

  • James

    Count Iblis,

    Not really sure what your point is here, but a couple of comments on your comments:

    “…then the observer should also be part of that entangled superposition”

    This is part of the standard QM orthodoxy, and, I think, an experimentally inescapable conclusion.

    Also, you say that:

    “…the state of the brains of people on Earth in these terms are orthogonal.”

    Not necessarily so – human (or presumably also ET) observers can select a basis for their working model however they like – within mathematical practicality, that is – but beyond that it’s up to them. North isn’t orthogonal to North-East; it just depends on what you’re measuring (which is part of the whole problem).

  • Lawrence B. Crowell

    Each photon which interacts with Hyperion removes some quantum overlap of superposed states, or reduces the off-diagonal elements of the density matrix closer to zero. The overlap or entanglement phase that is lost is “smeared out” in a coarse-grained sense. Clearly no observer has the ability to make an accounting of all these events and where these entanglement phases are carried off to. This smearing out has a thermal interpretation, due to the fact that the Sun is a heat source.

    Each one of these decoherent events amounts to a little kick on the moon, which is a sort of quantum zeno process of reducing the quantum state of the moon. Each of these little kicks amounts to a stochastic change in the position and momentum (for Hyperion we have action-angle variables) of the body. Each of these little kicks is then exponentially amplified by the chaotic dynamics of the classical system, and the Lyapunov exponent is a measure for this process. So each of these decoherent kicks are amplified and they sum together. This then results in a drunkard’s walk drift in the dynamics of the body, which is ultimately due to underlying quantum stochasticity.

    This process is what underlies the so-called collapse of the wave function in a measurement. What is important is not that there be a conscious being who observes the outcome, but rather that there is a thermal scrambling of the entanglement phases of the system with those of the environment or measuring apparatus.

    Lawrence B. Crowell

  • George Musser

    Michael Berry, in his paper on this subject in 2001, argued that a single photon would be enough to induce decoherence, since Hyperion’s angular-momentum levels are so closely spaced.

    Incidentally, Berry made much the same point as I did in #14 about the timescale of classical chaos, but goes on to say that decoherence renders this point moot.

    The claim sometimes made, that chaos amplifies quantum indeterminacy, is misleading. The situation is more subtle: chaos magnifies any uncertainty, but in the quantum case h has a smoothing effect, which would suppress chaos if this suppression were not itself suppressed by externally-induced decoherence, that restores classicality (including chaos if the classical orbits are unstable).

    George

  • Count Iblis

    James, yes, you can choose whatever basis is convenient for you in practice. But we are now considering the philosophical question: “Does Hyperion have a definite orientation before we observe it?”

    So, we should consider the slightly less reduced density matrix in which you also keep your own mental state. I cannot observe myself in a superposition of two different mental states, I’m always one part of such a superposition. So, I’m going to assume that there exists a preferred physical basis for the mental states.

    Then, instead of tracing out the mental states, we should consider the reduced density matrices:

    < m|rho|m >

    where rho is the density matrix in which everything except Hyperion and the observer’s degrees of freedom have been traced out, and m denotes a particular preferred mental basis state. The question is then if this reduced density matrix describes a pure state for general m.

    Only if it is a pure state can we say that before we observe it, Hyperion already was in a definite state. So, paradoxically, a pure state in this case corresponds to a collapsed state and a mixed state corresponds to the system being in a superposition. :)

    As I argued above you should expect that the state will be a mixed state which then means that Hyperion doesn’t have a definite orientation given our mental state before we measure it.

  • Tevin

    My non-scientific mind understood a scant 50% of what you said in your article yet I still found it wildly fascinating.

    Cheers to that.

  • Count Iblis

    Typo:

    instead of tracing out the mental states we project out particular mental basis states, so the reduced density matrix is:

    < m|rho|m>

    Then I was wrong to say that one should look for arbitrary m in the preferred basis. Instead one should try to find out whether this can describe a pure state if the brain, and thus the mental states m, are perturbed by photons scattering off Hyperion and then hitting the observer on Earth. That doesn’t look plausible to me, as any such effects will be swamped by other local effects.

  • James

    Count Iblis,

    You say:

    “…I cannot observe myself in a superposition of two different mental states, I’m always one part of such a superposition. So, I’m going to assume that there exists a preferred physical basis for the mental states.”

    Have you never felt “in two minds” about something? Research seems to have shown that the brain is massively parallel, just from a classical point of view, and there are those who would argue that it exploits QM parallelism in its working (personally I’m not convinced, but I can’t rule it out).

    The density matrix – or the reduced one, for that matter – which I remember fondly (not) from work in quantum optics, is a statistical mechanical tool, and sheds no light on fundamental questions – such as what Hyperion is doing when nobody’s looking.

    -James

  • tyler

    Topics like this are why I stick with this blog. If there’s a more fascinating topic in modern science I have yet to encounter it.

  • http://www.freakangels.com Paul

    You roll one die, you can’t predict what number you’re going to get.
    You roll 100000000000000000000000000000000000000000000000 dice, you can predict with accuracy refined to a similar number of decimal places that 1/6th of them will be 1, a 1/6th 2 etc etc. The more you roll, the greater the accuracy of your prediction.

    I’m a total and utter layman when it comes to the fine details of quantum mechanics, but surely the “decoherence” everyone is discussing is just the increasing predictability of increasing numbers of “dice rolls” (or particle interactions) writ large across the universe?
    On our macro scale, we’re used to the ‘dice’ being rigidly defined quantities of discrete numbers between one and six that we can point to and identify. When we observe a single die that is in the process of rolling, we can’t call it 1, 2, 3, 4, 5 or 6; we must define it as something that has the potential of coming up 1, 2, 3, 4, 5 or 6 but is no single one of those things until it stops rolling, and all of those things together all the time.

    When you describe dice like that, they sound just as fantastical as an object behaving in a quantum manner, but they’re still dice. Obviously this is horrifically simplified, but if you remove the image of a comfortably familiar object and remember that we’re dealing with particles, I think the point still stands?

    Apologies if I’m explaining something incredibly rudimentary that everybody is already talking beyond as pre-assumed ^^;

  • Lawrence B. Crowell

    Given that the action for the rotation of this body is probably around 10^50 hbar, the uncertainty in the angular position is ~ 10^{-50}. So a single photon which impacts the body across its primary axis d with momentum p = hbar*k can change the angular momentum by L ~ hbar*k*d, with a change in the angular position ~ 1/(kd) = lambda/d ~ 10^{-6}/10^5 (ball-parking the size of the moon here), which is more than enough to change the angular position of the moon by far more than the HUP uncertainty.

    As for density matrices of brain states and the like, one of the whole points of this is to show that there is no need to invoke any mental state of an observer.

    Lawrence B. Crowell
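For what it’s worth, the orders of magnitude in the estimate above check out. A quick sanity check (my ballpark numbers, dropping factors of 2π):

```python
# Rough order-of-magnitude check of the photon-kick estimate above
# (ballpark numbers only, factors of 2*pi dropped):
wavelength = 1e-6      # optical photon wavelength, metres
d = 1e5                # ballpark size of Hyperion, metres (~100 km)
quantum_angle = 1e-50  # quoted quantum angular uncertainty, radians

photon_kick_angle = wavelength / d          # ~ 1/(k*d)
print(photon_kick_angle)                    # ~1e-11 rad
print(photon_kick_angle / quantum_angle)    # a single photon swamps the HUP scale
```

So even one scattered photon nudges the orientation by dozens of orders of magnitude more than the intrinsic quantum uncertainty, which is why decoherence is so effective here.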

  • sonic

    I’m not so sure the world doesn’t appear quantum mechanical.
    I’m going to watch the baseball game.
    Is the outcome:

    1) Predetermined from events that happened billions of years ago? (Newtonian determinism)
    2) The result of certain propensities of nature mixed with chance and the free choices of the conscious participants? (orthodox von Neumann QM)

    I think the universe I live in is very quantum mechanical.

  • Lawrence B. Crowell

    Paul on Oct 24th, 2008 at 4:19 pm wrote: surely the “decoherence” everyone is discussing is just the increasing predictability of increasing numbers of “dice rolls” (or particle interactions) writ large across the universe?

    No it is not that. Suppose you have a two-state system. There are two possible states it can be in, |0) and |1). The duals to these states are written as (0| and (1|, which satisfy the conditions

    (0|0) = (1|1) = 1

    (0|1) = (1|0) = 0,

    or these states in the state space are perpendicular or as we more often say orthogonal. A general state vector for this system is usually written as

    |Y) = c_0|0) + c_1|1)

    the dual of this vector is then

    (Y| = c*_0(0| + c*_1(1|

    where the star * represents complex conjugation: i = sqrt(-1) changes sign to -i wherever it appears. The terms c_i are complex-valued probability amplitudes. What physicists often do is look at the density matrix, written as rho = |Y)(Y|, which has components

    rho_{00} = c*_0c_0 = |c_0|^2

    rho_{11} = c*_1c_1

    rho_{01} = c*_0c_1

    rho_{10} = c*_1c_0

    where rho_{01} and rho_{10} are complex conjugations of each other.

    The rho_{01} and rho_{10} off-diagonal terms are complex valued and represent phases for the superposition of these states. The diagonal terms just give the probabilities for the two states. If this system is coupled to some complex environment or a shower of photons, these phases can become coupled to these external factors and removed. The density matrix is then reduced to the diagonal terms. The system has decoherently entered into a “collapsed state.”

    Lawrence B. Crowell
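The same two-state bookkeeping takes only a few lines of code (my sketch, simply restating the algebra above with an arbitrary choice of phase):

```python
import cmath
import math

# The superposition |Y) = c_0|0) + c_1|1) and its density matrix rho = |Y)(Y|.
c0 = complex(1 / math.sqrt(2))
c1 = cmath.exp(1j * math.pi / 4) / math.sqrt(2)   # some relative phase

rho = [[c0 * c0.conjugate(), c0 * c1.conjugate()],
       [c1 * c0.conjugate(), c1 * c1.conjugate()]]

# Diagonal terms: the probabilities. Off-diagonal terms: the phases.
print(rho[0][0].real, rho[1][1].real)   # both 1/2
print(abs(rho[0][1]))                   # nonzero -- the states are coherent

# Decoherence couples the phases to the environment and removes them,
# leaving only the diagonal: a classical mixture, the "collapsed state."
rho_decohered = [[rho[0][0], 0j], [0j, rho[1][1]]]
print(abs(rho_decohered[0][1]))         # zero
```

Note that the diagonal (the probabilities you would measure) is untouched; only the phase information that could produce interference is gone.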

  • Count Iblis

    Well, let me reformulate my point without invoking density matrices. If you observe a system it will be in one of the eigenstates of the observable you are measuring. Now suppose that the system has already decohered before you make the measurement due to interactions with the environment. We perform a measurement and find that the system is in some state.

    The question is whether the system was in that state before you made that measurement. I claim that this is not the case. We can be sure that after the measurement, the system has collapsed into some definite eigenstate. So, it is then in a pure state. If before measurement the system were in the same state, it would thus already have to be in that same pure state.

    The fact that in the density matrix formalism you find a mixed state is then explained by the fact that had you included your mental state, the density matrix would be diagonal with only 1 nonzero diagonal element. It would then be the tracing out over the possible mental states which yields the mixed state.

    But this would imply that we could in principle have psychic powers to “feel” what the orientation of Hyperion is without observing it. That’s surely not possible, therefore one has to assume that the density matrix that includes your mental states would not describe a pure state (i.e. it would not have a single nonzero diagonal element, or a single Dirac delta in the continuous case).

    Or formulated without referring to density matrices: In the full entangled state of system and rest of the universe, the terms corresponding to different orientations would not necessarily contain mental states that are orthogonal. I.e. if you group the terms for each mental state, it would contain states referring to different orientations.

  • roland

    What does decoherence mean in the context of the many worlds interpretation?
    That it’s likely that the position of a macroscopic object like the moon is rather definitive in a given branch?

  • http://tyrannogenius.blogspot.com Neil B.

    Lawrence, you forget that figures giving “probability” for a given state or outcome are based on collapses and specific events already happening, then fed into the decoherence pretended “explanation” of what Wikipedia calls “appearance” of collapse – it’s a circular argument, fallacious in the familiar way. None of you going on about that here or elsewhere really explain why the waves spreading around, interacting or not with each other, don’t just stay patterns of waves as they do in both classical physics and in the pure mathematics of wave evolution (as in the “deterministic evolution of the wave under the Schrodinger equation” etc.) Waves only get connected to “probability” because something weird about the universe gets them to express like that, it still doesn’t make any sense at the fundamental level – you are just putting out double talk that reminds me of the sophistry put out by Wittgenstein’s supporters. As for the density matrix: that’s just a way of talking about waves that we don’t know the particulars of, isn’t it? Like for example, a photon gun that may produce linear x or linear y, we don’t know which. It isn’t really a “state” that anything has per se, in any coherent (I mean, general meaning as a pun) sense.

    BTW, how many of you saw the very interesting and poignant Nova show about Hugh Everett and his musician son Mark? The idea of constantly splitting parallel worlds is cool as a weird idea but I don’t buy it. One thing to consider: in the Schrodinger’s cat situation, there is an unstable nucleus that may or may not have decayed after a given time. That means that the cat is superposed alive/dead, etc. But that means that not only is the wave representing, say, an emitted beta (electron) spread out over all angles and not just a narrow ray (presumably – given ordinary angular uncertainty) but it can’t even be a nice crisp shell. IOW, emitted particles have to keep “leaking out” over time, and that makes everything even more difficult to sort out. REM that by contrast, usually we see the example put as: the shell expands to reach a spherical screen and must collapse (but still at a given moment!) somewhere on the shell, with impact time being a bit uncertain but not a major issue. Well?

  • Pingback: Gravity is an Important Force | Cosmic Variance

  • Otis

    My question is: Why do we have classical mechanics at all in our universe?

    In the beginning, at the Planck scale, there were no classical localized states. So when and why did decoherence produce the universe that we can live in?

    I have heard it said that our approximately classical world is not fundamental. Instead, it is the result of special cosmic initial conditions, the result of a special quantum state, a relic of how the Big Bang came about.

    So what does physics tell us about the mechanism for Big Bang emergence of localized states such that we can predict when the sun will come up tomorrow?

    This is a fascinating topic. Thanks to Sean for the post and to others for the discussions.

  • http://tyrannogenius.blogspot.com Neil B.

    Another issue I have with decoherence: Consider the classic case of the photon split by a beamsplitter. The waves travel at right angles towards distant detectors, separated from the BS and each other by empty space, and perhaps many kilometers away. We can have the photon coherence length much less than distance to detectors. The photon must absorb in one or the other detector, not both. Consider the split wave function as it reaches the detectors. One or the other detector will ping, and then the other one is barred from also pinging. There is no way for any interaction or interference of any actual waves, to reach from the pinged detector to the other one for collapsing the wave that “was there” as we imagine it while it was just propagating. Nothing actually crosses the spatial separation, and the forbidding of the double ping happens immediately despite the distance. (Sure, there’s some “connection” in entanglement but that isn’t the actual influence of one wave on another, in the environmental “decoherence” sense.) There’s no way to make that work out rationally. In any case, I get suspicious when apologists start talking of how something makes X “appear” to happen, that is a danger sign of BS (not a beam splitter!) at work.

  • Count Iblis

    Neil, you can easily work out exactly what happens using the usual formalism of quantum mechanics. What you get is a state like:

    |ping, no ping> + |no ping, ping>

    and then this quickly decoheres.

  • Pingback: Ars Mathematica » Blog Archive

  • Brody Facoum

    A handful of questions:

    9: Robert: Why do we have to introduce our brains? If we treat the standard model as real, what happens at the surface of an arbitrary lens surrounded by the practical vacuum of interplanetary space?

    Whatever happens at that surface can be used to create a partial map of Hyperion; these maps can be constructed (mechanically or logically) at any time the lens is “allowed to look at [Hyperion]”, and the maps are in principle predictable from one moment to the next, even if we hide and unhide Hyperion from this arbitrary lens (and/or any/all other(s)). “Some eigenstate of the orientation operator” really means an infall of particles with which we can construct a *statistically normal* map.

    However, surely a *normal* map differs from a representation of a subset of the *real* state of Hyperion at any point in the recent past not least because of the differences in accelerations of the various parts of Hyperion? Moreover, if our lens is not a massless pointlike event, it will also have its own differences in accelerations at the surface that will influence its interactions with infalling particles from the direction of Hyperion.

    15: Lawrence B Crowell: does your “can’t be predicted” mean (a) “cannot be decided at all”, or (b) “is infeasible to compute”? Do we introduce uncertainty intervals to cope with (a) or with (b)?

    That is, can we confidently do more — or less — than produce a phase space for Hyperion recursively considering the phase space of each of the quantized elements of Hyperion, and admit that the phase space for Hyperion itself may be incomplete because we cannot practically isolate it from external events?

    IOW, is the analysis of the state of Hyperion properly in the domain of statmech, which is how I read Thomas Larsson’s question at 18.? I think you sorta say so in 25 and 33.

    It just seems to me that trying to treat an object on the scale of Hyperion as a full set of quantum events is unnecessarily hard. (It also seems to get harder the more I think about how one would actually go about doing that; how does one account for particles it radiates that might interact only gravitationally for long long long periods of time?)

    Finally, from Sean’s initial posting: ‘So when you speak about “the quantum state of Hyperion,” you really mean the state we would get by averaging over all the possible states of the photons we didn’t keep track of’ — but the photons scattered off or radiated by Hyperion only reflect (pardon the word) the state of Hyperion’s surface, which is a useful boundary that describes its overall orientation, but which *many* microstates can equally describe, particularly when you start considering things below Hyperion’s surface. Right?

    Finally, does knowing that there are lots of interactions happening between Hyperion and events across the universe — environmental decoherence — *really* improve our ability to predict Hyperion’s orientation in, say, 2029?

  • http://mirror2image.wordpress.com Serge

    Suppose there are no photons and other particles hitting Hyperion. Wouldn’t gravity itself be enough to exert decoherence? The same gravity which causes its instability…

  • http://www.mpe.mpg.de/~erwin/ Peter Erwin

    Alex @ 20:
    I was just wondering why Hyperion specifically is so unpredictable, why its quantum fluctuations evolve into macroscopic ones while most bodies follow the correspondence principle.

    This is because Hyperion is classically chaotic and other moons are not. It doesn’t matter whether the fluctuations or uncertainties in question are quantum-mechanical in scale/origin or not (e.g., does small asteroid X bounce off Hyperion or not, thus giving it a kick); the tumbling of Hyperion is chaotic because of the specific gravitational situation it is in (the overlapping influences of Saturn and Titan, the locations of resonances, Hyperion’s particular shape and moment of inertia, etc.).

    Quantum mechanics doesn’t explain[*] why Hyperion itself is chaotic and other bodies are not; classical mechanics (and the particular conditions of the system) does.

    The argument of Zurek & Paz is that Hyperion does indeed follow the correspondence principle, so that despite the underlying QM nature of reality, Hyperion still tumbles in a classical (and in its case chaotic) fashion. The issue is why — that is, how & why does the correspondence principle still apply in situations that are classically chaotic?

    [*] By which I mean “it’s not necessary (or it doesn’t help) to use QM to explain”

  • http://www.mpe.mpg.de/~erwin/ Peter Erwin

    Brody Facoum @ 44:
    Finally, does knowing that there are lots of interactions happening between Hyperion and events across the universe — environmental decoherence — *really* improve our ability to predict Hyperion’s orientation in, say, 2029?

    No. What environmental decoherence does is ensure that Hyperion’s orientation obeys classical dynamics — that is, that the correspondence principle holds for Hyperion as well as for Titan, Saturn, and other macroscopic objects. Since classical dynamics tells us that Hyperion’s orientation evolves in a chaotic fashion, we are unable to predict its orientation more than a few days in advance.

  • JimV

    Thanks for the best post of the year, possibly the millennium (IMHO). I think it answers a question I had for the previous QM-talk post, although some of the commenters seem to disagree on that. I will never understand QM (unless that means I do!), but it gives me some comfort that smarter people find it is non-supernatural, albeit weird.

  • http://commonsensequantum.blogspot.com/ Arjen Dijksman

    One of the annoying/fascinating things about quantum mechanics is the fact the world doesn’t seem to be quantum-mechanical. When you look at something, it seems to have a location, not a superposition of all possible locations;

    When I receive a photon from an object, it informs me about one of the possible locations of the object: the location of the point of emission of the photon. The location of the object as a whole is not determined precisely through one single measurement, because I don’t have any information about the other possible points of photon emission of the extended object. The problem with quantum measurements is that I receive this information bit by bit and not classically as a whole. If the world doesn’t seem to be quantum-mechanical, it is a matter of perception.

  • Anne

    I am a bit puzzled by the business of needing environmental photons to produce decoherence.

    As I understand it, decoherence occurs when the wave functions of two possible states of a system become sufficiently different (that is, sufficiently nearly orthogonal, right?) that no interference effects between the two states are observable. If that’s the case, then shouldn’t two wildly different histories leading to wildly different orientations produce very nearly orthogonal states? If Hyperion did not decohere, what interference effects would we see?

    Or, let’s take the double-slit experiment, slightly modified: an electron passes through one slit or the other (or, really, both) and passes on to hit the screen. But near one slit let’s put a charged object whose momentum we can check after the experiment. If the charged object is really massive, then the electron will not move it appreciably no matter which path it takes, and the interference pattern will remain (though distorted by the electron’s deflection). If the charged object is light enough, we can measure its momentum after the fact and determine which slit the electron passed through, and the interference pattern must disappear. Thus we have a parameter (mass of the “sensor”) we can adjust from “no observation” to “observation”.

    As I understand decoherence, what happens is that when the “sensor” is light enough to actually tell which path the electron took, it unavoidably entangles its phase with the electron’s wavefunction, producing a phase shift in the “passed through slit A” possibility. This phase shift is random, presumably because of the uncertainty principle as applied to the sensor, and the interference pattern on the screen disappears. (Actually you always have a phase shift, but if the object is massive the random part of the phase shift is so small that the interference pattern is not affected.) Put another way, once you include a light detector, the two alternatives “passes through slit A” and “passes through slit B” have such different wave functions that there is no detectable interference effect. Does that sound about right?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Anne, I don’t think it’s quite right to say “decoherence occurs when the wave functions of two possible states of a system become sufficiently different.” A quantum system that is in a pure state, unentangled with anything else, is described by a single wave function. When an electron goes through the two slits, there is a single wave function that goes through both slits, not two different wave functions for the two different alternatives. To get decoherence, you necessarily need some other system to become involved. If there is a sensor at the slits, the wave function of the electron becomes entangled with the wave function of the sensor. Instead of the wave function of the electron being “went through slit 1” + “went through slit 2,” we have a wave function of the entangled electron+sensor system, of the form “went through slit 1, sensed slit 1” + “went through slit 2, sensed slit 2.”

    If we then throw away our knowledge of the wave function of the sensor, there ceases to be any such thing as “the wave function of the electron.” Instead, there is a statistical (not quantum-mechanical) set of two different wave functions, which can no longer interfere. That’s decoherence — the electron is no longer described by a wave function all its own.
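    That tracing-out step can be made concrete with a small numpy sketch (the two-dimensional electron and sensor bases are, of course, an idealization):

```python
import numpy as np

# Basis vectors: electron "went through slit 1/2", sensor "sensed slit 1/2".
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def electron_density_matrix(psi):
    """psi[i, j] = amplitude for (electron state i, sensor state j).
    Tracing out the sensor gives rho_electron = psi psi^dagger."""
    return psi @ psi.conj().T

# Entangled case: "slit 1, sensed 1" + "slit 2, sensed 2".
psi_entangled = (np.outer(e1, e1) + np.outer(e2, e2)) / np.sqrt(2)
rho_entangled = electron_density_matrix(psi_entangled)
# -> diagonal: the off-diagonal (interference) terms are gone.

# Unentangled case: a sensor too heavy to react stays in one state.
psi_separable = np.outer((e1 + e2) / np.sqrt(2), e1)
rho_separable = electron_density_matrix(psi_separable)
# -> off-diagonal terms survive, so interference is still visible.
```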

  • Count Iblis

    In case of the sensor measuring the “which-path information”, the amplitude of the interference term is given by the inner product of the two states of the sensor corresponding to the sensor deflecting the electron and the sensor not deflecting the electron.

    In case of a perfect detector these two states should be orthogonal (because they should be eigenstates of some observable corresponding to different eigenvalues that tell you through which slit the electron went) and then the interference pattern will vanish.

    The reason why you can’t use a sensor which is heavy actually has a lot to do with decoherence. When the sensor deflects the electron, it absorbs part of the momentum of the electron. If the changed state is to be orthogonal to the original state (or at least have very small overlap), then it must be very sharply peaked in momentum space.

    But by the uncertainty principle that means that in ordinary space it has to have a large spread (in this real space picture you can see that scattering off the sensor would erase phase information if the wavefunction is wide enough).

    Now, decoherence leads wavefunctions of objects to collapse in the position basis. The coherence length becomes of the order of the thermal de Broglie wavelength which is of the order:

    lambda = hbar/sqrt[m k T]

    where m is the mass and T the temperature.

    This means that in momentum space the coherence length is of the order

    sqrt[m k T]

    So, for large enough m, the coherence length in momentum space would be much larger than the absorbed electron momentum and it becomes impossible to detect the recoils.
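    Plugging numbers into this order-of-magnitude formula shows how decisively mass enters; a sketch, with 300 K and the specific masses chosen purely for illustration:

```python
import math

hbar = 1.054571817e-34  # J s
k_B = 1.380649e-23      # J / K

def thermal_coherence_length(m, T):
    """Order-of-magnitude coherence length hbar / sqrt(m k T)."""
    return hbar / math.sqrt(m * k_B * T)

lam_electron = thermal_coherence_length(9.109e-31, 300.0)  # a few nanometers
lam_kilogram = thermal_coherence_length(1.0, 300.0)        # fantastically tiny
# For a macroscopic mass the coherence length is absurdly small, so the
# corresponding momentum-space scale sqrt(m k T) dwarfs any single recoil.
```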

  • Anne

    Sean, I think I see what you’re getting at, but let me put my question another way. Let’s take the two-slit experiment with “sensor” (including the sensor in our wavefunction) and pick the position of a null in the interference pattern. Now, as we adjust the mass of the sensor, the probability changes from zero to the classical value. It seems to me that in some sense when you start getting the classical value you have “decoherence”: your system is not behaving quantum-mechanically any more, even though it has not interacted with the outside world at all. Or rather, it is behaving as if “slit A or slit B” were a probabilistic choice rather than a quantum-mechanical goes-both-ways option. Certainly the wave function represents both possibilities, but the superposition has no detectable effects. Does that make sense?

    What I’m getting at is that the weirdness of QM’s “both possibilities happen” is not that they both happen (though I suppose that bothers philosophers) but rather that the fact that they both happen has measurable consequences: in the double-slit experiment, path integrals along the two paths give probability amplitudes that you add, rather than probabilities that you add. If the phases between the two paths are correlated, you see an interference pattern, direct physical evidence of this weird philosophical idea. And so what I’m trying to understand about decoherence is, in a closed system not interacting with the rest of the universe (until I observe it at the end), when do those observable effects of the superposition of states disappear? I don’t mind that Schrodinger’s cat is in a superposition of alive and dead states, but will there be measurable interference effects between the two? Will I be able to measure some quantity which will *show* that the cat was in both states?

  • Count Iblis

    Anne, see this article:

    http://arxiv.org/abs/quant-ph/0210001

  • A Student

    Double Slit Experiment

    Start with Slit A and Slit B

    We assume the particle emitter is centered between the two slits at some distance.

    We assume that there is a finite number of particles N during the observation time.

    N(A) = number of particles passing through slit A
    N(B) = number of particles passing through slit B
    N(AnB) = number of particles passing through both slit A and B
    N(AuB) = number of particles passing through either A or B

    We can write the following equation:

    N = N(A) + N(B) – N(AnB) + (N – N(AuB))

    divide through by N and we find:

    1 = P(A) + P(B) – P(AnB) + (1 – P(AuB))

    we can then write:

    P(AuB) = P(A) + P(B) – P(AnB)

    This equation is equivalent to:

    sqrt[P(AuB)]^2 = sqrt[P(A)]^2 + sqrt[P(B)]^2 – sqrt[P(AnB)]^2

    This describes the diagonal of a rectangular parallelepiped, where the lengths along the respective axes are:

    x= sqrt[P(A) - P(AnB)]

    y= sqrt[P(B) - P(AnB)]

    z= sqrt[P(AnB)]

    thus:

    P(AuB) = x^2 + y^2 + z^2

    We can also define conditional probabilities as angles where:

    P(AnB) / P(A) = P(B|A) = [sin (theta1)]^2

    P(AnB) / P(B) = P(A|B) = [sin (theta2)]^2

    Since the emitter is centered, then when:

    theta1 = theta2 = pi/2 = 90 degrees

    P(A) = P(B) = P(AnB)

    and

    x= sqrt[0] = 0

    y= sqrt[0] = 0

    and

    P(AuB) = 0 + 0 + z^2

    P(AuB) = P(AnB)

    This means that P(A) and P(B) are perfectly correlated, which occurs when we have no knowledge of which slit the particles travel through.

    When we set up a detector at slit B to count how many particles actually go through (and we’ll assume that our detector is perfect), then:

    P(AuB) = P(A) + P(B)

    because our perfect knowledge of particles through B means:

    P(AnB) = 0

    which occurs when:

    theta1 = theta2 = 0 degrees

    This makes P(A) and P(B) completely independent!

    This means that when we count the particles going through B (or A if we wanted), the number of particles going through slit A and slit B are independent of each other, and we see 2 lines instead of a diffraction pattern!
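    The algebraic core of the construction above (that inclusion-exclusion can be rewritten as a sum of three squares) is easy to verify numerically. This sketch checks only the arithmetic identity, not any physical interpretation:

```python
import math
import random

def identity_holds(pA, pB, pAnB, tol=1e-12):
    """Check P(AuB) = x^2 + y^2 + z^2 with x, y, z as defined in the comment."""
    pAuB = pA + pB - pAnB                 # inclusion-exclusion
    x = math.sqrt(pA - pAnB)
    y = math.sqrt(pB - pAnB)
    z = math.sqrt(pAnB)
    return abs(pAuB - (x**2 + y**2 + z**2)) < tol

random.seed(1)
all_ok = True
for _ in range(1000):
    pA, pB = random.random(), random.random()
    pAnB = random.random() * min(pA, pB)  # joint prob. cannot exceed a marginal
    all_ok = all_ok and identity_holds(pA, pB, pAnB)

# The perfectly correlated case of the comment: P(A) = P(B) = P(AnB).
corr_case = identity_holds(0.5, 0.5, 0.5)
```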

  • A Student

    p.s.

    This also implies that it is our knowledge of past events (or rather the environment’s retention of data) which leads to decoherence and the observed classical independence of objects

  • kris

    Lawrence @ 25,
    Your explanation is unclear to me. Aren’t you conflating two different things together – the effects of the photon on the wavefunction (actually density matrix) of the object and its “classical dynamics”? The Ehrenfest equations of motion that govern the time evolution of the expectation values of position and momentum will have classically chaotic solutions; however, the scattering of the photons affects the quantum density matrix of the system. It seems to me that there is no inconsistency in classically chaotic trajectories being unaffected by quantum fluctuations (since said fluctuations average out into the expectation values).
    Also why should the chaotic behavior in the classical limit be preserved upon adding quantum corrections, except in the sense that it is retained in the evolution of momentum and position expectation values?

  • A Student

    Please cite this comment.

    Under the mathematical construction I provide in comment #55, it becomes apparent that one can in fact cycle the detector at B off and on.

    The Zeeman effect is a splitting of spectral lines in the presence of a static magnetic field.

    Our situation in the double slit experiment is analogous (and no this has nothing to do with metaphysics as suggested in the Gravity is an Important Force post, comment 17 by ObsessiveMathsFreak)

    We are in fact generating a measurable field effect when we introduce the detector. In the Zeeman effect, we see a splitting of spectral lines, in the Double Slit experiment we see a “collapse”.

    The field that is generated is linked to the amount of information we possess. We should be able to cycle the detector on and off, and observe the effects of the cycling. By changing the proportion of on and off time, we are changing the conditional probabilities (angles theta1 and theta2 as discussed in comment #55)

    Presumably the propagation of the change in the observed diffraction pattern should occur at the speed of light.

  • Lawrence B. Crowell

    Neil B. on Oct 24th, 2008 at 7:00 pm
    Lawrence, you forget that figures giving “probability” for a given state or outcome are based on collapses and specific events already happening, then fed into the decoherence pretended “explanation” of what Wikipedia calls “appearance” of collapse – it’s a circular argument, fallacious in the familiar way.

    BTW, how many of you saw the very interesting and poignant Nova show about Hugh Everett and his musician son Mark?
    —————-
    The modulus squares of the probability amplitudes determine the probabilities. With the reduction of the off-diagonal terms c*_0 c_1, which are complex valued, the density matrix is reduced to a diagonal matrix of classical-like probabilities. So the quantum dynamics involves a linear summation of probability amplitudes A_i, while the classical-like outcomes are due to a linear summation of probabilities, which are the P_i = |A_i|^2. There are some subtle issues going on here; in particular this is a case of what might be called a transversal problem.

    In classical or macroscopic physics we are able to duplicate things, but with QM one can’t do this. One can’t in a unitary manner duplicate a quantum state. Yet if the world is ultimately quantum mechanical, then how is it that we can duplicate macroscopic information? Well, we can’t perfectly. In the TCP/IP protocol one has to perform parity-bit corrections using Hamming distances, and with photocopiers we have all seen how copies of copies degrade with each iteration. It appears that classical cloning is an approximation, and the noise which results might betray an underlying quantum substrate.

    The MWI received some promotion by NOVA. I am honestly somewhat agnostic about this idea. There are problems with getting MWI consistent with the Born rule.
    —————
    Otis on Oct 24th, 2008 at 7:46 pm wrote
    My question is: Why do we have classical mechanics at all in our universe?

    —————
    That really is a big question. In general, how the macroscopic world, which includes finite temperature physics or statistical mechanics, fits into quantum physics is an open question. Bohr of course proposed the Copenhagen interpretation, which is really just a “first order” approximation that works well enough for most problems.
    ————–

    Brody Facoum wrote: does your “can’t be predicted” mean (a)”cannot be decided at all”, or (b) “is infeasible to compute”? Do we introduce uncertainty intervals to cope with (a) or with (b)?

    ————–
    I tend to think more according to (b), though if it turns out that the computing resources required to circumvent this infeasibility are greater than what is available in the universe (an infinitely large computer), then this problem segues into (a) as well.

    Ultimately quantum physics is perfectly deterministic. What can be more deterministic than a linear wave equation? However, any large and local region has a myriad of states, and making an accounting of them is not possible. So while the universe might from a fine-grained perspective be completely quantum mechanical and deterministic, from a coarse-grained perspective things appear indeterministic and information about states is smeared out or “buried.” In the end the classical world might just be an illusion of sorts, where temperature and time are also illusions of inexact accounting of states. After all, a Euclideanized time is t = hbar/kT, for T = temperature, and these both appear to reflect our inexactitude in measuring the world and incapacity in accounting for all possible states.
    ————–
    kris on Oct 25th, 2008 at 5:50 pm wrote
    Lawrence @ 25,
    Your explanation is unclear to me. Aren’t you conflating two different things together-the effects of the photon on the wavefunction (actually density matrix) of the object and its “classical dynamics”?
    ————–
    I guess you are referring to the estimate on the angle of orientation by using momentum across a moment arm or principal axis of the moon. This is meant only as an order of magnitude sort of argument.

    Lawrence B. Crowell

  • TomV

    I have a question regarding Sean’s comment number 4.

    I see that if one prepared a classical chaotic system and a corresponding quantum chaotic system into a corresponding state, and then observed the evolution of both states in time, they would rapidly diverge from each other, because the quantum system would not be in exactly the same initial condition as the classical one.

    Now suppose we prepare an ensemble of such corresponding pairs, prepared around different points in the phase space, and observed the evolution of all of these pairs. Would we find that the quantum system and the classical system had the same attractor? Would the value of the classical Lyapunov exponent tell us how long it would take for the pair to diverge?

    My understanding is that the answer to both of those questions is yes.
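    The divergence timescale can be illustrated with the simplest chaotic system to hand, the logistic map; this is a stand-in only (Hyperion’s tumbling is far richer), but the estimated Lyapunov exponent plays the same role of setting the e-folding time for nearby trajectories to separate:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=5000):
    """Estimate the Lyapunov exponent of x -> r x (1 - x) as the
    orbit-averaged log of the local stretching factor |f'(x)|."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()  # analytically ln 2 ~ 0.693 at r = 4

# Two trajectories a "quantum-sized" 1e-15 apart become macroscopically
# different after of order log(1e15) / lam ~ 50 iterations.
x, y, max_sep = 0.2, 0.2 + 1e-15, 0.0
for _ in range(80):
    x, y = 4.0 * x * (1.0 - x), 4.0 * y * (1.0 - y)
    max_sep = max(max_sep, abs(x - y))
```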

  • John Yoo Wu

    I always wondered why when asked that Koan: “does a tree in the woods without an observer make a sound?” why don’t people respond: “does a guy dying an agonizing death in the woods with no observers make a sound?” no it’s a tree.

    where do these philosophers get off. thinking inanimate matter computing.

    The higher your philosopher’s D&D level is the more astounding feats of alternate reality you can. cut off your smell to spite your mind. that’s for level 30 philosophers with the liberal arts feat.

    today is being weird day.

  • Pingback: Timeblog.de » Kosmische Fingerübungen mit Hyperion

  • anonymous

    Sean: “And those photons have their own quantum states, and when they bounce off Hyperion the states become entangled.”

    I was under the impression that entanglement was something that happens to the pairs of particles that are created as a result of the collision and subsequent destruction of other particles. If so, then Hyperion’s entanglement with the universe at large would be a function of how many entangled halves of the particle pairs are retained as a part of its mass while the other half goes on its merry way. Are we assuming that half of the collision-generated (and therefore entangled) particles stay behind? If so, what justifies this assumption?

    Am I missing something?

  • A Student

    After a lashing about diction I just want to make this perfectly clear:

    Zeeman effect – turn contraption on, lines split

    Double slit experiment – turn contraption on, lines converge

    The first case is a magnetic field effect, the second is a unified field effect.

    I don’t need to be lectured about complex amplitudes and summing and squaring and quantum probabilities, I’m quite comfortable with those thank you. If people can’t see what’s before their eyes, I can’t help them.

    Thank you.

  • George Musser

    To contribute to the discussion, I asked our web editor whether we could make an article on quantum chaos available for free, and he agreed. Written by Martin Gutzwiller, it appeared in our January 1992 issue, but I think it remains relevant. See http://www.sciam.com/article.cfm?id=quantum-chaos-subatomic-worlds

    George

  • Pingback: Breakfast on Hyperion « Nannygoat Hill

  • Pingback: Chaos and quantum theory « Later On

  • Terry Bollinger

    A Spacetime View of Quantum Physics
    Terry Bollinger, Oct 29, 2008

    The issue of decoherence resulting from non-isolation was addressed a number of times by Richard Feynman in his writings, although he did not use that particular term. A good example is this quote from his Lectures on Physics (source: http://en.wikipedia.org/wiki/Symmetry#Quantum_objects):

    “… if there is a physical situation in which it is impossible to tell which way it happened, it always interferes; it never fails.”

    The word “interferes” in this context is a quick way of saying that such objects fall under the rules of quantum mechanics, in which they behave more like waves that interfere than like everyday large objects.

    In other words, determining whether an event is quantum can be done simply by asking this question: Does there exist anywhere in the universe information about how the event occurred? If not, the event will follow the rules of quantum mechanics. If such information does exist, that part of the event that is addressed by information will necessarily be classical.

    This has an interesting consequence. It means that quantum events are necessarily “ahistorical,” by which I mean that no single history can be assigned to them. Such quantum events become, as is vividly portrayed by Feynman path integrals, a confluence of all possible histories. Viewed from our information-rich classical perspective, this infinity of slight variations of possible particle paths and histories looks and behaves very much like a wave, provided only that we don’t “poke” it too hard or in a way that precipitates a historical outcome.

    It is the ahistorical properties of such regions that make them baffling when they are viewed from a classical perspective. John Bell, through his brilliantly insightful thought experiment, showed that from a classical perspective, certain classes of events necessarily require the concept of “instantaneous” action at a distance.

    Intriguingly, saying that such events require “instantaneous” action seriously understates the problem. What really occurs when a Feynman integral is poked hard is that one particular instance of the infinity of possible paths in the Feynman integral suddenly decides to become real. That sounds innocuous enough until you begin to ponder this point: A Feynman path extends not just across space, but across time.

    That is, forcing a quantum region of spacetime to become “historical” – that is, to have a classically defined outcome that can be observed and recorded – requires not just an “instantaneous” updating of events that could be distant in space, but also of events that could be distant in time. More specifically, you must in effect update the way the event originated in the past, even if that past is very distant. An example would be measuring the polarization of an intergalactic photon whose path integral extends several billion years back to its launch from an ancient quasar. Such a measurement results in the instantiation of a specific Feynman path, which is equivalent to saying that the photon “always” had a particular polarization since the time it was first emitted billions of years ago.

    A reflex reaction to such scenarios is that they violate causality and so cannot possibly be correct interpretations of quantum mechanics. The very nature of quantum mechanical regions prevents any conflict, however, since a quantum region of spacetime can exist only if there is no historical information about what took place within it. A violation of causality cannot occur because no information has ever emerged from the region to become the cause of something! The very definition of what makes a region quantum simultaneously safeguards such regions from generating causality violations.

    From such issues I would argue that there is a very deep, even tautological relationship between three issues: classical physics, history, and information. If no information about an event exists, it has no history, and so behaves under the rules of quantum mechanics in which classical causality is replaced by the wavelike sum of all possible histories. If information about what happened exists anywhere in the external universe (an observer is emphatically not required, only a bit of data here or there), the event becomes both historical and classical, subject to the standard laws of causality. Notably, Schrödinger’s Cat is actually a very poor example of quantum behavior: The cat simply lives or dies in a fully classical fashion, since the box in which it is hidden from view is nonetheless exuding enormous quantities of information about what is going on inside. There is heat from the cat, sound from breathing and motion, air molecules that are continually bumping and jostling the cat, and even gravity and particle effects. Only if the experiment can be updated to prevent all these other forms of information leakage does it become a viable thought experiment for understanding the nature of quantum physics.

    So, can classical objects become quantum, as described in the article? Of course! But there is one detail that makes the creation of quantum regions for classical objects extraordinarily difficult: You have to keep such objects totally, completely, and absolutely incommunicado with the rest of the universe for a significant length of time.

    It is this difficulty of creating information-isolated regions of spacetime for long periods that is at the root of why quantum mechanics generally applies only to very small objects. It is easy to isolate an electron in a way that prevents it from exchanging information with the rest of the universe most of the time, while it is very nearly impossible to do the same thing with a spaceship flying through interstellar space. A single photon emitted from the ship and registered classically somewhere else in the universe is sufficient to establish a location and pull the ship out of the world of quantum physics. Clever physics and clever engineering can sometimes create ahistorical regions of space large enough for us to observe, such as superfluids, Bose condensates, and for that matter the light-reflecting conduction bands of metals, but for the most part information binds and bonds us together into a vibrantly complex world of history, information, and classical physics.
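    The point that even a few leaked photons suffice can be sketched numerically. In the toy model below (my own illustration, not from the comment; the overlap factor is an arbitrary assumption), each scattered photon records partial which-path information and multiplies the off-diagonal interference term of the system’s density matrix by a fixed overlap, so coherence dies exponentially with the number of events:

```python
import numpy as np

# Toy decoherence model (assumed numbers, for illustration only): a system
# in an equal superposition of two locations is monitored by its
# environment. Each recorded photon multiplies the off-diagonal
# ("interference") term of the density matrix by an overlap factor gamma.
gamma = 0.9                                   # overlap per event (assumption)
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)   # pure superposition state

for n in (0, 1, 10, 100):
    coherence = abs(rho[0, 1]) * gamma**n
    print(f"after {n:3d} recorded photons: |rho_01| = {coherence:.2e}")
```

    With gamma = 0.9, a hundred recorded photons already suppress the interference term by several orders of magnitude; a macroscopic body scatters vastly more than that every instant, which is the spaceship point above.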

  • Brody Facoum

    Lawrence B Crowell and Peter Erwin, thank you.

    Arjen Dijksman: what can you really say about a single detected photon other than its measured quantum state? Surely you can’t be certain that the source of the photon was the event you’re interested in, you can only be mostly certain, if it has a particular frequency for example. You can improve this in laboratories, but what about in astronomy or observational cosmology? A single photon might arrive from the “right” direction with the “right” momentum, but how confident can you be that it’s not an environmental photon which evolves into the state you’re looking for? More importantly when thinking about large classical objects like Hyperion and Earth, what about single photons which are emitted by the event you’re interested in, but which exit the state you’re looking for because of accelerations, scattering or absorption? Did the event “not happen”?

    To bring it back to your comment and this discussion topic,
    can you really say that your single detected photon tells you anything certain about an object in uncontrolled real space, and that the problem with saying anything about such an object is only that you don’t know what other information it is presently sending into the environment that you are not in a position to detect?

    I agree with you that you receive information bit by bit, photon by photon, however. :-)

    Finally, a general question: is there anything like a line separating observations in which the quantum states of individual particles are important from those in which they are only important probabilistically? On the one hand we care about quantum states at some level when looking at the Lyman-Alpha forest or when doing galaxy surveys or studying the CMB or studying spectral lines generally; do we really care much when studying the orbit or rotation of a planet or moon, even one around a distant star?

    Flipping this around, I think I am asking: is the discussion of decoherence a way of explaining that QM is still useful in the case of terrestrial observations of Hyperion? (I think I’m reading Peter Erwin’s comment 46&47 as a sort of “yes” to this question, on the grounds that the correspondence principle drives the selection of a QM theory that predicts classical-scale reality including classical limits on the predictability of dynamic systems). That is, is this whole discussion exploring the behaviour of QM rather than that of Hyperion? (I’ll accept a “duh! stupid question!” answer gladly :-) )

  • http://commonsensequantum.blogspot.com/ Arjen Dijksman

    Brody:

    what can you really say about a single detected photon other than its measured quantum state?

    Nothing, I fear; we can only infer something. We aren’t even able to measure its quantum state. We are only able to measure some characteristics of the impact of the photon on the detecting electron which is at the origin of the detection cascade ending at our perceiving cells. The quantum state could well be a mix between |x> and |y>, but my detection process can filter only one of the two “orthogonal states”. In that sense, I don’t see any difference between measurements on macroscopic and nanoscopic quantum objects apart from the fact that we receive far more information about a macroscopic object. The huge quantity of information gathered about a single object allows us to iron out the indeterminacy about direction, momentum, previous scattering, diffraction, absorption, and non-detection events of the photons.

  • Lawrence B. Crowell

    The wave function or state is never directly measured. We only measure an observable O which acts on a state O|n) = O_n|n). For all we know the state vector is just a mathematical construction we impose and nothing more.

    Measurement is a bit of a strange topic, for a state vector |Y) = sum_n c_n|n) evolves by nice continuous unitary operators. Equivalently, one can consider the observable operators as evolving by unitary operators in the Heisenberg picture. Yet a measurement is one where the classical “needle” of the detector pops into one of a number of discrete outcomes for a measured observable. So we never measure a system in a superposed state — Schrödinger’s cat is either alive or dead upon looking in the box. Decoherence, which removes the entanglement phase of a system, reduces the density matrix to a diagonal of probabilities. It does not tell us which of these outcomes is actually measured, though.
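    That last step can be made concrete with a small sketch (a toy example of my own, not from the comment): decohering a two-state superposition zeroes the off-diagonal terms of the density matrix, leaving classical probabilities, and the registered outcome still has to be drawn separately:

```python
import numpy as np

# Decoherence reduces |Y)(Y| to a diagonal of probabilities, but it does
# not select which outcome is measured -- that draw happens "by hand".
c = np.array([np.sqrt(0.3), np.sqrt(0.7)])   # amplitudes c_n (assumed values)
rho_pure = np.outer(c, c.conj())             # off-diagonals allow interference
rho_decohered = np.diag(np.diag(rho_pure))   # decoherence kills them

print(np.real(np.diag(rho_decohered)))       # the classical probabilities
outcome = np.random.choice([0, 1], p=np.real(np.diag(rho_decohered)))
```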

    We might of course ask why there is a classical world at all. For a system with a large action, given by a huge number of Planck units of action, the path integral describes a set of paths which are very close to each other. The issue which arises is how the quantum world “knows” to cluster around a certain path which we consider classical. Zurek is investigating this matter via einselection processes. This is a topic which would take considerable writing to describe. Maybe it is a topic which one of the CV blog meisters will take up.
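    A stationary-phase toy calculation (my own sketch, not Crowell’s or Zurek’s) illustrates the clustering: label paths by their deviation x from the classical path, expand the action as S = S_cl + a·x², and compare how much of the full path sum comes from paths near x = 0 as the action grows large in Planck units (small effective hbar):

```python
import numpy as np

# Each path contributes exp(i*S(x)/hbar) with S(x) = S_cl + a*x**2.
# Far from the classical path the phase varies rapidly and contributions
# cancel; the full sum is a Fresnel integral, |integral| = sqrt(pi*hbar/a).
def captured_fraction(hbar, window=1.0, a=1.0):
    """Fraction of the full path sum recovered using only paths
    within `window` of the classical path."""
    x = np.linspace(-window, window, 8001)
    near = np.trapz(np.exp(1j * a * x**2 / hbar), x)
    full = np.sqrt(np.pi * hbar / a)    # analytic value of the full sum
    return abs(near) / full

print(captured_fraction(hbar=0.01))   # ~1: the sum clusters near the classical path
print(captured_fraction(hbar=100.0))  # small: no classical clustering
```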

    I will say that I find the decoherence and einselection approaches to these problems preferable to alternatives such as the many worlds interpretation.

    Lawrence B. Crowell

  • http://commonsensequantum.blogspot.com/ Arjen Dijksman

    Lawrence: “For all we know the state vector is just a mathematical construction we impose and nothing more”.
    Isn’t the state vector imposed on us, rather than something we impose as a mathematical construction? It fits so well as a representation of the physical system that I’m inclined to visualize it as the “hard reality” concerning the system. Line-shaped macroscopic objects, which we could also represent by mathematical vectors, follow the same rules:
    - differential evolution d|vector>/dt = i . omega . |vector> (the vector difference is always perpendicular to the vector itself),
    - indeterminacy of measurement result (measuring the location of the object may give any location on the length of the object),
    - steering by a pilot-wave (in a cloud of line-shaped objects, a line-shaped object spins in phase with the other),
    - square-law for probability of detection (the cross section between detecting and detector line-shaped objects evolves with the phase of the pilot-wave).
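    The first rule above can be checked numerically (my own toy sketch, not Arjen’s code): stepping d|vector>/dt = i·omega·|vector> forward, each increment is perpendicular to the vector itself, so the norm is conserved up to integration error and the vector simply rotates in phase:

```python
import numpy as np

omega, dt = 2.0, 1e-4                     # assumed values, for illustration
v = np.array([1.0 + 0j, 0.0 + 0j])
for _ in range(10000):
    dv = 1j * omega * v * dt              # d|v>/dt = i*omega*|v>
    # Re<v|dv> = 0: the change is orthogonal to the vector itself
    assert abs(np.real(np.vdot(v, dv))) < 1e-12
    v = v + dv

print(abs(np.vdot(v, v)))   # ~1: norm preserved (drifts only at O(dt))
```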

  • Lawrence B. Crowell

    The wave function is not real — it is complex! In effect the state vector is really a sort of construction on our part. The only things which are real are the eigenvalues of Hermitian operators. Those are the only things which we actually detect, or which make a detector go “ping!” As for probabilities, it is of course the case that those are computed from the amplitudes of the wave function. Yet we really don’t measure those directly, but only infer them with an ensemble of measurements.

    Lawrence B. Crowell
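    A quick numerical check of the claim above (a toy matrix of my own choosing): a Hermitian operator has real eigenvalues even though its entries, like the wave function’s amplitudes, are generally complex:

```python
import numpy as np

H = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -3.0]])   # Hermitian: equals its conjugate transpose
assert np.allclose(H, H.conj().T)

eigenvalues = np.linalg.eigvalsh(H)  # the only things a detector "pings" on
print(eigenvalues)                   # real: [-4.  2.]
```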

  • http://commonsensequantum.blogspot.com/ Arjen Dijksman

    Lawrence, I agree we don’t measure directly the amplitudes of the wave-function but only infer them from a set of measurements. Concerning the terminology real vs. complex, the distinction is mathematical. Complex values of the wavefunction refer to ‘real’ physics: the change of phase of a rotating vector that represents the particle. Of course measurement values take real values because a QM result is one single number, but this must not hide the fact that there is hard reality behind those measurements: particles that are described by a rotating vector (and not by a point).

  • Lawrence B. Crowell

    The wave function in quantum mechanics only represents the information potentially available about a system. I suppose I don’t think of the wave function as real in the sense of “reification.” It really is just a field-like effect of sorts we use to make sense of things we measure.

    Lawrence B. Crowell

  • Count Iblis

    Lawrence, it is still your brain that detects the ping and it must therefore be the case that there exists a quantum state of your brain that corresponds to having heard the ping. Now, you never find yourself in a superposition between two possible experimental outcomes like hearing a ping or not hearing a ping. We then have to conclude that while your brain can be in the state

    |ping heard) + |no ping heard)

    You always find yourself in either |no ping heard) or in |ping heard).

    So, there exist basis states in Hilbert space for your brain that correspond to well defined mental experiences.
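    The point can be simulated crudely (my own toy sketch, with assumed equal amplitudes): however the state is superposed, each individual trial lands in exactly one basis state, with frequencies given by the squared amplitudes:

```python
import numpy as np

a = b = 1 / np.sqrt(2)   # amplitudes of |ping heard) and |no ping heard)
rng = np.random.default_rng(0)
outcomes = rng.choice(["ping heard", "no ping heard"],
                      size=10000, p=[abs(a)**2, abs(b)**2])

# No trial ever yields "a bit of both":
assert set(outcomes) <= {"ping heard", "no ping heard"}
print(np.mean(outcomes == "ping heard"))   # ~0.5
```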

  • http://tyrannogenius.blogspot.com Neil B. ?

    How come the “detector” D is what collapses a wave function (or whatever it is) and not other things that particles come in contact with? (For LBC and anyone interested.) For example, consider a photon reaching a beam splitter. We normally imagine the wave is “split” by the BS instead of the photon deciding at the BS to go one way or the other – because we can get interference e.g. in a M-Z interferometer (like all A-channel hits.) But if the photon localizes at a D, why wouldn’t it do so at the half-silvered interface? It can’t be because the BS has no chance of absorbing the photon. Splitters are imperfect and we can even design one for, say, 20-40-40 performance (absorption and split.) So some photons are absorbed inside the silver, but others continue on as split waves – yet if they hit a fully absorbing detector surface later, similar to the BS silver, they “hit” at some exclusive spot – why? And like I said, the detectors might be many km apart; no one has ever dealt with the implications regarding “interaction” theories.
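    The interference point can be made concrete with standard textbook beam-splitter matrices (my own sketch; the symmetric 50/50 convention is an assumption): with no relative phase between the arms of a Mach-Zehnder interferometer, the split waves recombine so that every photon exits one port (the “all A-channel hits”):

```python
import numpy as np

B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)   # symmetric 50/50 beamsplitter

def output_probs(phase):
    """Detection probabilities at the two output ports of a
    Mach-Zehnder interferometer with a relative phase in one arm."""
    arm = np.diag([1.0, np.exp(1j * phase)])
    out = B @ arm @ B @ np.array([1.0, 0.0])   # photon enters one input port
    return np.abs(out)**2

print(np.round(output_probs(0.0), 12))     # [0. 1.]: all hits at one detector
print(np.round(output_probs(np.pi), 12))   # [1. 0.]: fringes flip ports
```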

    I am suspicious of solution schemes like decoherence for such reasons, as well as the problems I brought up before such as when the emission time itself (like radioactive decay) is uncertain during a long period, if we arrange distant separation and isolation of mutually exclusive detection events, why do the waves ever even knot up so to speak at all if they stay waves, etc (none of them adequately IMHO addressed in comments here or anywhere else I have looked.) Just consider for example the issue of coherent superpositions versus mixtures: mathematically, wave amplitudes have a given unique value (are not “vague”, at least in any math we have) and just add up directly. There is no actual distinction between super’s and mixtures, it’s just a way of talking about combining unknown WFs or the time-averaged sort of outcome we get, etc. I mean, you can’t write the distinction as a wave description because that requires simply adding amplitudes. So, we can’t really “have” a mixture of x and y linear polarized versus a CS; the “mixture” is just a shorthand for our not knowing how the two are combined, what their phase is, etc. We just don’t understand, period, why and how systems of waves (which ought to just interact in a common space, and stay waves forever, just like classical waves on water) end up suffering “collapse” episodes during “measurement” – and neither terms in quotes can even be defined in terms of the maths of waves.

    Furthermore, I repeat: you can’t just blithely refer to “probabilities” connected to the waves, that is just something each of us observes in our unique outcome that we actually experience. That is the very thing needing explanation in the first place, it is a special interruption of pure wave evolution that nature (for real) and we (in the math) just “put in by hand”. It can’t be used as part of a circular argument to explain itself later in (IMHO) phony decoherence mumbo jumbo. Really, if the wave interactions on their own were enough to explain what happened, why didn’t earlier quantum physicists realize that? Why didn’t/don’t runs of wave evolution, even with all the extra influences, directly exhibit the collapse to specific points using the math applying to “wave evolution” itself? Instead, they need special convoluted machinations, contrived arguments with unexplained and undiscovered other “worlds” for things to happen in, and suspect phrases like “collapse appear to happen” (sophistry alert!)

    If you think the WF is just a way we calculate etc. then what do you think is the nature of what really goes through space from place to place, as “particles/waves” are generated and absorbed? Well?

    PS: Lawrence, did you work on the Wikipedia “Decoherence” article? I’m curious.

  • Lawrence B. Crowell

    Count Iblis on Oct 31st, 2008 at 9:47 am wrote:
    Lawrence, it is still your brain that detects the ping and it must therefore be the case that there exists a quantum state of your brain that corresponds to having heard the ping.
    ——————–

    As far as is known the brain is a purely macroscopic or classical system. The temperature of the brain is too high for quantum coherence, and action potentials and the activities of molecules, such as ATP —> ADP kinase activity, are purely thermodynamic. Until demonstrated otherwise I don’t think there are quantum events in brains which correspond to quantum measurements.

    Lawrence B. Crowell

  • Brody Facoum

    Wow. To those of you writing civilly about the meat and gristle you work with professionally (and putting up with the editing and thinking mistakes that go with typing into this sort of little box here), thanks!

    I like Lawrence B. Crowell’s take on things in this thread; in particular at comment 75 I think he is saying that we use formal tools to say as much as we can, as confidently as we reasonably can.

    I have a few more stupid questions.

    Count Iblis re 76 – I have trouble with the idea of being *certain* that a brain is/was in one state or the other. If we look retrospectively at |no ping heard> can we really be sure that the experimenter wasn’t momentarily distracted from the machine that goes ping? Or her tinnitus? If she writes down the observation or non observation of ping in a log book, is that log book really reflective of reality at that time? Isn’t writing down “|ping heard>” really saying “I am reasonably sure I heard a ping moments ago”?

    I think adding in ears, brains and hands just introduces sources of error and uncertainty, and am tempted to think they sometimes get added into conversations about interpretation in part because their intrinsic physical complexity is distracting. Am I missing a good and obvious reason why they should be involved in recording the detector state? Are brains still part of collapse/decoherence/einselection when reading a more reliable, mechanical recording of the detector output at some point hours, days or decades after the experiment?

    Part of Neil B @ 77 seems to be asking about something similar, and somehow in my head this fits with the “we might instead ask why there is a classical world at all” in Otis @ 40 and LBC @ 59. Can one account in the same way for both an experiment which involves small numbers of particles in a small spatial volume, with careful attention to gravitational potential energies and the like, AND observations or experiments involving large spatial distances and media that may range from the lithosphere to inter-galactic space? I am thinking less of quantum cosmology than of something like a distant (>> kpc) X-Ray source that reliably produces a small fractional crab that (thanks to a detector and some audio gear) produces a lot of “pings” within range of our hearing when the X-Ray source is above our detector. At a given time we expect a “ping” and write down |ping heard> if we hear it and |ping not heard> if we don’t. The recorded result obviously represents something real that happened inside the body of the person writing it down, but can we say anything stronger than that it *probably* represents information about a distant particle event and the geodesic the event’s photon (may have) followed?

  • Lawrence B. Crowell

    Neil B., no I didn’t write the Wikipedia entry on this topic. I have done a little editing here and there on Wiki-p, but nothing extensive.

    There were some ideas about “quantum brains” 10 years ago or more. Penrose sort of got this idea going, and I think the idea is probably flawed. We might suppose that if the brain were quantum mechanical then, since our eyes are really extensions of the brain, we would be looking at the world through an optical interferometer. The effective aperture of the eyes would be equal to the distance between them. So in effect we would see the world through a telescope of sorts. We could see the rings of Saturn with our bare eyes. Of course that is an extreme case, but it makes a point. The brain is a warm system which is too messy for coherent wave functions to be running around.

    Lawrence B. Crowell

  • TP

    The link to the article, “Is the Moon There when Nobody Looks?”, requires us to login. Is there any way I can access it?

  • http://countiblis.blogspot.com Count Iblis

    Lawrence, I agree that the brain is typically not in a coherent state. But quantum mechanics applies perfectly well to non-coherent states. Since the world is described by quantum mechanics and not classical mechanics, we should not use classical physics when formulating fundamental concepts.

    Of course, the brain will in practice behave in a classical way. But since I am whatever my brain is computing and that computation must go into an entangled superposition with the environment like the “ping” or “no ping” case mentioned above, we must address the issue of the superposition.

    This simply leads to preferred basis states of the form:

    |my exact mental experience)

    These basis states form an incomplete orthogonal set of basis vectors for the brain. You would expect that there are many different brain states corresponding to exactly the same mental experience.

    Brody: I agree that mental experiences do not always give a perfect description of reality. All I’m saying is that since mental experiences exist and since the world is quantum mechanical, the mental experiences correspond to vectors in Hilbert space, even if you see something in a dream. :)

    To make my point in a different way: The fact that the system you are observing has already decohered before you make your observation does not mean quantum mechanics does not apply to it.

    You are part of the environment too, so you will decohere too, i.e. you will also go into the entangled superposition. Now, as long as you don’t know the experimental result, your mental state has NOT decohered with the rest of the universe, even though your brain will have decohered.

    If your mental state would decohere before you are aware of the measurement, then you could not exclude having psychic powers: When standing in front of the apparatus and blindfolded you could perhaps correctly guess the result.

    If we dismiss such psychic powers, then we must assume that in the complicated entangled superposition of apparatus, your brain, and the rest of the universe, your mental state factors out before you take a look.

  • A Student

    If I understand Coleman-Mandula correctly, then an information field would be viewed as a trivial connection between “space-time and internal symmetries”.

    This is what Lisi was thinking about I believe. However, what he failed to realize is that it is trivial because there are likely to be infinite numbers of ways one could make those connections, and it would be highly dependent on the coordinate system you chose to use (it would be relative).

    When we reach a certain threshold energy density it is likely that things that are trivially connected in our vacuum state are non-trivially connected, i.e. supersymmetric.

  • Brody Facoum

    Iblis @ 82:

    Now, as long as you don’t know the experimental result, your mental state has NOT decohered with the rest of the universe, even though your brain will have decohered.

    Woah, what? How is one’s mental state physically separable from one’s brain?

    Also, what about those brains who simply aren’t likely to encounter the result — perhaps they aren’t even aware (in the conventional human-scale sense) that the experiment has even taken place? They don’t know the experimental results because they have not read about them, they weren’t there, etc. Okay, they may become “aware” in a physical (as opposed to common terminology) sense if they are in the forward light cone of the experiment, but is that awareness meaningful in a physical sense other than with respect to bounds on the earliest possible time at which that awareness could take place? (I think quantization implies that not all observers in the future light cone of a finite-scale event will receive photons or equivalents from that event, right?) If the information carrier goes from being fast moving particles (a detector with a blinking light across a room) to slow moving particles (a detector that sends a “ping” wave through a room of air) to slow moving macroscopic objects (journals, or even textbooks), is it useful to talk about the superpositions of the brains of the people who may or may not eventually receive that information?

    Since the world is described by quantum mechanics and not classical mechanics

    All I’m saying is that since mental experiences exist and since the world is quantum mechanical

    These are very strong statements.

    the mental experiences correspond to vectors in Hilbert space, even if you see something in a dream

    The fact that the system you are observing has already decohered before you make your observation does not mean quantum mechanics does not apply [to it].

    These are much less strong statements, and I would have no trouble with them if they were stated more like: “QM can be used to describe and analyse these systems formally in a useful way.”

    If in the two strong statements above you meant: “QM can be used to describe and analyse these systems formally in a way that is more useful than classical mechanics”, I would feel more comfortable than with a blanket claim that classical mechanics should be considered invalid and fully obsolete. I still don’t think I’d agree, though — isn’t statistical mechanics useful here too?

    Do you really, literally, mean that QM is fully and/or uniquely in correspondence with nature? Or do you mean that QM is such an accurate lens with which to study natural processes that it should be used even when it seems more awkward than other formal models?

    One of the problems here again revolves around the nature of the human brain and sensory organs when introduced into this sort of discussion. I’m with Lawrence B Crowell on this one, and deleted a paragraph rant while writing my previous comment that was about the visual phototransduction system. That system starts with an unpredictable and information-lossy molecular conformational change (of a retinaldehyde molecule absorbing a photon of the right frequency) which then follows one of two processes. Which process is followed has a probability that highly depends on the recent activity of the cell, which is highly correlated with photon flux (which in turn is highly correlated with ambient lighting). Both processes are limited by the thermodynamic migration of molecules through the cytosol (intracellular solution), both processes are unreliable, and each process may take an arbitrary time to change the charge on the cellular membrane. And all that’s before we even hit the very first neuron in a chain of lossy neurons leading to the primary visual cortex.

    Quantum chemistry can be a useful — and sometimes necessary — tool in studying systems like this in a reductionist way, sure. That’s what parts of molbio and biochem and chemical biology are about in practice, and there is the field of quantum biology too. But is exclusively resorting to QM a good way of learning about a complex natural process like this? Lots of people studying small scale life sciences think quantum effects are trivial in practice, even where quantum effects may be more interesting (as in vision, magnetoreception, photosynthesis and so forth).

    However, I would be more inclined to argue that with respect to things as complex as human vision we can only feasibly compute probabilities for small systems with current tools, and must resort to probabilistic or classical mechanics approximations in practice, with a cutoff at or just below molecular scale. These tools produce useful results, even if they don’t do anything close to quantum mechanical accounting.

    It’s true that uncertainty intervals also tend to increase as we look at larger pieces of the system — in particular, the brain is so noisy (and dense and large) that we have nowhere near the ability to say definite — or even useful — things about the brain at scales much smaller than mm^3 * ms (for scalar data like volumetric flow, emissions, Doppler data, density and so on; at that resolution, brain 4D imaging is hugely expensive). It’s also true that reductionist approaches to studying individual brain cells and the molecules within them lead to useful results, and not treating quantum effects as trivial may offer some analytical or explanatory power that is otherwise inaccessible to biologists. (Quantum biologists studying work/energy transferases argue that, and I think so did Penrose & Hameroff in their arguments about quantum effects in nanofilamentary structures in neural dendrites and in the cytoskeleton, at least.) That said, it’s hard to see QM win on a cost/benefit comparison with non-quantum models, independently of whether QM is “real” or “more real” or even just “more accurate” than the non-quantum models.

    Abstraction and information hiding is useful. It enables computational scalability. It is often useful to identify flaws in abstractions for a variety of reasons, but abandoning abstraction and dealing with all the raw data in a maximally-unhidden way seems like much harder work. (For amusement, David Madore, a French mathematician, explores mathematical abstraction elimination in a fun way here: http://www.madore.org/~david/programs/unlambda/#lambda_elim ).

    I don’t mean to discourage you from thinking of Hyperion->brains information flow quantum mechanically. (I have been enjoying and learning from this entire discussion!)

    In fact, I want to flip this whole comment on its head and ask you outright if you are using QM against the Hyperion(or whatever source)-detector-loudspeaker(or lamp)-sensory organ-brain chain of events as a way of thinking about QM itself, rather than about the system? I’m cool with that. I think that’s what I’m doing. :-)

  • A Student

    Brody said
    “They don’t know the experimental results because they have not read about them, they weren’t there, etc. ”

    The “collapse is caused by consciousness” is a bit misleading.

    The underlying reality is that the collapse is a real effect. You build some machine that is capable of observing particles. The machine itself is interacting with the particle stream in such a way that there is a change in particle distribution when the machine cycles between on/off states.

    The output of the machine is a stream of bits — pieces of information that only mean something through the correlation of each bit with the presence of a particle. If the correlation is weak, the diffraction pattern will appear more “quantum”; if it is strong, the pattern will appear more “classical”.

    There is no need for consciousness for this to occur, you could have a robot programmed to turn the machine on and off and be fully confident that the effect would be the same, even without viewing the output.
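    The weak/strong correlation point maps onto the standard visibility-distinguishability tradeoff, V^2 + D^2 <= 1. Here is a toy numerical sketch (not from the original comment) that treats the detector's bit-to-path correlation as the distinguishability D:

```python
import numpy as np

def pattern(phase, distinguishability):
    """Toy double-slit intensity: fringe visibility V and which-path
    distinguishability D obey V**2 + D**2 <= 1 (equality for pure states)."""
    V = np.sqrt(1.0 - distinguishability**2)
    return 1.0 + V * np.cos(phase)

phase = np.linspace(0, 4 * np.pi, 1000)

weak = pattern(phase, distinguishability=0.0)    # bit uncorrelated with path: full fringes
strong = pattern(phase, distinguishability=1.0)  # perfect correlation: fringes vanish

print("fringe contrast, weak coupling: ", weak.max() - weak.min())
print("fringe contrast, strong coupling:", strong.max() - strong.min())
```

    No consciousness appears anywhere in the calculation; only the correlation strength matters, which is exactly the robot point above.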

    Human brains are quantum mechanical in nature as was observed by Ebbinghaus in 1885. Our memories follow a natural decay function which can be affected by various other actions, but fundamentally underscores the random QM nature of the brain.

    http://en.wikipedia.org/wiki/Forgetting_curve
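    For what it’s worth, the forgetting curve itself is just an exponential decay, quantum or not; a minimal sketch (the strength values S are made-up illustrative numbers, not Ebbinghaus’s data):

```python
import math

def retention(t_hours, strength):
    """Ebbinghaus-style forgetting curve R = exp(-t/S): retention decays
    exponentially with time t, more slowly for larger memory strength S."""
    return math.exp(-t_hours / strength)

# Hypothetical strengths: S is a free parameter here, not a measured value.
for t in (0, 1, 24):
    print(f"t={t:>2}h  weak S=2: {retention(t, 2):.3f}   strong S=48: {retention(t, 48):.3f}")
```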

  • http://commonsensequantum.blogspot.com/ Arjen Dijksman

    Brody @ 79: “I like Lawrence B. Crowell’s take on things in this thread; in particular at comment 75 I think he is saying that we use formal tools to say as much as we can, as confidently as we reasonably can be.”

    Yes, using formal tools to say things as confidently as we reasonably can: that’s the general spirit of scientific research! But formal tools apply just as well to classical physics as to quantum physics. So if we reject reification for quantum physics, we should also reject it for classical physics. I’m always skeptical when people avoid thinking in terms of ordinary objects in quantum physics while using them intensively in classical physics.

  • http://tyrannogenius.blogspot.com Neil B. ?

    I write some long comments so maybe I can just zero in on a couple of issues with concise questions.

    First, to reiterate: How come a conventional “detector” D is what collapses a wave function (or whatever it is) and not other things that particles come in contact with? For example, the beamsplitter in an interferometer. We know the photon wave splits there and does not collapse because it can interfere later.

    Second, Lawrence: don’t confuse “real” as in existence, applied to wave functions, with “real” number values versus imaginary ones. Our assigning complex values to the WF is just a procedure; it doesn’t mean nature can’t really hold such a thing. Remember that the complex value is used to show phase difference, which could be represented some other way. Indeed, one can use the analogous complex system to represent the relative phase of electrical currents (phasors, in that context), and that doesn’t keep currents from being “real” as in existence. The question still is: what is it that goes through space, and how can it condense into a small space even when the available detectors are miles apart, with no chance of whatever “interference” the decoherence sophistry attempts to imply?
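    The beamsplitter point can be made concrete: a beamsplitter acts as a unitary on the photon’s amplitudes rather than collapsing them, so the split wave can recombine and interfere downstream. A small sketch of a Mach-Zehnder interferometer (the 50/50 beamsplitter matrix below is one common convention):

```python
import numpy as np

# Single photon through a Mach-Zehnder interferometer, tracked as a
# 2-component complex amplitude vector over the two arms.  A 50/50
# beamsplitter is the unitary BS; PHI is a phase shifter in one arm.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def detector_probs(phi):
    PHI = np.array([[np.exp(1j * phi), 0],
                    [0, 1]])
    state = BS @ PHI @ BS @ np.array([1, 0])  # photon enters one input port
    return np.abs(state)**2                   # Born rule at the two detectors

print(detector_probs(0.0))    # all photons reach one detector
print(detector_probs(np.pi))  # all photons reach the other
```

    If the first beamsplitter collapsed the photon to one arm, both settings would give 50/50 at the detectors; the phase-dependent output is exactly the interference that shows it did not.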

  • Terry Bollinger

    Lawrence B. Crowell on Oct 31st, 2008 at 5:54 pm wrote:

    vvvvvvvvvv
    … There were some ideas about “quantum brains” 10 years ago or more. Penrose sort of got this idea going, and I think the idea is probably flawed … The brain is a warm system which is too messy for coherent wave functions to be running around.
    ^^^^^^^^^^

    I agree heartily with almost all of this. Penrose’s microtubules are very small, to be sure, but they are also quite massive in comparison to the scale of systems in which quantum effects plausibly apply at room temperature.

    The “almost all” qualifier is due to this: While matter is a very poor candidate for room temperature quantum, the same statement cannot safely be made for quasiparticles.

    (Brief background: Quasiparticles are energetic phenomena that are quantized “on top” of the ordinary matter. They are for the most part composed of energy, and so have very low masses, far less than those of electrons. This means conversely that quasiparticles can participate in quantum phenomena such as Bose-Einstein condensation at temperatures for which even a light-weight electron would behave classically.)
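    The mass scaling here can be made quantitative with the ideal-gas Bose-Einstein condensation formula, in which the transition temperature scales as 1/m at fixed number density (the density below is an assumed round number, purely for illustration):

```python
import math

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J/K
ZETA_3_2 = 2.6123753486854883  # Riemann zeta(3/2)

def bec_tc(mass_kg, density_per_m3):
    """Ideal-gas BEC transition temperature:
    Tc = (2*pi*hbar^2 / (m*kB)) * (n / zeta(3/2))**(2/3)."""
    return (2 * math.pi * HBAR**2 / (mass_kg * KB)) * (density_per_m3 / ZETA_3_2)**(2/3)

m_e = 9.1093837015e-31  # electron mass, kg
n = 1e24                # illustrative number density, m^-3 (an assumption)

tc_electron = bec_tc(m_e, n)
tc_light = bec_tc(1e-4 * m_e, n)  # hypothetical quasiparticle, 1e-4 electron mass

print(f"Tc (electron mass):      {tc_electron:.1f} K")
print(f"Tc (1e-4 electron mass): {tc_light:.0f} K")  # 10,000x higher
```

    At this density an electron-mass boson would only condense below about 30 K, while a quasiparticle four orders of magnitude lighter condenses far above room temperature, which is the comment’s point.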

    Phonons are a good example. These quasiparticles are the quanta of sound, just as photons are the quanta of light, and they are fully capable of combining into coherent states within ordinary room temperature matter. If this were not the case, the well-known Mossbauer Effect could not exist.

    (Brief background: In Mossbauer, room temperature matter supports an extraordinarily precise matching of gamma frequencies between nuclear emitters and receivers. The gamma ray emissions and detections used require exceedingly precise frequency matches, so much so that a relative velocity of just a few centimeters per second is enough to squelch reception. Such precision is impossible in a fully classical room temperature system, since atoms and their nuclei move so quickly that the Doppler effect would blur the relative gamma frequencies of the emitter and receiver far beyond what is detectable.

    How then can the Mossbauer Effect even exist? One way to look at the situation is to picture the motions of individual atoms as being controlled by a spectrum of “quanta of vibrations.” These quanta range from no motion at all to very rapid vibration.

    Like most pure energy phenomena, phonons are bosons — that is, they obey the “let’s all get together” statistics of Bose-Einstein. Thus not only are the motions of atoms controlled by these phonons, but the phonons themselves can group together to create “super phonons” that all behave in exactly the same way.

    Of particular interest in this case is the ground energy set of such phonons for which motion is zero. This condensate in effect “freezes” a certain percentage of atoms in a material, even in one that is otherwise at room temperature. These non-classically motionless atoms are the ones capable of participating in the extremely motion-sensitive emission and receipt of gamma rays.)
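    The Doppler arithmetic behind that motion sensitivity is simple; a sketch using the standard Fe-57 Mossbauer numbers (14.4 keV line, ~4.7e-9 eV natural width):

```python
C = 2.998e8          # speed of light, m/s
E_GAMMA = 14.4e3     # Fe-57 Mossbauer gamma energy, eV
LINEWIDTH = 4.7e-9   # Fe-57 natural linewidth, eV

# First-order Doppler: relative velocity v shifts photon energy by dE/E = v/c.
# The velocity that shifts the line by one natural width:
v = C * LINEWIDTH / E_GAMMA
print(f"velocity for a one-linewidth shift: {v*1000:.3f} mm/s")

# Compare: thermal speed of a free Fe-57 atom at 300 K
KB = 1.380649e-23
M_FE57 = 57 * 1.6605e-27
v_thermal = (3 * KB * 300 / M_FE57) ** 0.5
print(f"thermal speed of a free atom at 300 K: {v_thermal:.0f} m/s")
```

    A free atom’s thermal motion exceeds the one-linewidth velocity by roughly six orders of magnitude, which is why the effect requires the phonon-condensate (recoilless) fraction rather than classically moving atoms.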

    While I agree that existing models readily eliminate direct quantum behavior for objects as large as microtubules, this is not the same as proving that no quantum effects of any type are possible. A full proof must also show that there are no configurations of quasiparticles that could transfer information from a quantum state back into the classical matter component of the system.

    The first problem with creating such a proof is the existence of the Mossbauer Effect, which shows that quantum-enabled point-to-point data transfers exist at room temperature. In the case of Mossbauer, such data transfers are enabled by the ability of very lightweight quasiparticles to form Bose-Einstein condensates at room temperature.

    At first it would seem easy to eliminate the relevance of Mossbauer. It does after all rely on nuclear isotopes and gamma rays. Caution is needed, however. The problem is that there is nothing in the physics of the Mossbauer Effect that requires use of gamma rays. The gamma rays of the Mossbauer Effect instead provide a convenient way to detect such effects due to the exceptionally sharp detection lines they produce.

    The hypothesis to be disproven, then, is whether there exist Mossbauer-like non-classical transfers of data that follow the same mathematical model as Mossbauer, but which substitute phonon condensates of larger molecules for nuclei, and lower-frequency mechanical (heat) or electromagnetic vibrations in place of gamma rays to transfer data. The possibility of such effects would need to be disproved explicitly to eliminate the possibility of non-classical point-to-point data transfers in room temperature systems.

    Also, a full proof of the irrelevance of quasiparticle-mediated quantum effects in room temperature organic systems would also require a proof that quasiparticles cannot be used to construct qubits, or at least that any qubits constructed in such a fashion cannot then be linked back to the classical components of the system.

    If the possibility of molecular-level non-classical data transfers can be eliminated, I suspect that qubits would trivially fall as a direct consequence. On the other hand, if quantum enabled molecule-to-molecule data transfers can be shown experimentally to exist, disproving the relevance of room temperature qubits to organic systems becomes much more difficult. I suspect that if molecular non-classical data transfers and quasiparticle condensates exist, such components could also be configured to build qubits.

    In short: To complete the assertion that room temperature systems cannot include quantum behaviors, a rigorous analysis of the quasiparticle issue is required. Since non-classical data transfers via the Mossbauer Effect are part of accepted physics, such a proof would need explicitly to eliminate the possibility of translating the Mossbauer model to larger (molecular) units and lower frequency mechanical or electromagnetic phenomena. If such non-classical transfers of data are in fact possible, the proof would have to show that such transfers are irrelevant to the specific case of room-temperature organic systems such as the brain. Finally, if non-classical molecular data transfers are possible, a further proof would be needed that they cannot also be used to construct qubits, or alternatively that any qubits constructed from quasiparticles will be unable to transfer data back into the classical components of the system.

    Cheers,
    Terry Bollinger

  • Lawrence B. Crowell

    Clearly, overcomplete coherent laser states are a room-temperature example of many particles (photons) entering the same state with the same phase. Of course, since photons are massless this is possible. The Mossbauer effect, where the recoil response to the emission of a photon is taken up by the whole lattice, is certainly an aspect of how a low-mass particle can exhibit entangled or coherent behavior at high temperature.

    Yet with neurons there are a number of problems. First off, the idea that tubulins are quantum signal conduits is doubtful. These are the scaffolding of a cell, along which kinesin and dynein polypeptides walk up and down. These are literally nano-bots of sorts which mechanically walk! They transport various compounds through a eukaryotic cell. Cells conduct their energetics through ion pumps across the membrane. Mitochondria pump protons across their membranes, and the ion pump is the cell’s energy source. Similarly with neurons: an action potential is the offset and reset of the roughly 70 mV potential difference across a cell membrane by the opening of Na and K ion channel gates. These gates are receptors for certain chemicals such as acetylcholine, serotonin, dopamine, etc. The action potential is a sort of wave, but it is one which is constantly pumped in a sense. So the action potential propagating down an axon or dendrite is not a conservative wave, but is more like the bobbing motion of a bucket being passed along a bucket brigade.

    Of course quantum mechanics has some role in biology, such as the hydrogen bonds between purines and pyrimidines in the DNA double helix. The action of a photon on a rhodopsin molecule in a retinal cell has some quantum mechanical interpretations, and so forth. Yet there is not much evidence for any quantization at large scales. Of course this blog page was on the quantum properties of Hyperion, a large moon of Saturn, so quantum properties might percolate through quantum systems in certain ways we are not as yet aware of.

    Lawrence B. Crowell

  • http://terrybollinger.com/ Terry Bollinger

    > Of course this blog page was on the quantum properties of
    > Hyperion, a large moon of Saturn, so quantum properties might
    > percolate through quantum systems in certain ways we are not
    > as yet aware of.

    Well, yes, I must confess right here to a bad case of off-topic-drifty-thoughtalism!… :)

    > First off the idea that tubulins are quantum signal conduits
    > is doubtful.

    I would be blunter: tubulins are complex structural components that are flatly irrelevant to any serious discussion of whether quantum effects exist in organic systems. Focusing on them has held back for decades any serious analysis of whether or not quantum effects can matter in organic systems.

    Tubulins are irrelevant because they are too large and contain too much distinctive state information to participate in quantum effects. I suppose one could propose that quantum-capable quasiparticle waves exist within tubulins, but why in the world would one bother? They are the movable scaffolding of the cell, with well-defined purposes that require no other explanations, especially ones so far afield from their primary purpose. I have remained baffled for decades as to why a mathematical physicist as sharp as Roger Penrose has stayed locked in so adamantly to this very poor candidate for room temperature quantum effects.

    Let me be more specific about what I did mean:

    I am proposing that small molecules, such as ordinary water, have sufficiently small state spaces that even within a quite small volume plausible numbers of them could be assumed to participate in ground-state phonon condensates, in the same sense that nuclei do in ordinary Mossbauer. In the remainder of this entry I refer to this idea as low-energy Mossbauer, since it is not so much a proposal of new physics — the math does not change in any fundamental way, for example — as a translation of existing physics from the energetic domain of nuclei to the lower-energy domain of molecules, with identical use of phonon condensates. While I’ve never seen (nor looked for) the idea in the literature, such extrapolations of scale are sufficiently straightforward that I do think some care is needed to eliminate the possibility.

    The second component of low-energy Mossbauer is another translation of scale: Instead of having the phonon-immobilized molecules exchange gamma rays, why not ask whether they might exchange lower-energy photons such as microwave or heat? Or for that matter, higher-order (non-condensate) phonons? This again is not so much new physics as it is a rescaling of the existing Mossbauer model to a lower-energy domain.
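    One way to see that the rescaling is at least dimensionally sensible: the free-particle recoil energy E_R = E^2/(2Mc^2), which sets the scale for whether recoilless emission matters, drops enormously at lower photon energies. A quick comparison (the 0.01 eV far-infrared photon is an assumed illustrative value, not from the comment):

```python
def recoil_ev(e_photon_ev, mass_amu):
    """Free-particle recoil energy E_R = E^2 / (2*M*c^2), in eV."""
    mc2_ev = mass_amu * 931.494e6  # rest energy: amu -> eV
    return e_photon_ev**2 / (2 * mc2_ev)

# Classic Mossbauer: 14.4 keV gamma absorbed by a free Fe-57 nucleus
print(f"Fe-57 + 14.4 keV gamma: {recoil_ev(14.4e3, 57):.2e} eV")

# Hypothetical low-energy analogue: ~0.01 eV (far-IR) photon, water molecule
print(f"H2O + 0.01 eV photon:   {recoil_ev(0.01, 18):.2e} eV")
```

    The molecular-scale recoil comes out some twelve orders of magnitude smaller than the nuclear case, so the question shifts from suppressing recoil to whether any coherent condensate exists at all.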

    The measurable effect of this low-energy Mossbauer Effect would be anomalously high rates of transfer of exact rotational or vibrational molecular energy between molecules at distances (e.g., inches) that are vastly larger than could be explained using a fully classical model. The fully classical model would in contrast predict only noise and locally mediated (molecule-to-molecule) energy transfers in such situations.

    In contrast, low-energy Mossbauer predicts the existence of a fairly large class of “impossible” transfers of energy between identical molecules at large distances from each other. The transfers would only occur for the primary rotational and vibrational modes of each class of molecules involved. If the transfers occur at all, the frequencies involved would be very precise, just as in Mossbauer. Finally, the distances involved in the transfers would be far larger than the radii of local thermal (and thus fully classical) effects.

    Low-energy Mossbauer Effects would typically be masked by thermal noise, since unlike the classical Mossbauer Effect the photons exchanged would be comparable to those of the noisy environment in which they exist. This means that some non-trivial experimental care would be required to detect them. Still, I suspect a clever experimentalist could come up with a good (and probably even cheap) way to look for such effects, since in particular the frequencies involved would be both well-known and would necessarily have sharply defined peaks, like Mossbauer.

    Where I wonder about the plausibility of low-energy Mossbauer, though, is that it seems unavoidably to imply some pretty odd constraints on some very well-studied systems. Take water, for example. The existence of low-energy Mossbauer in ordinary water would unavoidably imply that a glass of drinking water contains molecules that are “stitched together” by networks of photon and possibly phonon exchanges that cannot be modeled classically.

    At the very least, the existence of such networks in water would have entropic implications, since information would constantly be exchanged in non-local ways that would be better modeled using a collection of simultaneous and intermixed Bose-Einstein condensates. The decay of such structures would necessarily take longer than is possible with a purely classical model, and so would give the water a sort of “memory effect” that should not be there. Also, since different condensates could exist at the same time, a new range of variables would be introduced in which one glass of water is no longer the “same” as another that has a different condensate configuration.

    I would think that such effects would have been noticed by experimentalists, at least peripherally. If no such effects have ever been seen, this would argue against the existence of low-energy Mossbauer Effect.

    Regarding Hyperion: Covered that in my first entry, seriously I did. I just prefer Dr Feynman’s terminology and perspective. Decoherence is fine, but I think it’s fair to say that it really is just another way of describing how information emerges from a quantum system.

    Regarding the excellent question of how a photon “decides” whether to be absorbed by one atom (an information-creating event that destroys coherence) or reflected from a huge array of atoms (coherence is maintained):

    If you have not already, be sure to pick up a copy of Feynman’s “QED: The Strange Theory of Light and Matter.” Snip out Zee’s intro (just kidding… no, actually, I’m not) and settle in for a good read. Not only will this book _not_ answer your question, it will leave you more frustrated than before. This is what is so great about it! Feynman pulls no punches in describing how difficult and deep your question truly is. Yet bizarrely, by the end of his book he will nonetheless have given you the ability to calculate, in principle at least, exactly how many photons will “decide” to do one or the other, for any imaginable experimental setup.

    Feynman also points out that even Isaac Newton pondered your question and realized how profound it is — a remarkable achievement for someone who lived hundreds of years before quantum mechanics came into being.

    And if you want to know how Newton managed to contemplate such absorption probabilities for a particle that was not known to exist until Einstein postulated it (his Nobel Prize was for that work, not relativity)… why, then, read QED! (And no, I don’t get a cut, I just like the book a lot.)

    Cheers,
    Terry Bollinger

  • Lawrence B. Crowell

    Terry Bollinger: “imply that a glass of drinking water contains molecules that are “stitched together” by networks of photon and possibly phonon exchanges… ”

    That might happen with ice.

    The problem is that for this sort of physics to take place it would have to involve the quasi-crystalline structure of polypeptides. Of course biology is not compatible with gamma rays. There is phonon physics associated with how replicase moves along a DNA strand. The ATP-to-ADP energy exchange with each step causes a quantum of vibration to move along the 5′-3′ strand, which by recoil bumps the replicase to the next nucleotide. Okazaki fragments for the 3′-5′ strand replication are put together by more standard chemical processes.

    Polypeptides are quasi-crystalline(-like) structures. In fact DNA has a 10-nucleotide-per-2π twist in the A and B conformational forms, and this is also mirrored in the dihedral angles of some polypeptides. I think this has some connections with fractal geometry and chaos theory, which, if there are quantum aspects to their physics, leads to a huge area largely not well known. I could go on about this at considerable length, but work and time (and this is election day) preclude that possibility for now.

    Lawrence B. Crowell

  • http://terrybollinger.com/ Terry Bollinger

    Quick comments: Your ice idea is interesting! It is also closer to traditional Mossbauer, which also uses solids.

    If you are suggesting that liquid-to-solid phase transitions in general could be interpretable as including coherent Bose-Einstein phonon condensation components… that would be an interesting alternative on how to view crystallization, and certainly not one I ever recall bumping into.

    Here’s a bit of elaboration on that idea, using brainstorming mode (by which I mean exploring the concept space, but not yet attempting to quantify or disprove the theorem): Crystal faces are macroscopic results of nanoscale assembly, with ratios of emergent sizes to creation component sizes that are truly astronomical. Could these emergent features be coordinated in part by the unrecognized existence of large-scale phonon Bose-Einstein condensates during the crystallization process?

    For example, the assembly components that generate natural beryl crystals are in the Angstrom range — beryllium, aluminum, silicon, and oxygen — yet large natural crystals of beryl can have very flat faces on the order of a meter across. That means a highly parallelized atomic-level crystallization process can easily generate well-defined emergent structures with features 10 orders of magnitude larger.

    A comparison: This is roughly the same as 1 millimeter ants paving all of Asia, Europe, and Africa with a platform that remains level over that entire area during the construction process. Pretty decent group coordination, that!
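    The arithmetic behind the analogy checks out; a quick sanity check:

```python
# Scale check: beryl building blocks (~1 angstrom) vs. a ~1 m crystal face
# is 10 orders of magnitude; applying the same ratio to a 1 mm ant gives
# a paved span about the width of Asia-Europe-Africa.
angstrom = 1e-10
face = 1.0                 # crystal face, m
ratio = face / angstrom
print(f"size ratio: {ratio:.0e}")

ant = 1e-3                 # 1 mm ant
paved = ant * ratio        # same ratio applied to the ant, in m
print(f"paved span: {paved / 1000:.0f} km")
```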

    (Quick critique: Extremely high relative reaction rates at the layer-addition shelf edges versus the flat surfaces may be sufficient to explain the planar faces. Thus the quick phonon idea could possibly be whacked away using Occam’s Razor and some good reaction-rate data.)

    (Quick counter: The reaction data may inadvertently _include_ hidden coherent phonon effects that have never been recognized, and thus never adequately analyzed. Familiarity and an unexamined assumption that “this is all well known stuff” could be hiding an interesting phenomenon that has not been adequately examined or quantified.)

    Brainstorming mode again: Water should be largely transparent to microwaves, since when it is hot it gives off radiation that is mostly in the much higher infrared range (the heat one feels when you put your hand near, but not directly over (that’s steam), a hot cup of coffee). Why do microwaves then heat water so well? To put the issue in terms of an analogy, microwaves heating water to the point where it gives off infrared radiation is a bit like shining an infrared heat lamp on a piece of coal and causing the coal to give off blue light. There’s a definite “upping of frequencies” that at first approximation is a bit hard to explain.

    Theorem: The microwaves are actually interacting with a patchy network of Bose-Einstein phonon condensates. Many of these condensates include sufficiently large total masses of water molecules that they resonate easily with the comparatively low-frequency microwaves. The heating simultaneously causes the condensates to break down, resulting in molecular-level “pieces” (water molecules) whose vibration frequencies are much higher, in the infrared range.

    Your comments on long-range order: 1D and 2D constrained systems should encourage Bose-Einstein condensation. Some of Peierls’s early work (he actually got a lot of that from a German fellow whose name escapes me at the moment) on 1D effects that lead to alternating single-like and double-like bonds in long polymer chains (they are actually quasiparticle bonds composed of Fermi sea waves, and are _not_ really localized electrons) comes to mind, although that is mostly in the fermion domain.

    Cheers,
    Terry

  • Lawrence B. Crowell

    Microwaves heat water because they are resonant with lots of tightly spaced vibrational modes of the molecule’s dipole moment. The H_2O molecule appears as a “Mickey Mouse” head-like structure, where the hydrogens form the “ears.” There are two filled p orbitals that stick out in the opposing directions, giving rise to a tetrahedral-like structure. In this case two of the vertices are positively charged, where the H-atoms are, and the other two vertices (p-orbitals) are negatively charged. The oxygen sits near the barycenter of the tetrahedron. A microwave field will then interact with this system as two dipoles or a net quadrupole, which causes the vertices to oscillate and the tetrahedron is deformed by being periodically squashed and distended in resonance with the microwave field. So each atom is vibrating in response to the field and they collide with each other, converting this vibrational energy to translational energy in the motion of the molecules. Statistically this then heats up water.

    It is one reason that ice is harder to heat in a microwave oven. Since the molecules are bound in a crystalline lattice, the conversion of vibrational energy to translational energy is less efficient. As a result the H_2O molecules saturate quickly with vibrational energy and do not absorb as much microwave energy. For this reason the defrost cycle on microwave ovens is a lower setting, turning the magnetron on and off so the fields in the cavity don’t feed back too much. The magnetron is feathered to give the ice more time to thermalize its vibrational energy.
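    For a rough feel of the numbers in ordinary dielectric heating (no condensates needed): the standard loss formula P = 2*pi*f*eps0*eps''*E_rms^2, with assumed round values for water’s loss factor and the oven field strength:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m
f = 2.45e9        # microwave oven frequency, Hz
eps_loss = 10.0   # imaginary part of water's relative permittivity (assumed round value)
E_rms = 1.0e3     # RMS field inside the oven cavity, V/m (assumed)

# Volumetric dielectric heating and the implied temperature rise rate
power_density = 2 * math.pi * f * EPS0 * eps_loss * E_rms**2   # W/m^3
heating_rate = power_density / (1000.0 * 4186.0)               # K/s, liquid water

print(f"power density: {power_density / 1e6:.2f} MW/m^3")
print(f"heating rate:  {heating_rate:.2f} K/s")
```

    For ice, the loss factor near 2.45 GHz is orders of magnitude smaller than for liquid water, which is consistent with the point that ice heats poorly and defrost cycles must feather the power.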

    The idea for the microwave oven came out of radar work during WWII, when large antennas tended to collect lots of dead birds. The messy problem turned out to occur because the birds sat on the antennas and got cooked.

    I am a bit of a maven for polytopic geometry. I highly recommend Coxeter’s book on convex polytopes. Then if you want to really grab this business by the horns, Conway & Sloane’s “Sphere Packings, Lattices and Groups” is recommended. This book gives a decent account of lattice systems, such as E_8 and the Leech lattice, and the systems of quaternions these imply. This then leads up to the Conway & Fischer-Griess group called the “Monster.”

    I think these structures are involved with quantum gravity, or in the lattice-tessellation of spacetime and AdS space. The vertices of the lattice system are roots of the gauge group. It is a sort of solid state physics analogue to gauge theory and gravity. I will leave things at this point, lest I be accused of “theory mongering.”

    The occurrence of large crystals is largely a matter of energetics. In an adiabatic situation atoms will align into crystals because that is most energetically favorable. It is interesting to note that selenite crystals of truly astounding proportions were found in a mine in Mexico. There are pictures of cave explorers literally crawling on them and rappelling off them.

    Lawrence B. Crowell

  • http://tyrannogenius.blogspot.com Neil B ?

    This would be a great place to repost an adaptation of my comment from Uncertain Principles. Chad Orzel brought up the issue of Warnock’s Dilemma in a recent thread, http://scienceblogs.com/principles/2008/11/links_for_20081110.php. Wikipedia, the free encyclopedia:
    “[T]he problem of interpreting a lack of response to a posting on a mailing list, Usenet newsgroup, or Web forum. It occurs because a lack of response does not necessarily imply that no one is interested in the topic, and could have any one of several different implications, some of which are contradictory. Commonly used in the context of trying to determine why a post has not been replied to, or to refer to a post that has not been replied to.”

    My response is below, adapted to the current thread. BTW the discussion in the thread “What’s the Matter with Making Universes?” is directly pertinent to the one here, why not get some word in there too?

    I propose Bates’ Corollary to Warnock’s Dilemma: the problem of interpreting a lack of response to a comment in a thread and not just a post. I also propose Bates’ Ancillary Dilemma: why do respondents (repliers? sorry) address some of the key points made in a post or comment, and not others – even if the poster/commenter pleads or insists, and even repeatedly, that the unanswered points are relevant or even more relevant? I have in mind, in the thread http://scienceblogs.com/principles/2008/11/whats_the_matter_with_making_u.php#commentsArea, that [no one here AFAICT] would address my concern about why “collapse” (or whatever) happens so far downchain in the interaction of say a photon, instead of earlier. In particular, why doesn’t the interaction with an initial beamsplitter cause a photon to just collapse and go one way or the other, instead of indeed “splitting” the single photon wave to enable subsequent interference. But then, at the detectors at the far end of the MZ interferometer etc., there is a “hit” at one or the other detector. Er, maybe if [anyone, such as LBC, Terry B?] is reading this comment, you could reply to that question? I thank you in advance for your cooperation ;-) .

    BTW Lawrence, I can be hard on people putting forth what I consider contrived and rationalized attempts to solve problems, which I still think fairly characterizes “decoherence” as a putative explanation of collapse in general, or even of “apparent collapse” (whatever that means). But I do not think you or others are arguing in bad faith or anything like that. I think you just feel too attached to a false hope that is alluring because it seems to resolve a vexing issue, and because the vagaries of meaning in talking about wave amplitudes, probabilities, etc., lend themselves to misdirection and contrivance. BTW’, the whole idea of “entanglement” is that the mingled photon states literally don’t have a definite polarization in any individual sense; the polarization is only established as a correlation upon later measurement. Hence I don’t see how entanglement can become a model or metaphor for collapse in general, which usually involves definite wavefunctions (such as 20-degree linear polarization as produced) collapsing into x or y, etc.

  • http://terrybollinger.com/ Terry Bollinger

    =========================================================

    A Quick Visual Intro to QED
    Terry Bollinger – 2008-11-23

    – Part 1 of 2 –

    1. The Question: Why Waves Here, and Particles There?

    On November 10, Neil Bates asked a difficult question as part of the Discover Magazine “Quantum Hyperion” physics thread. My paraphrasing of his question is this:

    “Why does a photon behave like a wave when it encounters a beam splitter, but like a particle when it encounters a particle detector?”

    Below is my attempt to answer this question. Since this will be a bit long, I’ll break it up into two parts.

    The first part (this one) deals with the mystery of the coupling constants, or what I refer to as the roulette wheels down at the bottom of quantum mechanics.

    In the second part I’ll discuss the clockwork photon. This is my adaptation, with a few visualization updates, of Feynman’s explanation of QED. My goal in Part 2 is to show how geometry transforms the simple probabilities of coupling constants into the richness of the physical world that we see all around us.

    To make Part 2 more specific, I’ll include a thought experiment in which a single material, silver, both reflects a photon as if it were a wave, and in another part of the apparatus absorbs it as if it were a particle. Using a single material for both components emphasizes the critical role that geometry plays in understanding quantum mechanics.

    2. The Roulette Wheel at the Bottom

    The best non-mathematical reference to the question of why quantum mechanics sometimes gives wave-like results and sometimes particle-like results is, without qualification, Richard Feynman’s “QED: The Strange Theory of Light and Matter.” I recommend it highly for anyone interested in the more mysterious aspects of how quantum mechanics works.

    In his book, Feynman quickly informs the reader without apology that he will not try to explain why reality is ultimately probabilistic. His reason is simple: Although quantum mechanics enables very accurate predictions of how particles such as electrons and photons will interact when in large groups, there is no accepted theoretical explanation for the ultimate source of the probabilities that are intrinsic to such models.

    An analogy is that there is a sort of roulette wheel at the very bottom of the physics of electrons and photons. This wheel is spun every time we ask a question about a specific electron and a specific photon, but the details of its construction remain a complete mystery to us to this day.

    (To be complete, I should mention that there are actually several such roulette wheels in physics, which are collectively known as “coupling constants.” Only the coupling constant for electrons and photons has much impact on everyday physics, however, so that is the only constant I will discuss here.)

    Spinning the roulette wheel for an electron and a photon results in one of two outcomes: “interact” or “ignore.” (A warning: There are some complications in how these values are used. I’ll describe those complications later, in Part 2.)

    For photons and electrons, an accident of physics history saddled the corresponding roulette wheel with the highly uninformative name of “fine structure constant.” Fortunately, there is another name for it that is much more intuitive: It is the charge of an electron, expressed in certain universal units.
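    Those “certain universal units” can be made concrete. In SI units, the fine structure constant is α = e²/(4πε₀ħc), built from the electron’s charge e and three universal constants. As a quick numerical check (my own illustration, not part of the original comment), plugging in the standard CODATA values recovers the famous 1-in-137 odds:

```python
import math

# Standard CODATA values in SI units.
e    = 1.602176634e-19   # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 299792458.0       # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Fine structure constant: alpha = e^2 / (4 * pi * eps0 * hbar * c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(1 / alpha)  # ~137.036 -- the "1 in 137" of the roulette wheel
```

    This is why specifying e (given the other three constants) and specifying α amount to the same thing.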

    Specifying how much electrical charge an electron has is thus just another way of describing the odds that the electron will interact with a passing photon. A point particle with no charge would ignore such a photon entirely, since its roulette wheel would be rigged only with slots marked “ignore.” Such a particle does exist. It is called the neutrino, and it is rigged in just this way. Because a neutrino cannot see photons, it passes through ordinary matter pretty much as if the matter weren’t even there.

    The roulette wheel that corresponds to the charge of an electron has a surprisingly small number of “interact” slots. The odds are about 1 in 137, or less than 1%. This small probability is nonetheless just the right size to give rise to all of the remarkable complexity that we see and interpret as non-nuclear physics and chemistry.
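    As a toy sketch of this picture (my own illustration, not from the original comment), one can literally simulate the roulette wheel: spin it once per electron-photon encounter with an “interact” probability of 1/137, and over many spins the observed interaction fraction settles near that same 1-in-137 figure:

```python
import random

INTERACT_ODDS = 1.0 / 137.0  # roughly the fine structure constant

def spin_wheel(rng):
    """One spin: does this electron-photon encounter interact or ignore?"""
    return "interact" if rng.random() < INTERACT_ODDS else "ignore"

rng = random.Random(42)  # fixed seed so the toy run is reproducible
trials = 1_000_000
hits = sum(spin_wheel(rng) == "interact" for _ in range(trials))

print(hits / trials)  # hovers near 1/137, i.e. about 0.0073
```

    The toy captures only the bare probabilistic bottom layer; everything else in Part 2, per the text above, comes from how geometry combines these spins.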

    Finally, I cannot emphasize enough that the underlying design of these roulette wheels — these coupling constants of standard physics — is unknown. Attempts to postulate “gears and wheels” to explain these probabilities always seem instead to end up adding complexity without adding any new insights — the sure sign of a bad theory. This is one of those interesting cases in physics where the most mathematically abstract model, in this case a simple probability function, stubbornly remains the best one available. This is true both in terms of overall simplicity, and in terms of its ability to produce verifiable experimental predictions. The probabilistic nature of coupling constants thus remains a true mystery, one into which the physics of Feynman’s time (and I would argue ours also) produced no significant insights.

    3. Charge and the Anthropic Principle

    I should mention that this seemingly arbitrary setting of the photon-electron roulette wheel at 1 in 137 is quite special in some unexpected ways. For example, if you raised it to 1 in 135 or lowered it to 1 in 138, it’s a pretty good bet we would not be having this dialog. The problem is that the ability of carbon to form indefinitely long chains is closely linked to this number, and if you change it even slightly, organic chemistry would likely stop working well enough to support the existence of constructions such as the proteins necessary for life.

    As it turns out, pretty much all of the fundamental constants of physics seem to work that way. That is, if you make these seemingly arbitrary numbers just a little larger or a little smaller, you still get a universe of some sort, but one that no longer supports organic life as we know it. Or to put it a bit more graphically, nudging fundamental constants is a lot like kicking the foot of a juggler who has ten plates and twelve hoops all spinning at once: Everything comes tumbling down.

    This curious link between fundamental physics and life-supporting organic chemistry is called the anthropic principle, and it is one of the most fascinating mysteries of current physics. It is a topic for another time, however. I just did not want to leave an incorrect impression that the value of the electron charge could have been set arbitrarily to almost any value. It is instead fine-tuned in ways that are unexpected and deeply interwoven with the other fundamental constants of physics. Developing a full and convincing explanation of this fine-tuning constitutes one of the great ongoing challenges of fundamental physics.

    – End of Part 1 –

    =========================================================

Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Here are some of his favorite blog posts, home page, and email: carroll [at] cosmicvariance.com .
