Avignon Day 3: Reductionism

By Sean Carroll | April 21, 2011 3:40 am

Every academic who attends conferences knows that the best parts are not the formal presentations, but the informal interactions in between. Roughly speaking, the perfect conference would consist of about 10% talks and 90% coffee breaks; an explanation for why the ratio is reversed for almost every real conference is left as an exercise for the reader.

Yesterday’s talks here in Avignon constituted a great overview of issues in cosmological structure formation. But my favorite part was the conversation at our table at the conference banquet, fueled by a pretty darn good Côtes du Rhône. After a long day of hardcore data-driven science, our attention wandered to deep issues about fundamental physics: is the entire history of the universe determined by the exact physical state at any one moment in time?

The answer, by the way, is “yes.” At least I think so. This certainly would be the case in classical Newtonian physics, and it’s also the case in the many-worlds interpretation of quantum mechanics, which is how we got onto the topic. In MWI, the entirety of dynamics is encapsulated in the Schrodinger equation, a first-order differential equation that uniquely determines the quantum state in the past and future from the state at the present time. If you believe that wave functions really collapse, determinism is obviously lost; prediction is necessarily probabilistic, and retrodiction is effectively impossible.
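For concreteness, here is the standard textbook statement being invoked (a sketch in conventional notation, nothing specific to the discussion at the table):

```latex
% The Schrodinger equation is first order in time:
i\hbar\,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle
% For a time-independent Hamiltonian, the formal solution is a unitary map:
\lvert\psi(t)\rangle = e^{-i\hat{H}(t-t_0)/\hbar}\,\lvert\psi(t_0)\rangle
% Unitary operators are invertible, so the state at any one time t_0
% fixes the state at every other time, past and future alike.
```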

But there was a contingent of physicists at our table who were willing to believe in MWI, but nevertheless didn’t believe that the laws of microscopic quantum mechanics were sufficient to describe the evolution of the universe. They were taking an anti-reductionist line: complex systems like people and proteins and planets couldn’t be described simply by the Standard Model of particle physics applied to a large number of particles, but instead called for some sort of autonomous description appropriate at macroscopic scales.

No one denies that in practice we can never describe human beings as collections of electrons, protons, and neutrons obeying the Schrodinger equation. But many of us think that this is clearly an issue of practice vs. principle; the ability of our finite minds to collect the relevant data and solve the relevant equations shouldn’t be taken as evidence that the universe isn’t fully capable of doing so.

Yet, that is what they were arguing — that there was no useful sense in which something as complicated as a person could, even in principle, be described as a collection of elementary particles obeying the laws of microscopic physics. This is an extremely dramatic ontological claim, and I have almost no doubt whatsoever that it’s incorrect — but I have to admit that I can’t put my objections into a compact and persuasive form. I’m trying to rise above responding with a blank stare and “you can’t be serious.”

So, that’s a shortcoming on my part, and I need to clean up my act. Why shouldn’t we expect truly new laws of behavior at different scales? (Note: not just that we can’t derive the higher-level laws from the lower-level ones, but that the higher-level laws aren’t even necessarily consistent with the lower-level ones.) My best argument is simply that: (1) that’s an incredibly complicated and inelegant way to run a universe, and (2) there’s absolutely no evidence for it. (Either argument separately wouldn’t be that persuasive, but together they carry some weight.) Of course it’s difficult to describe people using Schrodinger’s equation, but that’s not evidence that our behavior is actually incompatible with a reductionist description. To believe otherwise you have to believe that somewhere along the progression from particles to atoms to molecules to proteins to cells to organisms, physical systems begin to violate the microscopic laws of physics. At what point is that supposed to happen? And what evidence is there supposed to be?

But I don’t think my incredulity will suffice to sway the opinion of anyone who is otherwise inclined, so I have to polish up the justification for my side of the argument. My banquet table was full of particle physicists and cosmologists — pretty much the most sympathetic audience for reductionism one can possibly imagine. If I can’t convince them, there’s not much hope for the rest of the world.

CATEGORIZED UNDER: Science, Travel
  • Anchor

    I agree entirely with Sean. Those at that table advocating ‘something else’ or ‘something more’ for macroscopic complexity have the burden of showing that any microscopic laws of physics are violated, or that those laws otherwise surrender their sufficient role in emergent complexity to ‘whatever else it is’; in other words, WHY the microscopic laws of physics (known and as yet unknown by us) aren’t enough. AND they have to identify – at least describe – what ‘it’ is supposed to be.

    The notion is nothing but speculation. Worse, it has the strong odor of intelligent design about it. Just another fantasy-trip that postulates the existence of a phenomenon or principle simply because we DON’T understand exactly how it all works in every particular.

  • Bob McElrath

    To misquote Rutherford, “Reductionism is the only real science. The rest are just stamp collecting.”

  • Pieter Kok

    I am not sure about emergent laws that are inconsistent with the Standard Model, but consider the following: In QFT the propagator of a massive scalar particle decays exponentially for spacelike separations. While it is thus technically never zero (and therefore would allow some form of superluminal signalling), we are OK with that because the decay is exponential and any finite-size signal quickly requires unmanageable amounts of resources. It is the exponential scaling that makes this palatable.
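    Concretely, the standard free-field estimate (quoted from memory; conventions vary between texts):

    ```latex
    % Two-point function of a free scalar of mass m at spacelike
    % separation, with r = |x - y|:
    \langle 0 \rvert\, \phi(x)\,\phi(y) \,\lvert 0 \rangle \;\sim\; e^{-mr}
    \qquad \text{for } r \gg 1/m ,
    % never exactly zero at finite r, but exponentially suppressed
    % beyond the Compton wavelength 1/m.
    ```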

    Similarly, the explanation of an emergent law in terms of the Standard Model requires some algorithm (how else would you characterize an explanation with sufficient abstraction?). To me, it is not sufficient to say that the Standard Model explains emergent law X if all we know is that such an algorithm exists. We must also be able to follow the algorithm step by step, and this is where the computational complexity of the algorithm comes in. Just as in the case of the propagator between spacelike separated events, if the algorithm has exponential complexity the explanation is intractable, and therefore not really an explanation at all. In that sense I consider myself not a reductionist.

  • http://catastrophicforgetting.blogspot.com/ T.

    Great to hear of this behind-the-scenes discussion, thanks Sean. Turns out I was in the Palais des Papes just 2 weeks ago for the first time. I wouldn’t have thought of holding a scientific conference there. With hindsight the PONT organisers have got a point there (hope they restored the “high kitchen” for the occasion). Two points:

    1. “an anti-reductionist line: complex systems like people and proteins and planets couldn’t be described simply by the Standard Model of particle physics applied to a large number of particles, but instead called for some sort of autonomous description appropriate at macroscopic scales.”

    I thought planets indeed had such an autonomous description, the evolution of which was governed precisely by a set of macroscopic-scale rules called general relativity.
    Would you say that reconciling quantum physics with GR would put a nail in the intellectual coffins of your anti-reductionist colleagues? If yes, then on the other hand the fact that there are apparently deep contradictions between these theories might count as evidence for anti-reductionism..

    2. “If I can’t convince them, there’s not much hope for the rest of the world.” That’s exaggerated I think: this crowd presumably also has higher standards of argumentation/skepticism. Maybe there’s no other way to know but to write the book.

    Anecdote: at the entrance of the palais, just after the cashiers, one can see “e pluribus unum” inscribed on the stone ceiling. Capitol Hill appears to have borrowed something from Avignon.

  • http://wavefunction.fieldofscience.com Curious Wavefunction

    Sure, but ultimately I don’t think that reductionism ‘in principle’ is really consequential for sciences like biology and economics which rely extensively on model building. This is related to the debate Freeman Dyson had with Steven Weinberg in the 90s. If interested, for more see my post Dirac, Bernstein, Weinberg and the limits of reductionism.

  • Ben Maughan

    I also agree with Sean’s reductionist view, but the part that really fascinates me is how we connect (even in principle) the microscopic laws to our experience of free will. I’m no expert in this area, but I enjoy turning it over in my mind – can one reconcile this reductionist view with free will, or must we believe in a fully deterministic Universe, in which all of our decisions are predetermined by the microscopic laws?

  • Markk

    I think your position is inconsistent with the holographic principle and/or information rules. You are saying that all the information in the entire universe for all time is inside a space-time surface (“now”, defined somehow). To me if that is true then the holographic principle must be false – there is more information inside that surface than the HP says there can be. I also don’t know what you mean by “one moment in time” related to the universe. How are you defining that given GR? The wave function of the universe does not make sense in a GR context does it? That is one reason they are inconsistent right? What you say to me isn’t a statement that could be right or wrong without some more definition.

    On the information side, isn’t what you are saying a hidden variable argument? In many worlds we never know which world i.e which experimental result we will get – in principle – and so this is information which we (and no-one or nothing anywhere) can have.

    Those are my initial thoughts.

  • Joshua

    I think one can adhere to strict Copenhagen-ism and still be anti-reductionist. Saying that a superposed waveparticle’s wavefunction collapses 50% of the time to A and 50% of the time to B is just as reductionist as saying that 50% of the worlds are A and 50% of the worlds are B. The problem is that people think that reductionism answers the “Why?” to determinism’s “How?” I think that’s an entirely too teleological approach to the questions. Reductionism simply says that all processes are reducible to physical laws. You don’t get to complain that the physical law lacks a degree of teleology simply because you want there to be some. If you observe the waveparticle to be B and demand to know “Why not A?” that’s a question that is totally independent of reductionism.

  • Mike

    I don’t think there’s a problem between accepting the MWI and not accepting Reductionism as the whole answer. Here is a quote from David Deutsch, surely one of today’s strongest proponents of the MWI, regarding Reductionism:

    “A reductionist thinks that science is about analyzing things into components. An instrumentalist thinks that it is about predicting things. To either of them, the existence of high-level sciences is merely a matter of convenience. Complexity prevents us from using fundamental physics to make high-level predictions, so instead we guess what those predictions would be if we could make them– emergence gives us a chance of doing that successfully– and supposedly that is what the higher-level sciences are about. Thus to reductionists and instrumentalists, who disregard both the real structure and the real purpose of scientific knowledge, the base of the predictive hierarchy of physics is by definition the ‘theory of everything.’ But to everyone else scientific knowledge consists of explanations, and the structure of scientific explanations does not reflect the reductionist hierarchy. There are explanations at every level of hierarchy. Many of them are autonomous, referring only to concepts at that particular level (for instance, ‘the bear ate the honey because it was hungry’). Many involve deductions in the opposite direction to that of reductive explanation. That is, they explain things not by analyzing them into smaller, simpler things but by regarding them as components of larger, more complex things– about which we nevertheless have explanatory theories.

    For example, consider one particular copper atom at the tip of the nose of the statue of Sir Winston Churchill that stands in Parliament Square in London. Let me try to explain why that copper atom is there. It is because Churchill served as prime minister in the House of Commons nearby; and because his ideas and leadership contributed to the Allied victory in the Second World War; and because it is customary to honor such people by putting up statues of them; and because bronze, a traditional material for such statues, contains copper, and so on. Thus we explain a low-level physical observation– the presence of a copper atom at a particular location– through extremely high-level theories about emergent phenomena such as ideas, leadership, war and tradition.

    There is no reason why there should exist, even in principle, any lower-level explanation of the presence of that copper atom than the one I have just given. Presumably an active ‘theory of everything’ would in principle make a low-level prediction of the probability that such a statue will exist, given the condition of (say) the solar system at some earlier date. It would also in principle describe how the statue probably got there. But such descriptions and predictions (wildly infeasible, of course) would explain nothing. They would merely describe the trajectory that each copper atom followed from the copper mine, through the smelter and the sculptor’s studio, and so on. They could also state how those trajectories were influenced by forces exerted on surrounding atoms, such as those comprising the miners’ and the sculptor’s bodies, and so predict the existence and shape of the statue. In fact such a prediction would have to refer to atoms all over the planet, engaged in the complex motion we call the Second World War, among other things. But even if you had the superhuman capacity to follow such lengthy predictions of the copper atom’s being there, you would still not be able to say, ‘Ah yes, now I understand why it is there.’ You would merely know that its arrival there in that way was inevitable (or likely, or whatever), given all the atoms’ initial configurations and the laws of physics. If you wanted to understand why, you would still have no option but to take a further step. You would have to inquire into what it is about that configuration of atoms, and those trajectories, that gave them the propensity to deposit a copper atom at this location. Pursuing this inquiry would be a creative task, as discovering new explanations always is. You would have to discover that certain atomic configurations support emergent phenomena such as leadership and war, which are related to one another by high-level explanatory theories. Only when you knew those theories could you understand fully why that copper atom is where it is.

    In the reductionist world-view, the laws governing subatomic particle interactions are of paramount importance, as they are the base of the hierarchy of all knowledge. But in the real structure of scientific knowledge, and in the structure of our knowledge generally, such laws have a much more humble role.”

  • Jason Dick

    Yes, to me, Sean, this is positively insane. Here is my short argument as to why this anti-reductionism is insane:

    If macroscopic systems behave in some manner that cannot (even in principle) be derived from the microscopic laws of physics, then that means that at least some small subsets of that macroscopic system must not be following the microscopic laws of physics. Because if all of the particles that make us up are, individually, behaving based upon the microscopic laws of physics, then by definition the total behavior is described by those same microscopic laws applied to a larger system.

    So what they are asking for is pretty absurd on its face: microscopic laws of physics that behave differently (and not just slightly) depending upon whether an atom is part of a larger configuration or not. This predicts that we should be able to examine progressively larger and more complex systems, and at some point find one where the individual parts of the larger system no longer follow microscopic laws.

    But here’s the kicker: if this happens, then it means that the microscopic behavior is being affected in a way that the interactions in the microscopic laws of physics that we know do not describe. Thus this is a statement that the microscopic laws of physics are wrong, and we aren’t taking into account the effect of some interaction or other with other particles. So we could, if we had perfect knowledge, write down new microscopic laws of physics that take these interactions into account, and have the same laws that describe our universe at all levels.

    Thus, fundamentally, what they are arguing for is a standard model that is wrong, and that extra long-distance interaction terms have to be incorporated to correct it. This seems rather absurd to me.

  • Dave

    Yeah, I’m definitely confused. How do you know which world you are going to end up in the MWI?

  • Braden B.

    “My best argument is simply that: (1) that’s an incredibly complicated and inelegant way to run a universe, and (2) there’s absolutely no evidence for it.”

    Your second claim is arguably incorrect. In “More really is different” (Physica D: Nonlinear Phenomena, Volume 238, Issues 9-10, 15 May 2009, Pages 835-839 or here on the arxiv: http://arxiv.org/abs/0809.0151), Gu et al. demonstrate that there indeed exist systems for which macroscopic observables cannot be computed, even in principle, from the microscopic state of the system. In particular, they study the infinite periodic Ising lattice, and show that in the ground state even quantities such as magnetization, correlation length, finite-range correlations or the zero temperature partition function cannot be computed from knowledge of the microscopic Hamiltonian.

    There is the caveat that this is an infinite system, but much of our theoretical understanding of Nature comes from studying infinite systems or the continuum limit. Our theoretical understanding of phase transitions relies on studying infinite systems, for instance.

    So, this result provides some evidence for the claim that it may not be possible, even in principle, to compute all macroscopic properties of a system from knowledge of the microscopic properties.

  • Mike

    Jason,

    I agree with your comment. However, the question of whether macroscopic systems behave in some manner that cannot (even in principle) be derived from the microscopic laws of physics is only one level of explanation. To my way of thinking, the other important question is whether, once the behavior of physical systems is understood in this way (even in principle), we are left with a complete explanation of what has occurred and why. I think that was the point Deutsch was getting at.

  • Mike

    Dave,

    “How do you know which world you are going to end up in the MWI?”

    If one accepts the MWI, the short answer is: in all of the worlds initially correlated with you, and in fewer of them over time.

  • http://scientopia.org/blogs/galacticinteractions Rob Knop

    Accepting MWI is already a stretch…. I have to admit that while scientifically it’s in the “I dunno” category, I believe (suspect?) that the Universe is fundamentally not deterministic, but stochastic. I don’t know if that makes me a Copenhagenist or not, but that’s my suspicion.

    However, that’s a minor quibble on the larger issue, which is whether emergent behaviors at larger scales (which indubitably exist) are not even **in principle** derivable from the laws of physics at smaller scales.

    There’s a deeper issue about the philosophy of what science is, though. Science aims at describing physical reality. But, if we’re to be honest, what we’re doing is making models that allow us to make predictions about physical reality. Do we really know that we’re right? In fact, we know that we’re not right, because for our fundamental theories, we can find regimes where they don’t work. Does that mean that in principle we can’t come up with a theory that does work everywhere? No… but at the moment, we certainly don’t have one.

    Given that, the MWI vs. Copenhagen thing becomes something of a red herring. The real result of quantum mechanics is that we can predict the probabilities for the results of (say) an electron spin experiment, but not what that spin will be measured to be. Whether that’s because there’s wavefunction collapse and the Universe is stochastic, or because the Universe splits, in some sense doesn’t matter. Each “you” (if there is more than one) measures a given spin, and there’s no way to figure out ahead of time which one that “you” is going to measure. Whether the Universe is MWI and exploring all possible outcomes, or whether it’s Copenhagen and performing an ongoing Monte Carlo experiment, in the end quantum mechanics really is just a mathematical model that does a wonderful job for us of calculating probabilities for the results of experiments (where “experiments” include any physical interaction, not just things done in a lab by people in white coats).

    Given that our theories are mathematical models, and that they all admittedly have a range of application, one could argue that it’s pure philosophical bias to assume that one will always be able to derive the theories that describe the behavior of macroscopic systems from the theories that describe the behavior of microscopic systems. Unless you really believe that your theory is Truth, instead of an extremely useful mathematical model, there’s no reason to suppose that that even *should* be possible.

  • Mike

    Rob,

    Although you’re clearly not a MWI fan, here are a few words from Deutsch on prediction and instrumentalism:

    “Our best theory of planetary motions is Einstein’s general theory of relativity, which, in the early twentieth century, superseded Newton’s theories of gravity and motion. It correctly predicts, in principle, not only all planetary motions but also all other effects of gravity to the limits of accuracy of our best measurements. For a theory to predict something “in principle” means that as a matter of logic the predictions follow from the theory, even if in practice the amount of computation that would be needed to generate some of the predictions is too large to be technologically feasible, or even too large to be physically possible in the universe as we find it.

    Being able to predict things, or to describe them, however accurately, is not at all the same thing as understanding them. Predictions and descriptions in physics are often expressed as mathematical formulae. Suppose that I memorise the formula from which I could, if I had the time and inclination, calculate any planetary position that has been recorded in the astronomical archives. What exactly have I gained, compared with memorising those archives directly? The formula is easier to remember – but then, looking a number up in the archives may be even easier than calculating it from the formula. The real advantage of the formula is that it can be used in an infinity of cases beyond the archived data, for instance to predict the results of future observations. It may also state the historical positions of the planets more accurately, because the archives contain observational errors. Yet, even though the formula summarises infinitely more facts than the archives do, it expresses no more understanding of the motions of the planets. Facts cannot be understood just by being summarised in a formula, any more than by being listed on paper or memorised in a brain. They can be understood only by being explained. Fortunately, our best theories contain deep explanations as well as accurate predictions. For example, the general theory of relativity explains gravity in terms of a new, four-dimensional geometry of curved space and time. It explains how, precisely and in complete generality, this geometry affects and is affected by matter. That explanation is the entire content of the theory. Predictions about planetary motions are merely some of the consequences that we can deduce from the explanation.

    Moreover, what makes the general theory of relativity so important is not that it can predict planetary motions a shade more accurately than Newton’s theory can. It is that it reveals and explains previously unsuspected aspects of reality, such as the curvature of space and time. This is typical of scientific explanation. Scientific theories explain the objects and phenomena of our experience in terms of an underlying reality which we do not experience directly. But the ability of a theory to explain what we experience is not its most valuable attribute. Its most valuable attribute is that it explains the fabric of reality itself. As we shall see, one of the most valuable, significant and also useful attributes of human thought generally, is its ability to reveal and explain the fabric of reality.

    Yet some philosophers, and even some scientists, disparage the role of explanation in science. To them, the basic purpose of a scientific theory is not to explain anything, but to predict the outcomes of experiments: its entire content lies in its predictive formulae. They consider any consistent explanation that a theory may give for its predictions to be as good as any other, or as good as no explanation at all, so long as the predictions are true. This view is called instrumentalism (because it says that a theory is no more than an “instrument” for making predictions). To instrumentalists, the idea that science can enable us to understand the underlying reality that accounts for our observations, is a fallacy and a conceit. They do not see how anything that a scientific theory may say beyond predicting the outcomes of experiments can be more than empty words. Explanations, in particular, they regard as mere psychological props: a sort of fiction which we incorporate in theories to make them more memorable and entertaining. The Nobel prize-winning physicist Steven Weinberg was in an instrumentalist mood when he made the following extraordinary comment about Einstein’s explanation of gravity:

    “The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effects of gravitational fields on the motion of planets and photons [as in pre-Einsteinian physics] or to a curvature of space and time.”

    Weinberg and the other instrumentalists are mistaken. It does matter what we ascribe the images on astronomers’ photographic plates to. And it matters not only to theoretical physicists like myself, whose very motivation for formulating and studying theories is the desire to understand the world better. (I am sure that this is Weinberg’s motivation too: he is not really driven by an urge to predict images and spectra!) For even in purely practical applications, the explanatory power of a theory is paramount, and its predictive power only supplementary. If this seems surprising, imagine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology “oracle” which can predict the outcome of any possible experiment but provides no explanations. According to the instrumentalists, once we had that oracle we should have no further use for scientific theories, except as a means of entertaining ourselves. But is that true? How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one? Or to build another oracle of the same kind? Or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all, we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And if it predicted that the spaceship we had designed would explode on takeoff, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. Only then could we have any chance of discovering what might cause an explosion on takeoff. Prediction – even perfect, universal prediction – is simply no substitute for explanation.”

  • http://www.scientopia.org/blogs/galacticinteractions Rob Knop

    …but can you be sure that that 4-dimensional curved spacetime is “real” on a deep fundamental level as opposed to being a model, an approximation to reality? Indeed, the inconsistency with QFT tells us that GR probably can’t be completely right.

    I fully appreciate and understand the value of explanation as opposed to mere prediction. A good theory gives you understanding about how things work, gives you intuition about what sorts of things can happen. However, we should not mistake the elegance and depth of our theories for evidence that they’re fundamental truth. They are far more than black boxes that tell us the results of experiments… but at the end of the day, they may just be useful models, and indeed with the theories we have today, we can be sure that some of them are exactly that. It may be that we’re all Kepler, just making better and better models.

  • Mike

    Rob,

    Our best theories will always be an approximation (or “model” if you wish) of reality. As we address new problems and find new solutions, this will lead to new questions — not some final destination where all is revealed. But no one was arguing that point — nothing will ever be “completely right.” What we can achieve is an ever-improving stream of explanations with infinite reach, subject only to the laws of physics, which impose no upper bound on what we can eventually understand, control, and achieve.

  • Rich C

    Doesn’t there need to be a distinction between a higher-level explanation being “in principle derivable from” a lower-level explanation, and a higher-level explanation being “consistent with” a lower level explanation? The first demands that the lower-level or fundamental explanation (or laws) be complete: that there are no other laws or causes not incorporated into the low-level explanation necessary to explain the higher-level phenomena. The second only demands that, whatever explanation is actually required for the high-level phenomena, it cannot be in contradiction with the lower-level laws. The consistency criterion seems to me to be much less demanding, and perhaps less controversial than the completeness criterion, though Braden B points to a paper that may provide an example challenging the viability of the consistency criterion. If you don’t keep these two different ways of thinking about what reductionism demands straight, I’m not surprised that the conversation seemed muddled and unsatisfactory.

  • Carl

    Although my head tells me to be a reductionist, I think that there is one way in which the anti-reductionist argument can be made logically sound *WITHOUT* appealing to either supernatural intervention from above (which includes, given our current state of knowledge, the ill-defined “free will”); nor to a miracle from below in which the macroscopic laws contravene or contradict the microscopic ones. The loophole is this:

    Consider whether it is possible that the macroscopic laws are completely consistent with the microscopic ones… but that the microscopic laws permit more than one macroscopic history to emerge from the same microscopic conditions. Why is this odd-sounding idea even worth considering? Let me offer a couple of reasons, and even a possible mechanism.

    First, we know it does happen at the microscopic level: every quantum event is inherently indeterministic (even if you subscribe to MWI, you have no way of predicting which universe you will end up in). Of course, conventional wisdom is that quantum indeterminacy washes out by the time you get to the macroscopic, but it’s worth reminding ourselves that we don’t actually have a good explanation for how — or at what scale — this actually happens, i.e. the infamous Measurement Problem (and MWI is no less flawed here than is Copenhagen).

    Second, when you stop to think about it, multiverse theories of cosmology are already saying this, in more than one way. Whether it’s symmetry breaking in the early universe, or inflation blowing up quantum fluctuations to macroscopic scale, modern cosmology implicitly depends on more than one macroscopic outcome emerging from the same microscopic initial conditions.

    So how could this work in conditions less extreme than the birth of a new universe? One admittedly speculative mechanism could be the interaction of strong mixing and quantum events such as random fluctuations. I suggest that under certain very specific circumstances, a quantum fluctuation can be “captured” by strong mixing, and be blown up to the classical level. This is essentially what happens with inflation, and it could happen on a much smaller scale too. Now in nearly all cases the quantum event won’t interact with strong mixing; and in those that do, in nearly all cases the mixing won’t be unstable and the perturbation will die out; but if the quantum event happens close enough to the chaotic boundary of a complex system, it could push it into a different macroscopic outcome. (As an analogy, consider what happens with Hawking radiation when a pair of virtual particles is created just at the event horizon of a black hole.)

    So to summarize, if this is feasible then we have:
    - Macroscopic and microscopic laws that are completely consistent
    - Macroscopic laws that can, in principle, be derived from the microscopic
    - Macroscopic *outcomes* that can, in certain very specific instances, at best be predicted only probabilistically from the microscopic laws
    - Paradoxically, macroscopic outcomes that can be deterministically predicted by macroscopic laws, at least so long as another mini-inflation event doesn’t kick them from one semi-stable state to another

    …and best of all, we even have the largest possible working example in the form of inflation, where the classical evolution of the universe we find ourselves in is the deterministic outcome of quantum fluctuations blown up to cosmic scale.

  • Dave

    I agree with Rob. Also, I don’t understand Mike: My consciousness only observes one reality at a time, not “all of them”. What scientific theory tells me which one I am going to be in next (other than giving me a probability distribution)?

  • Low Math, Meekly Interacting

    I think the only justifiable position is an agnostic one, since, obviously, there’s no hard data either way. But, since we have to have provisional biases to even choose an angle of attack, so be it. Let the reductionists and the holists (or whatever) duke it out and see who’s right.

    What makes me a tad uncomfortable is the confidence displayed on either side. I just think it would be really interesting if there truly were “new laws” that drive emergence, rather than the more prosaic perspective of the reductionists. I happen to think the reductionists are probably right. However, the problem of emergence does lead to the concern that being right in this context does one little good, i.e. if the computer that can realistically model a cell starting from Schrödinger’s equation must be as complex as the cell itself, such an approach is rather pointless, and you’re stuck having to just observe the cell directly. Even if there were no “fundamental” laws of complexity, but the holists came up with better models of complexity that were more general and predictive than what we have now in their pursuit of such laws, being “wrong” doesn’t matter.

    What’s great is that all of this is testable. For that very reason alone, it’s a worthwhile debate.

  • AnotherSean

    Interesting. I certainly don’t believe in wave function collapse in the traditional sense. It seems to me it’s an old-fashioned way of talking about decoherence. What determines the manner of decoherence is a bunch of accidents, and it seems to me these historical circumstances can be viewed as probabilistic events in the manner described by Bohr.

  • Low Math, Meekly Interacting

    I guess anyone hoping to be persuasive in a debate like this must play devil’s advocate in the mirror and try to answer this question for themselves: How is it that there isn’t even a hint of the simplest self-organizing system evident in Schrödinger’s equation? Nor in Newton’s laws of motion, for that matter. How do you get from Schrödinger’s equation to ANY microscopic system that displays self-organizing behavior? Why, really, is this so hard? Saying it’s too complex is a tautology. So what’s the answer?

    Maybe it’s just due to lots of accidents which are hard to keep track of, but not at all profound. But how do we KNOW that?

  • Mike

    Dave,

    “My consciousness only observes one reality at a time, not “all of them”. What scientific theory tells me which one I am going to be in next (other than giving me a probability distribution)?”

    Generally, the MWI takes the view that you can’t predict with certainty which world you’ll end up in, since there was no reason “why” you ended up in this world, rather than another – you end up in a vast number of quantum worlds. It is an artifact of your brain and consciousness being differentiated that makes your experiences seem singular and random at the same time. The randomness apparent in nature is a consequence of the continual differentiation into mutually unobservable worlds. And, since reality is quantum, and not classical, even if you knew the initial positions of everything in “your world” from some arbitrary “beginning”, the randomness emerging as a consequence of such differentiation would still exist.

  • http://www.damtp.cam.ac.uk/people/e.lim/ Eugene

    I think it is a false dichotomy to separate philosophies into “reductionist” and “anti-reductionist” camps. It seems to rely on the assumption that each layer of the “fundamental laws” becomes “simpler”. Historically this may be true, but I think extrapolation is unjustified.

    There is the whole thing about current known laws being ‘emergent’ — I don’t have a strong opinion either way, but I am ready to keep an open mind about this…

  • fco.

    “If you believe that wave functions really collapse, determinism is obviously lost; prediction is necessarily probabilistic, and retrodiction is effectively impossible.”

    Sorry for the noob question here, but is it possible that wave functions really collapse, and what we see as probabilistic is actually just an incomplete picture of laws we don’t know yet?

    • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

      fco– We don’t think that’s possible, although of course it’s hard to be sure. You would need a “hidden variable theory,” and those seem unsatisfactory for various reasons. But it’s okay to keep an open mind.

  • Dave

    Thanks Mike. So you say it’s random which reality I observe? I think there is quite a big gap here which it doesn’t seem like there is an answer for. How do you know it really is random, and not directed by some unobserved control mechanism?

    Fascinating stuff.

  • Mike

    Dave,

    You could always say it was directed by some unobserved control mechanism, or by spirits or a God, or by a programmer and that we live in a matrix.

    The main problem with these types of explanations is that they can be used for explaining any crackpot theory one could possibly posit. My favorite non-explanation explanation is: “the wizard did it”. Take a look at this brief video from a recent TED conference where Deutsch addresses just this point:

    http://www.youtube.com/watch?v=folTvNDL08A

    Let me know what you think.

  • http://omegaleague.com Alpha Omega

    When discussing issues like this, maybe physicists need the language of Kolmogorov complexity, computability and algorithmic information theory, or what I like to call the “computational cosmos” models. According to these models, a theory of everything is the algorithm with the shortest description length (smallest Kolmogorov complexity) which outputs the observed state of the universe. Essentially it is a mathematical formalization of induction using Ockham’s Razor. Marcus Hutter has an interesting paper about TOEs using this approach: http://arxiv.org/abs/0912.5434
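    Kolmogorov complexity itself is uncomputable, but any lossless compressor gives a computable upper bound on description length, which is the usual practical proxy. A toy sketch in Python (the example byte strings are purely illustrative, not anything from Hutter’s paper):

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Computable upper bound on Kolmogorov complexity: the length of a
    zlib-compressed encoding (the fixed-size decompressor is ignored)."""
    return len(zlib.compress(data, 9))

patterned = b"ab" * 500    # a highly regular "universe"
noise = os.urandom(1000)   # incompressible random bytes

print(description_length(patterned))  # tiny: a short description suffices
print(description_length(noise))      # roughly 1000 or more: no shorter description
```

    The patterned string compresses to a few dozen bytes while random noise does not compress at all, which is the intuition behind “the shortest algorithm that outputs the observed state.”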

  • http://www.qwertyous.blogspot.com/ John R Ramsden

    If one accepts MWI, then there must be many distinct ways for a universe to branch from any given state to an identical subsequent state (with higher entropy, so there’s no contradiction there). But that seems to contradict the idea that the wave function of the universe (assuming it exists) could be tracked back by the Schrodinger equation linearly, and hence uniquely, from the later state to the earlier state.

    I suppose the question becomes what constitutes “the universe” in this context? I’d say it’s the union of all causally connected patches bounded by an anti-de-Sitter region, which of course from our standpoint must be incomparably larger than the observable universe (if that makes sense).

    Regarding the wave function, has anyone tried going to the opposite extreme of Everett and postulating that its apparent smoothness is merely the net effect of a host of constantly varying “spikes”, for want of a better word? That would be somewhat analogous to an amplifier’s sound level LEDs which, despite being discrete oscillating bars close up, collectively resemble from a distance a smoothly evolving curve.

    The temporary vanishing of a spike would represent a measurement of the system, and this would be either “internal”, while the system remained closed, or a conventional measurement by some other interacting system in the vicinity. Either way, the wave function could continue on its merry way, maintaining the same basic nature, without having to wear two hats so to speak (either smoothly evolving or momentarily vanishing).

  • http://www.catholiclab.net Ian

    A Philosophical Refutation of Reductionism – http://www.catholiceducation.org/articles/apologetics/ap0278.htm

    Enough said.

  • Redknapp

    I don’t think that reductionism ‘in principle’ is really consequential for sciences like biology and economics. I think there is quite a gap here. I agree with Sean also.

  • karl

    Ian,

    “Enough said”

    Agreed

  • Sili

    Roughly speaking, the perfect conference would consist of about 10% talks and 90% coffee breaks; an explanation for why the ratio is reversed for almost every real conference is left as an exercise for the reader.

    Bladder control?

  • Doug

    Am I allowed to observe myself and collapse my own wavefunction?
    Is a proton allowed to observe itself and collapse its wavefunction?
    If the answer to question 1 is yes, and 2 is no, do we not have a difference at macroscopic compared to quantum levels?

  • Allen

    I would think that laws are laws, and the reductive vs. emergent distinction is irrelevant.

    If the fundamental laws are deterministic and the higher level laws are deterministic, then the entire history of the universe is *still* determined by its exact physical state at any one moment in time.

    This is true because the deterministic emergent laws are going to “emerge” predictably from the state of the more fundamental layer. So they will always act similarly in similar situations.

    Even if you switch from deterministic laws to probabilistic laws, the reductionist vs. emergent distinction is still irrelevant. The emergent laws just emerge “probabilistically” from the state of the more fundamental layer.

    Emergent laws are just another layer of what you already have…not something qualitatively different.

    I’d go further and argue that determinism vs. “probabilism” really isn’t that important either. In fact, determinism is a special case of probabilism…the case where all probabilities are either 0% or 100%.

    Maybe?

  • Breaking News

    On a different but interesting subject — there is word out of the LHC regarding the Higgs. Still internal, but interesting (via Peter Woit’s blog):

    Internal Note
    Report number ATL-COM-PHYS-2011-415
    Title Observation of a γγ resonance at a mass in the vicinity of 115 GeV/c2 at ATLAS and its Higgs interpretation
    Author(s) Fang, Y (-) ; Flores Castillo, L R (-) ; Wang, H (-) ; Wu, S L (University of Wisconsin-Madison)
    Imprint 21 Apr 2011. – mult. p.
    Subject category Detectors and Experimental Techniques
    Accelerator/Facility, Experiment CERN LHC ; ATLAS
    Free keywords Diphoton ; Resonance ; EWEAK ; HIGGS ; SUSY ; EXOTICS ; EGAMMA
    Abstract Motivated by the result of the Higgs boson candidates at LEP with a mass of about 115~GeV/c2, the observation given in ATLAS note ATL-COM-PHYS-2010-935 (November 18, 2010) and the publication “Production of isolated Higgs particle at the Large Hadron Collider” (Physics Letters B 683 (2010) 354–357), we studied the γγ invariant mass distribution over the range of 80 to 150 GeV/c2. With 37.5~pb−1 data from 2010 and 26.0~pb−1 from 2011, we observe a γγ resonance around 115~GeV/c2 with a significance of 4σ. The event rate for this resonance is about thirty times larger than the expectation from Higgs to γγ in the standard model. This channel H→γγ is of great importance because the presence of new heavy particles can enhance strongly both the Higgs production cross section and the decay branching ratio. This large enhancement over the standard model rate implies that the present result is the first definitive observation of physics beyond the standard model. Exciting new physics, including new particles, may be expected to be found in the very near future.

    See: http://cdsweb.cern.ch/record/1346326?

    Peter’s take is as follows: “A commenter on the previous posting has helpfully given us the abstract of an internal ATLAS note claiming observation of a resonance at 115 GeV. It’s the sort of thing you would expect to see if there were a Higgs at that mass, but the number of events seen is about 30 times more than the standard model would predict. Best guess seems to be that this is either a hoax, or something that will disappear on further analysis.”

  • TimG

    Jason Dick (#10), I find your argument somewhat persuasive. However, it seems to rely on the assumption that it is always meaningful to speak about the individual behaviors of the particles that make up a macroscopic system, and moreover that the combined descriptions of the behaviors of all of these particles describe all of the meaningful facts about the system as a whole.

    What about the case of an entangled state? If I have even two particles in a state like |0>|0> + |1>|1>, I can’t tell you whether either particle is in state |0> or state |1>, or any other state vector for that matter. I suppose I could describe the state of one of the particles by its reduced density matrix, but the phase difference between the two terms in the two-particle state isn’t a property of either particle, but rather of the system as a whole.

    I’m still sympathetic to reductionism, but I think it would be a lot easier to make the argument if we lived in a world of classical particles.
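    TimG’s point can be made concrete: tracing out one qubit of the (normalized) state (|0⟩|0⟩ + |1⟩|1⟩)/√2 leaves a maximally mixed density matrix, with no trace of the relative phase. A small numpy check (the einsum index pattern is just one way to write the partial trace):

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), stored as psi[2*i + j]
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Density matrix of the joint two-qubit system
rho = np.outer(psi, psi.conj())

# Partial trace over the second qubit: rho_A[i,k] = sum_j rho[(i,j),(k,j)]
rho_A = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

print(rho_A.real)  # 0.5 * identity: maximally mixed
# The relative phase between |00> and |11> is a property of the pair,
# not of either qubit; every single-qubit description loses it.
```

    Flipping the sign of the second term gives a physically distinct two-qubit state, yet the very same reduced density matrix, which is exactly the “fact about the system as a whole” TimG describes.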

  • Yoav Golan

    If you forced me at gunpoint to decide between reductionism and non-reductionism, I would probably choose reductionism. That said, I think the debate is fairly pointless if the central concern is about what is possible in principle, because we’re talking about the empirical/phenomenal world here. In principle, in theory, hypothetically—these phrases don’t get us anywhere. You still have to do the work to show that one side is right and the other is wrong.

    Consider Boyle and the air pump. Was it possible to create a true vacuum? This was a question people debated in Britain ad nauseam. People would get into bar fights about it, posing the most sophisticated objections both against and in favor of each side. Nature abhors a vacuum, they said. The concept of a vacuum was philosophically nonsensical. But the fact of the matter was that people just didn’t know, at least not until Boyle went out and did the work necessary to prove it.

    Can human activity be reduced to the laws of quantum dynamics? I doubt it. Can it be reduced to some form of physical laws? That seems much more likely. But certain? No. Who knows what kind of curveballs the phenomenal world might throw at us? The fact of the matter is that we won’t know until someone goes out there and does the work to prove it one way or the other, empirically—i.e. we won’t know for a long, long time. I guess what I’m saying is “talk away if you want,” because it’s going to be a while before this debate is actually settled.

  • Marty Tysanner

    In discussing reductionism and emergence, I think it is crucial to consider both the underlying laws and the particular configuration on which those laws act. As Sean and others have pointed out, whatever macroscopic laws exist should be fully consistent with the underlying microscopic ones (otherwise it is hard to see how one could meaningfully distinguish the microscopic and macroscopic laws — they would all just be “laws” with equivalent standing).

    However, microscopic laws need not imply a particular set of macroscopic laws. A good example is biology: the biological laws we know all govern organisms on Earth, but those organisms (indeed the existence of DNA) are products of both the microscopic laws and the initial conditions (Earth and its properties) plus boundary conditions (sun, moon and their relationship to Earth). I doubt many would argue that the microscopic laws imply that we have our particular Earth, sun and moon; otherwise, every star would have a solar system that looks just like ours. Hence, we must consider that our biology, and by extension our biological laws, largely depend on chance events (initial configuration of matter) that cannot be deduced from the laws alone.

    (Sure, one could find fault with the biology example and note that certain laws seem “universal” — natural selection and random mutation are obvious examples if one requires life to be self replicating in order to call it “life.” But in our ignorance we cannot arbitrarily rule out the possibility of other kinds of “organisms” that display some of the characteristics we would ascribe to life, such that random mutation and natural selection aren’t the primary laws that govern their existence. )

    I think Sean and others are thinking of reductionism primarily in terms of the underlying laws, i.e., QM and GR. Perhaps those who argue in favor of a different set of macroscopic laws are really arguing for emergent laws, i.e., laws that are consistent with, but not implied by, microscopic laws. That is, the disagreement may stem more from misunderstandings, possibly because the parties involved have not carefully stated exactly what they mean by “macroscopic laws.”

    On the other hand, if there are serious physicists and cosmologists who think there are macroscopic laws that are inconsistent with microscopic ones, then I really wouldn’t know what to say to those people… It certainly seems weird to think that a law could appear out of nowhere which governs a large collection of particles whose microscopic behaviors are determined by microscopic laws, and yet is not itself constrained by microscopic laws.

  • John

    “Note: not just that we can’t derive the higher-level laws from the lower-level ones, but that the higher-level laws aren’t even necessarily consistent with the lower-level ones.”

    Sean,

    If your interlocutors were arguing that higher-order processes are not consistent with lower-order processes (that is, that emergents do not strictly supervene upon their basal realizers) then they are not arguing for non-reductive physicalism in the sense accepted by the majority of modern philosophers.

    On the other hand, if you are suggesting, as you seem to be, that emergents are not derivable (that is, neither fully explainable nor predictable) without remainder from their basal realizers, then you are arguing for non-reductive physicalism (that is, for emergence) in its modern sense.

    This is a highly contentious area of philosophy at the moment with incredibly sophisticated arguments on both sides relating to causation, individuation, functionalism, basic ontology, multiple realizability, etc.

    The general intuition is that the subjects of the special sciences (biology, psychology, economics, etc) are not reducible without remainder to descriptions of elementary particles and their relations. If this were the case, then we would seem to be forced into some form of eliminativism regarding most of our common sense beliefs.

  • diogenes

    Arguments such as these are why so many people, often ESPECIALLY other academics, hate physicists (especially theoreticians) so much. You could have just the same discussion at home, come to the same conclusions (read “none”) except that a) your lunch mates might not be as famous b) the wine and architecture probably not as interesting. You could probably have the same discussion in the uni cafeteria with any randomly chosen first year philosophy major….

  • http://www.sunclipse.org Blake Stacey

    Rob Knop wrote:

    …the MWI vs. Copenhagen thing becomes something of a red herring.

    Just like Communism.

    I think one could be a consistent-historian like Gell-Mann or Hohenberg, a latter-day Copenhagener like Peres, a relationalist after Rovelli, a correlationalist after Mermin or a QBist in the tradition of Fuchs while still accepting the idea that emergent phenomena must be consistent with the underlying laws on which coarse-grained descriptions supervene.

  • Tom S

    I am so out of my element :-)
    Just wondering how Heisenberg’s Uncertainty Principle fits into the picture?

  • Pete

    I am definitely in agreement with Sean on reductionism being the ultimate way to understand reality. Braden, as far as the paper “More really is different” is concerned, it really doesn’t damage the case for reductionism at all. The fact that certain things are considered undecidable in computation doesn’t mean that reductionism is false, and the fact that it is an infinite system automatically negates the result’s conclusions for our finite observable universe. Any scientist who thinks reductionism is wrong is going against a much more fundamental notion of causality, as reductionism is squarely tied to the notion that every cause precedes an effect. Theoretically, one could deduce the rest of the universe’s history from initial conditions and the laws of physics.

  • Albert Zweistein

    Alright you Strict Reductionists – Time for a test.

    1. Explain in words, pictures and/or diagrams the well-observed phenomenon of human ontogeny (consult Wikipedia if you draw a blank).

    2. Now explain in full detail how ontogeny unfolds using only theoretical atomic and subatomic physics.

    Extra credit: Explain the remarkable observation that ontogeny recapitulates phylogeny using only QCD.

    Good luck you wild and crazy Platonists! You cannot possibly imagine how eagerly I await your blue books, or the 10^500 of them you will need for your attempted answer.

    High marks for those who realize early-on that they are on a fool’s errand.

    Best,
    Albert Z

  • Daniel

    “every quantum event is inherently indeterministic”

    I have never been able to buy this. Everything is determined, though some events are unpredictable.
    In the case of rain, we can’t predict where each drop will fall; but if we had sensors that could detect each drop of rain and its trajectory, as well as information on the wind strength at every point in the air, and a detailed map of the ground, we could say exactly where they would all end up. Practically, it’s impossible; the process is unpredictable in practice, yet deterministic in principle.

    The same goes for any event on the macro or microscopic scale. We would need technology that we can’t even conceive at this point in time to measure subatomic particles to a degree that we could predict what they would do at any future time. Maybe that technology is in practice impossible to achieve. But in principle, if we had the information, we could predict any event. The fact that we can’t predict it doesn’t make it indeterministic.

  • Allen

    Daniel,

    On trajectories, Bernard d’Espagnat has an interesting discussion of the quantum mechanical treatment of “particle traces” seen in cloud chambers in “On Physics and Philosophy”, pg. 95:

    “At first sight, as we noted, these alignments of bubbles seem comparable to the white trails produced by a jet plane in a blue sky. Hence, when the ‘particle’ source is external to the chamber (or the emulsion), we not only attribute to each ‘particle’ inside this device a well-defined trajectory (coinciding with the trace) but also do not hesitate to continue the latter to the rear, by thought, up to the particle source. [...] However, as we also noted, such a picture does not fit with quantum mechanics (nor, incidentally, with the Broglie-Bohm model, in which the corpuscles continually undergo deviations dictated by the whole-universe wave function). [...]

    The true explanation for the observed alignments is not, therefore, to be looked for within the realm of such ideas, great as may be the force with which our intuition puts them forward. It essentially lies in the fact that, when the initial conditions are sufficiently known quantum mechanics makes it possible to predict what will be observed. It does this, as we know, by introducing mathematical symbols that were given names (wave function, state vector, etc.) and many of them evoke some picture. But, to repeat, the pictures thus called up are unreliable ones and play no role in the calculations.

    What quantum mechanics in fact yields are merely the probabilities that, for a given initial flow, microblobs will be observed at such and such places within the device. And, as already noted, the probabilities concerning the cases of the blobs being aligned along the general direction of motion are considerably larger than those relative to any other configuration. In other words, what quantum mechanics predicts is just that, within the device, we shall see alignments (of microblobs or bubbles) consistent with what we actually observe and naively interpret as being ‘traces’.”

  • N. Peter Armitage

    Most thinking hard-core anti-reductionists wouldn’t say that macroscopic behavior cannot be derived from microscopic behavior, but instead that many details of microscopic behavior are IRRELEVANT to the long-distance, long-time correlations, i.e. the long-distance correlations depend on organizing principles and aspects like symmetry and dimensionality. For instance, I can predict for you the low temperature functional form of the heat capacity of a chunk of silicon without knowing anything about the details of the crystal bonding. The only things that matter are the crystal structure and the speed of sound in the material.
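    The prediction alluded to here is the Debye T³ law: at low temperature the phonon heat capacity per atom is C = (12π⁴/5) k_B (T/θ_D)³, where the Debye temperature θ_D is fixed by the sound speed and the atomic number density alone. A quick sketch (the silicon numbers are rough, assumed values for illustration, not measured data):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def debye_heat_capacity_per_atom(T, v_sound, n_density):
    """Low-T Debye law: C = (12*pi^4/5) * k_B * (T/theta_D)^3 per atom,
    with theta_D determined only by sound speed and atomic density."""
    theta_D = (hbar * v_sound / k_B) * (6 * math.pi**2 * n_density) ** (1 / 3)
    return (12 * math.pi**4 / 5) * k_B * (T / theta_D) ** 3

# Rough illustrative numbers for silicon (assumptions, not precise values)
v_s = 8400.0   # m/s, an averaged sound speed
n = 5.0e28     # atoms per m^3

c2 = debye_heat_capacity_per_atom(2.0, v_s, n)
c4 = debye_heat_capacity_per_atom(4.0, v_s, n)
print(c4 / c2)  # exactly 8: doubling T scales C by 2^3
```

    Nothing about the chemistry of the bonds enters, which is Armitage’s point: the functional form follows from the organizing principles, not the microscopic details.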

  • Daniel

    Allen,

    Thanks for your thoughts, but to be honest it was a bit above my level of understanding.

    One thing to add:
    “What quantum mechanics in fact yields are merely the probabilities…”

    I’ve heard this many times, and it’s probably true. But I think it shows the limits of both our knowledge and technology, rather than any truth about the workings of sub-atomic particles. Sub-atomic particles cannot act probabilistically in reality. Every outcome must have a cause.

    An analogy would be tossing a coin. We can look at the probabilities of the coin coming up heads or tails as being 50/50 for any toss. But looking at any individual toss we would be guessing the outcome unless we have more information, such as the starting position of the coin, the direction and placement of the force upon the coin, the landing position of the coin, its weight, local gravity etc. If we had such information, and knew its effects, we could say for certain what the outcome of the toss would be. Probabilities only appear when dealing with multiple events, and while the probabilities can be clearly established, they don’t tell us anything about ‘why’ they are the way they are.

    As I said before, we may only ever be able to work with probabilities in quantum mechanics, but that doesn’t mean that’s all there is. There is no reason to believe every single event has not been determined from the beginning.

  • Matthew Saunders

    Sean,

    it will be good exercise for you :)

    Who knows if there are causeless causes (‘fundamental laws’) or if, instead, reality acts more like John Wheeler’s participatory universe, where universe brings into being observers which bring into being universe which brings into being…in a never-ending self-reflexive loop.

  • Neal J. King

    Daniel (#49 & #52),

    You should study up on Bell’s theorem, which essentially rules out the perspective you are proposing; at least as an interpretation of quantum mechanics.

    To summarize the results: The logical implications of the view of reality you are taking imply a specific inequality for a particular quantum-correlation experiment. However, the straightforward calculations of quantum mechanics violate that inequality.

    Tests of this inequality done to date support quantum mechanics. Some people do not think the experimental tests so far have completely closed every loophole; but no one claims that QM can be consistent with the inequality.
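    For concreteness, the CHSH form of Bell’s inequality bounds a particular combination of correlations at 2 for any local hidden-variable theory of the kind Daniel describes, while quantum mechanics for the spin singlet predicts 2√2. A quick numerical check (the angles are the standard optimal choice):

```python
import math

def E(a, b):
    """Singlet-state spin correlation for analyzer angles a and b."""
    return -math.cos(a - b)

# Standard CHSH measurement angles, in radians
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; local realism requires |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the local-realist bound of 2
```

    The experiments measure these correlations directly, which is why the question is empirical rather than a matter of taste about hidden information.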

  • Braden B

    @47 Pete – I think the results of the paper should not be dismissed so easily, and are worthy of consideration in the discussion. The fact that, with full knowledge of the microscopic description of the system, one cannot compute certain macroscopic observables in the model would certainly suggest that knowledge of the microscopic laws is not enough – is that not contrary to the point of reductionism? Do we have different working definitions of reductionism in mind? I am under the impression that reductionism suggests that, in principle, if we know all the microscopic laws and initial state of the system we can compute any macroscopic observable at a later point in time. The paper seems to suggest that this is not the case, at least in the system considered. It may be true that the values of macroscopic observables we can observe are constrained by the microscopic laws (e.g., due to symmetries of the interactions, the ranges of interactions), but if we can’t in principle compute exactly these observables then I wouldn’t consider reductionism to have done its job.

    As for the fact that this is an infinite system, although our real physical universe is perhaps finite and discrete at a certain microscopic scale, our models of the universe typically are not. We typically assume an infinite volume universe or a continuum limit for our models, and the results of the paper may apply to such models. This would force us to get around this issue by building finite-volume, discrete models, but we must then assume that, like phase transitions, the result described in the paper only works for infinite or continuum systems. Perhaps we can in principle do this. So, maybe your objection stands. However, maybe someone can make a non-computability argument in a finite system, and then the infinite system limit isn’t a problem for the paper’s argument. I don’t know if the latter is impossible or not.

    Now, these are not arguments that reductionism is necessarily false, only that we should give serious consideration to the possibility that computing things from knowledge of the microscopic laws and initial conditions alone is not enough to provide a complete description of the macroscopic universe.

  • jim

    It seems people are confusing reductionism with scale. That is, macroscopic=f(microscopic) is only a specific kind of reduction, and is also not universal. Something macroscopic can indeed be fundamental. The simplest example that comes to mind is a structure made of building blocks. Its shape is completely independent of the properties of the blocks.

  • Mike

    @ 54 Neal,

    Regarding Bell’s theorem see: Patrick Hayden & David Deutsch

    http://arxiv.org/abs/quant-ph/9906007

    “All information in quantum systems is, notwithstanding Bell’s theorem, localised. Measuring or otherwise interacting with a quantum system S has no effect on distant systems from which S is dynamically isolated, even if they are entangled with S. Using the Heisenberg picture to analyse quantum information processing makes this locality explicit, and reveals that under some circumstances (in particular, in Einstein-Podolski-Rosen experiments and in quantum teleportation) quantum information is transmitted through ‘classical’ (i.e. decoherent) information channels.”

    If my understanding is correct, Bell’s theorem assumes a single outcome. Under MWI, each outcome occurs. That makes all the difference.

  • Phil

    But can reductionism explain this story from NPR?

    http://www.npr.org/2011/04/22/135121360/a-boy-an-injury-a-recovery-a-miracle

  • Karl

    @ 58 Phil,

    Well, the boy says that “I was driving for a lay-in and then I got pushed from behind the back, and I hit my lip on the base of the basketball hoop.”

    Unless he means the stand or the pole holding up the basketball backboard and hoop, the first miracle was someone that short jumping that high.

  • Phil

    Perhaps. But that’s not the point of the story. Read the whole thing.

  • Karl

    @ 60 Phil,

    OK, I read the whole thing. What’s the point? God of the gaps? NPR’s reporting? The Church’s trial? I agree with the guy who said it’s a “joke” — though of course not for the family. Sorry, I’m not sure what point you’re trying to make here.

  • Phil

    @ 61, See my comment #58 for the point I’m trying to make.

  • Karl

    @ 62 Phil,

    Your point is a question? Can reductionism provide an explanation?

    If you mean can reductionism explain the boy’s recovery, then the answer is most certainly yes. Just because our understanding of the physical world doesn’t yet provide an adequate answer (I’m assuming this; maybe it does even today, with enough investigation and piecing together of the admitted medical puzzle), that doesn’t mean answering it is impossible other than by positing a deity. In fact, I would suspect this type of “miracle” is ultimately more amenable to an easy explanation in the reductionist sense than many of the difficult issues raised on this particular thread.

    If you mean did the boy recover as the result of a miracle, then the answer is no. That is a perfect example of a bad explanation since it can be used to explain anything: why he recovered, why he died, why he’s still in a coma and the like: God did it.

    If you mean do some people believe the boy recovered as a result of miracle, then the answer is yes. People have a multitude of bad explanations for what happens around them. Over time those explanations have gotten progressively better, and (with some luck) they will continue to improve.

  • Phil

    “If you mean did the boy recover as the result of a miracle, then the answer is no.”
    How do you know this?

  • Karl

    How do I know that an ET didn’t do it? How do I know you didn’t do it? You’re taking the view of a “God of the gaps”: say something is uncertain and then attribute it to God. Over history that explanation has been used to support all manner of incorrect ideas. We didn’t know the nature of the Sun — so it must be a Sun God, and on and on. These are simply bad, dead-end explanations, because they can be used to support any theory anybody wants to posit, and they don’t lead to progress in discovering real answers. What would convince you? If tomorrow a doctor emerged who presented a well-documented, solid, scientific account of what caused the boy to recover, would you then change your fundamental view? Or would you just point to other things of which we are uncertain and attribute them to God? I suspect the latter.

  • Phil

    How do you know God doesn’t exist? Reductionism?

  • Karl

    No, not reductionism alone, though as I said, over time this approach has led to more and more areas that were once the province of myth, spirits and God coming under the domain of science. However, what’s even more important is, as I have tried to explain, the nature of good and bad explanations.

  • Phil

    So you know God doesn’t exist because of reductionism and something else? What’s the something else? I thought reductionism was all you need to explain anything.

  • Karl

    I said, several times, what the “something else” is. In addition to a reductionist stance, you need a good grasp of good and bad explanations. Not those that can serve equally well any crackpot assertion, like Ram, the God of the Sun, causing morning to rise each day upon the world (or allowing this remarkably good jumper to recover). You know you’re not responding to anything I’ve said. Why is that? Anyway, I think everyone, including Sean, would like to see this particular discussion come to a close. Have a nice Easter my friend.

  • Count Iblis

    When arguing with fellow physicists about this, the best strategy is to invoke Uri Geller. Everyone will then agree that limits on new forces derived from fundamental physics can be used to rule out paranormal effects (this was discussed on this blog some time ago). If one wants to leave some room for people not to be strictly described by the fundamental laws of physics, that room cannot exist at the single-particle level; the anomalous behavior has to be assumed to live in the correlation functions of vast numbers of particles.

    Now people are especially motivated to propose such exceptions to the laws of physics because they don’t believe that when they think of something, the thought processes strictly follow the laws of physics. This leads quite naturally to loopholes allowing someone to read your mind: we now cannot rule out that the collective behavior of large numbers of particles in someone’s brain is influenced by what is happening in your brain, even though at the single-particle level no strange new forces are acting. It is just that there exist new, as-yet-unknown laws for the way certain large numbers of particles interact, and these manifest themselves precisely in mental thought processes. :)

  • Phil

    @ 69, If a doctor comes up with a natural explanation for the boy’s recovery, with evidence to back it up, then I will disbelieve the miracle explanation. Until that happens, I’ll keep my mind open, and so should you.

    Happy Easter to you too.

  • Charon

    Roughly speaking, the perfect conference would consist of about 10% talks and 90% coffee breaks; an explanation for why the ratio is reversed for almost every real conference is left as an exercise for the reader.

    I know this was a passing remark incidental to the point of this post, but I wanted to say students can benefit from the current structure. Undergrads, grad students, beginning postdocs – these people, unlike established faculty (like Sean), don’t have a bazillion people they know at the conference. They’re not going to be included in any of these productive side conversations. They’re going to be hanging out with anyone from their institution (if the conference is big enough that there are some others there). The talks are a place for them to hear other ideas.

    And sure, maybe they should be out networking. But the established people already have a full schedule talking with people they already know. So as nice as they may be, they’re not going to be looking to chat up young students they don’t know (unless the student is really extraordinary and has made a splash with some major publications).

  • Charon

    I’ll keep my mind open

    Not so open your brains fall out, I hope. Really, this “open mind” business is a really boring theme from people who use it simply to mean “I have no evidence or coherent theory to back up what I’m saying, but I would like to deflect any criticism you might make by accusing you of narrow thinking”.

    *sigh*

  • Phil

    Well, you might say that I DO have evidence, namely, the lack of any evidence of natural causes for this boy’s recovery. They prayed, and then the boy was healed, completely stumping the doctors. The explanation being a miracle sounds more credible, at this point in time, than an explanation from natural causes. Of course, if evidence is found of natural causes for his recovery, I will tend towards THAT explanation. Until then, I will tend towards the explanation that he was healed by a miracle. You want some sort of sign that God exists, well there you have it. What more do you people want? There’s more to reality than reductionism, but if you keep on with your “narrow thinking”, you’ll never see those instances of there being anything outside of the natural.

  • Matthew Saunders

    Phil (#74):

    Like the Bible says with fruits, scientists use the fruits that they have, instead of inventing new fruits. They compare what they experience with the fruits that they already have to figure things out.

    So one can look at that recovery and come up with hypotheses and if someone wants to call that ‘G_d’, then more power to them, but that isn’t an explanation, especially since the word itself can be used to mean anything.

    So scientists limit themselves. And try to be very specific in that “in this experiment, at this time, at this location, under these conditions, we found such and such to apparently be the case…”

    And yes, they are subject to tribalisms, anything human is. Hopefully, they will follow the evidence wherever it leads, no matter how uncomfortable it makes them.

    People have to love to learn to live with being limited…or go crazy denying it :)

  • Doug

    If new physics occurred at a higher scale, it is hard to avoid concluding that all the differential equations you liked at the lower scale are now overdetermined. If biology has new rules beyond lots of Schrodinger-equation solutions, that ruins your ability to solve the Schrodinger equation the way the standard rules of mathematics would let you. That’s my main reason for supporting reductionism: the emergence of genuinely new laws is just not compatible with it.
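
    The determinism being appealed to here can be sketched in a few lines (a toy two-level Hamiltonian of my own choosing, not anything from the thread): because the Schrodinger equation is first order in time, the state at one moment uniquely determines the state at every other moment, forward and backward.

    ```python
    import numpy as np

    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])              # a Hermitian toy Hamiltonian
    vals, vecs = np.linalg.eigh(H)           # diagonalize so we can exponentiate

    def U(t):
        """Unitary time-evolution operator exp(-iHt)."""
        return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

    psi0 = np.array([1.0, 0.0], dtype=complex)   # the state "now"
    psi_future = U(2.7) @ psi0                   # prediction
    psi_recovered = U(-2.7) @ psi_future         # retrodiction

    print(np.allclose(psi_recovered, psi0))      # True
    ```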

  • Matthew Saunders

    Doug (#76):

    Just to clarify, are you saying that something like my appreciation of a certain kind of music should be able to be mapped out using the Schrodinger equation, or are you saying something else?

  • spyder

    and voila, the thread starts to fall apart; reductio ad absurdum

  • TimG

    If every time something happened and we didn’t know the reason we said “Well, it’s a miracle”, then science would never have gotten anywhere.

    This has gone way off topic from reductionism, though.

  • Daniel

    Sean,
    If you’re wrong (and I’m certain you are), a lot of your thoughts are about something that can’t exist. Therefore it’s important for you to figure out that you’re wrong (or right, if you prefer to believe that :) ).

  • tasos

    It is rather amusing that League-Two scientists, like astro-cosmologists, still discuss reductionism vividly.

    Most Leauge-One scientists, i.e. string theorists, have already settled the issue in favour of holism by virtue of Holography. Every time you scratch deeper into an explanation, you end up with a more complicated and rich structure (much to the contrary of the “dream of a final theory” which advocated the explanation of nature by some “elementary particle/string or other simple theory”). String theory, M-theory and further still to be discovered more complicated structures are *necessary* to explain such different phenomena from superconductivity, chaos and confinement. Moreover, the Landscape put the final nail in the coffin of predictability. Hence, we are more and more confident that we cannot explain physical reality even in principle.

    So, even if “in principle” we cannot disprove reductionism (but we cannot prove it as well), there is currently a strong intuition to think more in terms of a holistic approach. People who still debate reductionism are not current research frontliners. Those who are in negation, are relegated to League-Two and become astro-cosmologists.

  • Karl

    81. tasos,

    It’s rather amusing that someone who probably fancies himself or herself a League-One Scientist has such a hard time spelling League. :)

  • Pete

    This has devolved really quickly, and tasos, what you just wrote sounds like a bunch of malakias (I’m assuming you’re Greek; if not, that means bullshit). Braden, getting back to your point, you are definitely correct that we use infinite limits and other such idealizations in physics, but this does not mean we automatically have to support a non-reductionist line of thinking. The fact is that some things are uncomputable or unprovable in certain formal systems, but this does not mean, in any way, shape or form, that the dynamics of the system is not fully determined by its lesser parts; it simply means that we need to add further axioms, or attempt to prove those propositions in more expressive metalanguages, if you will. This goes back to Godel: just because some things are not provable in a given axiomatic system does not mean that there are consistent mathematical truths beyond our reach; it means we simply have to find the right axioms to express those truths. In this way we get closer and closer to absolute truth. For an infinitely rich system (which mathematics is) there are likely an unbounded number of axioms, but to relate this back to reductionism, that only occurs in a system or universe with infinite information content. Our universe has finite information, and thus can be expressed with simple equations. You will find a neat example here: http://www.math.com/students/wonders/life/life.html
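
    The Game of Life at that link can be sketched in a few lines (a minimal implementation of Conway's standard rules, not the code behind the linked page): a handful of local rules fully determine every future state, and the "emergent" oscillators and gliders add no new laws on top of them.

    ```python
    from collections import Counter

    def step(live):
        """Advance a set of live (x, y) cells by one generation of Conway's rules."""
        # Count how many live neighbors each candidate cell has.
        neigh = Counter((x + dx, y + dy)
                        for (x, y) in live
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
        # A cell is alive next step if it has 3 neighbors, or 2 and was alive.
        return {c for c, n in neigh.items()
                if n == 3 or (n == 2 and c in live)}

    # A "blinker": three cells in a row, a period-2 oscillator.
    blinker = {(0, 1), (1, 1), (2, 1)}
    print(step(step(blinker)) == blinker)  # True
    ```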

  • Daniel

    Pete,
    That’s a neat example, I agree. It shows that we live in a real world and that we can make theories. The game of life is not a real world.

  • Baby Bones

    I’m wondering if there is a way to split the difference between reductionism and the other what-have-you points of view. For instance, what if there were a mechanism at work in the universe that fades out “forces” over time? This is in analogy (or maybe more than an analogy) to the idea that we need ever higher energies to probe the energies at which the forces are unified.

    What I mean is, say there were lots more quantum-like forces in the early universe, many more than just the electroweak and strong forces. These rapidly became unmeasurable via an uncertainty principle based on frequency, except for the forces we can measure today. That is, these ur-forces went dark after only a short time, and the only thing that remains of them is their gravity (gravity being the nonquantum residue of these faded forces; goodbye graviton).

    Detection of their faded quanta would be so infrequent as to lie below a Planck frequency that determines the difference between what can and cannot be statistically meaningful. That is, were they ever to be detected, now or in the future, it would be in principle impossible to separate the event from detector noise.

    This faded-forces theory would explain gravity in a new way. Gravity would not be quantum and would have some aspects of an effective field theory. Dark matter would be gravitationally observable but not associated with a particle. And although you can make a reductionist statement about the beginning of the universe, since forces fade out, what remains forms essentially a new universe with fewer rules than the previous one; and fewer rules working together would be like new rules emerging.

  • Aleksandar Mikovic

    Goedel’s theorems in logic prove that reductionism is impossible even in principle.

  • Ric

    I’m not sure if this thread has already fallen apart, but I thought I’d add my 2 cents. Here’s the problem I have with complete reductionism.

    On most weekdays, the particles that make up me bounce back and forth between the Ravenswood and Hyde Park neighborhoods of Chicago. Why do my particles do this? My answer would be that I live in Ravenswood and I work in Hyde Park: so this happens because of my job. However, to the reductionist, my “job” seems not to exist. It’s not part of the physical stuff of the universe. The building where I do my job is real, but my job is merely a set of immaterial tasks – not ontologically real. So it can’t be the real reason for my particles to bounce between the two neighborhoods. There must be a physical reason for it.

    If that’s true, it renders my job, and by extension the industry in which I work, and indeed the entire world economy, a mere illusion. Each must be constructed post hoc, a mere reflection of the physical workings of the universe. Any causal powers they may seem to have are illusions. This scenario is troubling to me, and doesn’t pass the parsimony test. Can someone from the reductionist camp show me where I’ve gone wrong?

    (Having just re-read this, I will accept “You’re crazy” as an answer.)

  • Mike

    Not sure where you’ve gone, wrong or otherwise :)

    But I agree completely with your second paragraph, and that was exactly the point Deutsch was making in my quote above @9. Emergent explanations are often more important than reductionist ones, but reality ultimately consists of both — as long as they’re good explanations. See @16.

  • Matthew Saunders

    Yes, I don’t think this thread is ‘falling apart’ – curiosity is part of being a genuine, alive human being. To have a long life, one has to learn how to live with embarrassment and to avoid the silly little tribal pulls that can cause one to love something not because it is true but because one’s fellow tribe members practice or believe in it. Life is a riff.

    The game of life demonstrates how simple rules can result in other rules that weren’t in the original rules. How one thinks of it depends, I think, on how one thinks of the rules: are they descriptors of the behaviors observed, or are they, in a very real sense, actual inherent rules that limit behavior?

    Maybe, they are both ;)

    Baby Bones,

    I think that Niels Bohr’s Principle of Complementarity can apply to ways of knowing as well; if you get analytic, you can’t pay attention to the synthetic, and vice-versa. But both are complementary and needed to get the ‘whole picture’.

    Ric,

    good job :)

    I think part of the ‘issue’ is that (and I think this is a left-over from supernatural theism) people in Western culture grow up thinking that these laws/habits are more ‘real’/important if they exist independently of us than if they are, at least a bit, created by us (like the various ways we have of harnessing the economy or justice).

    I think also a big problem is just translation — when scientists make their discoveries, they do so (hopefully) in a precise epistemology (e.g. we observed this at such-and-such a time, under such-and-such conditions, in this location), but when they then talk about it to the public, they have to use a language that isn’t quite up to snuff for discussing the experiment — a lot of the precise nuance is lost, and we get things like ‘the G_d particle’ being discussed.

    Or so it is perhaps.

  • http://wavefunction.fieldofscience.com Curious Wavefunction

    In his new book, David Deutsch says that reductionism does not work even in principle, since the laws of thermodynamics — and more specifically entropy and the irreversible arrow of time — do not follow from the microscopic laws of physics, which are reversible.

  • Mike

    Curious Wavefunction,

    I read that too and I think your take is basically right, but I think Deutsch’s view is a bit more nuanced.

    As he said in the Fabric of Reality: “The fabric of reality does not consist only of reductionist ingredients like space, time and subatomic particles, but also of life, thought, computation and the other things to which those explanations refer.”

    Also, as I quoted previously in this thread, he said “In the reductionist world-view, the laws governing subatomic particle interactions are of paramount importance, as they are the base of the hierarchy of all knowledge. But in the real structure of scientific knowledge, and in the structure of our knowledge generally, such laws have a much more humble role.”

    So, in his view they have a “role” (though often a very humble one). The examples you quote point out how humble that can be at times.



Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Here are some of his favorite blog posts, home page, and email: carroll [at] cosmicvariance.com .
