Physicalist Anti-Reductionism

By Sean Carroll | November 3, 2010 1:44 pm

In a philosophical mood at the moment, because I’m about to head to Montreal for the Philosophy of Science Association biennial meeting. Say hi if you’re in the neighborhood! I’m on a panel Thursday morning with Nick Huggett, Chris Wüthrich, and Tim Maudlin, talking about the emergence of spacetime in quantum gravity. My angle: space is obviously not fundamental, though time might be.

Here’s a Philosophy TV dialogue between John Dupré (left) and Alex Rosenberg (right). They are both physicalists — they believe that the world is described by material things (or fermions and bosons, if you want to be more specific) and nothing else. But Dupré is an anti-reductionist, which is apparently the majority view among philosophers these days. Rosenberg holds out for reductionism, and seems to me to do a pretty good job at it.

John and Alex from Philosophy TV on Vimeo.

To be honest, even though this was an interesting conversation and I can’t help but be drawn into very similar discussions, I always come away thinking this is the most boring argument in all of philosophy of science. Try as I may, I can’t come up with a non-straw-man version of what it is the anti-reductionists are actually objecting to. You could object to the claim that “the best way to understand complex systems is to analyze their component parts, ignoring higher-level structures” but only if you can find someone who actually makes that claim. You can learn something about a biological organism by studying its genome, but nobody sensible thinks that’s the only way to study it, and nobody thinks that the right approach is to break a giraffe down to quarks and leptons and start cranking out the Feynman diagrams. (If such people can be identified, I’d happily join in the condemnations.)

A sensible reductionist perspective would be something like “objects are completely defined by the states of their components.” The dialogue uses elephants as examples of complex objects, so Rosenberg imagines that we know the state (position and momentum etc.) of every single particle in an elephant. Now we consider another collection of particles, far away, in exactly the same state as the ones in the elephant. Is there any sense in which that new collection is not precisely the same kind of elephant as the original?

Dupré doesn’t give a very convincing answer, except to suggest that you would also need to know the conditions of the environment in which the elephant found itself, to know how it would react. That’s fine, just give the states of all the particles making up the environment. I’m not sure why this is really an objection.

This is purely a philosophical stance, of course; it means next to nothing for practical questions. Nor does the word “fundamental” act in this context as a synonym for “important” or “interesting.” If I want to describe an elephant, the last thing I would imagine doing is listing the positions and momenta of all its atoms. But it’s worth getting the philosophy right. I could imagine hypothetical worlds in which reductionism failed — worlds where different substances were simply different, rather than being different combinations of the same underlying particles. It’s just not our world.

CATEGORIZED UNDER: Philosophy
  • heldervelez

    In the draft of a forthcoming paper:
    “…The Space expansion model privileges the atomic units, the only ones where physical laws are known to hold, but a fundamental question has not been answered yet:
    how can we distinguish between a Space expansion and a Matter contraction, since both can only appear to us as a Space expansion?”
    (To be answered later this year, by a friend. The answer has a physical meaning, not a philosophical one.)
    Time does not exist by itself; it is relative to matter «size». Wow… is time relative? And is matter size not absolute?

  • Amos Zeeberg (Discover Web Editor)

    Sean, isn’t there some randomness introduced by quantum mechanics, based on Heisenberg uncertainty? So you can never really have “another collection of particles, far away, in exactly the same state as the ones in the elephant,” right?

    And suppose that the tiny quantum randomness could be exaggerated by the complexity of the system of particles in the elephant, so that the behavior of the entire elephant system would not be predictable based on what you know about its initial conditions? Basically, quantum mechanics creates a tiny sliver of unpredictability, and chaos magnifies that tiny difference into a large difference at bigger scales and later times. If two elephants start out exactly the same, a quantum flip of spin in one electron could lead to one elephant dying of a heart attack, while the other lives a long, happy life.

    If that’s the case, then maybe some higher-level analysis might be able to provide more accurate predictions than the reductionist approach.
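
    A minimal numerical sketch of the “chaos magnifies tiny differences” point, using the logistic map as a stand-in for any chaotic dynamics (the map, parameters, and perturbation size are illustrative assumptions, nothing elephant-specific):

    ```python
    # Two copies of a chaotic system that start almost identically end up uncorrelated.
    # The logistic map at r = 4.0 is a standard chaotic toy model.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x_a = 0.2            # system A
    x_b = 0.2 + 1e-15    # system B, perturbed by one part in 10^15

    for step in range(1, 61):
        x_a, x_b = logistic(x_a), logistic(x_b)
        if step % 10 == 0:
            print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")

    # The difference grows roughly exponentially until it is of order one,
    # at which point the two trajectories carry no memory of having started together.
    ```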

  • http://lablemminglounge.blogspot.com Lab Lemming

    Why is getting philosophy right important?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Amos, to get it right you should really replace “positions and momenta of all the particles” by “the quantum state of all the particles.” That’s a very precisely-defined thing. Uncertainty only comes into the game when you are trying to measure some observable that doesn’t have a definite value in the quantum state. All very interesting and crucial, but not directly important for this reductionist/antireductionist debate, I think. If you got the quantum states right, the elephants would be indistinguishable.

  • Brian Too

    Fools rush in. Be warned that I don’t know what I’m talking about.

    However I think part of the objection to reductionist thinking is that some areas of study are non-deterministic, and the behaviours of groups are one such area.

    For instance, I’ve heard objections to studies that incorporate Monte Carlo simulations. The objection was something to the effect of, you had to run 1000 simulations to get statistically significant data. Yet how do you know that 1000 runs are enough? How do you prove that more simulations are not better? Many of these studies use empirical measures and findings, so they find that actually 100 are enough, but then they perform a whole bunch more just to “make sure”. However the measures are not fundamental and the study itself cannot prove that it was “enough”, or even too much for that matter.

    Another thing. There is a well-known hierarchy of sciences, from most fundamental (physics) to chemistry, to biology, etc., with some thinking that the social sciences are the most “systems” based and complex to study. There is some sort of linkage to the scale of the phenomena being studied too although I don’t want to make too much of that.

    Well, I read E.O. Wilson’s Consilience. It contained a powerful criticism of the social sciences as being self-referential and not sufficiently “scientific”. This is pretty strong stuff. Some would take away a message that the social sciences are not scientific at all (I don’t think this was Wilson’s intent). Nor do I think that the attitude that social sciences are second-class is rare. The term “soft sciences” can be an indictment, however subtle.

    Perhaps some of those opposed to reductionist thinking are reacting in some way against criticisms of the social sciences? I think that some or most anti-reductionist thinkers believe that the social sciences have gotten a bad rap.

  • Physicalist

    Your “sensible reductionist perspective” is typically what is meant by “physicalism”: everything supervenes on the physical.

    The anti-reductionism comes in many flavors, but usually what is meant is (at least) a rejection of the old logical positivist picture that higher level theories (e.g. biology) can be derived from physics.

  • Kevin

    Re: #6:
    But that view, regardless of its philosophical legitimacy, simply isn’t factually supported. Look at the computer simulations that accurately predict protein folding structure from numerical evaluations and approximations of physics equations and principles. It’s become increasingly clear over the past few decades that any system can be derived from physics, if you have a big enough computer.

  • Charon

    @Brian: “Yet how do you know that 1000 runs are enough?”

    The same way scientists doing calculations always figure out how much is enough. Convergence. (Comparison to observation is another, which you mention.) This is why cosmological numerical simulation papers, for example, always talk about a few runs they did with higher resolution, or a larger box size. If the results change a lot, then you don’t have enough resolution/volume, and they can’t be trusted. If they converge, then you’re doing okay. (The model could still be wrong, but that’s a different question.) This isn’t new, and has nothing to do with Monte Carlo in particular. You have to do the same thing when deciding how far out to Taylor expand something, where to truncate any asymptotic approximation, how many powers of the fine structure constant to use in your QED calculation…

    All our calculations are model-dependent. There’s nothing special about Monte Carlo, except that it’s a much easier way of dealing with many complex situations/probability density functions. If you’re worried about technical things like errors in the coverage of your confidence intervals, there has been plenty of work by statisticians to figure out how that scales with N, so yes, you can tell how much is enough.
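
    A toy version of that convergence check, for what it’s worth (the integrand, tolerance, and sample sizes are arbitrary placeholders; the point is just the “run it again bigger and see if the answer moves” test):

    ```python
    # Monte Carlo estimate of E[sin(X)] for X ~ Uniform(0, 1); keep doubling the
    # number of samples and watch whether successive estimates stop moving.
    import math
    import random

    def mc_estimate(n):
        return sum(math.sin(random.random()) for _ in range(n)) / n

    random.seed(0)
    n, prev = 1_000, None
    while n <= 512_000:
        curr = mc_estimate(n)
        if prev is not None:
            print(f"n = {n:7d}  estimate = {curr:.5f}  change = {abs(curr - prev):.5f}")
        prev, n = curr, n * 2

    # Exact answer for comparison: 1 - cos(1) ≈ 0.4597.
    ```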

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    My “sensible reductionist perspective” is a lot stronger than “physicalism.” I said “objects are completely defined by the states of their components,” not “objects are completely defined by their physical states.”

    It seems clear, under reasonable construals of “completely defined,” that if objects are completely defined by the states of their components, then any accurate higher-level picture would be completely dependent on what was happening with the components. That doesn’t mean that you can “derive” every interesting higher-level theory in practice, but it means that every higher-level theory is simply a useful repackaging of what’s going on at the lower level.

  • TimG

    I’m reminded of P. W. Anderson’s article “More is Different”. He takes as a given the “reductionist hypothesis”, which Anderson defines in the article as the idea that the workings of all things large and small are controlled by the same fundamental laws of physics. But Anderson argues against the “constructionist hypothesis”, which he defines as the idea that one could start from the fundamental laws and reconstruct the universe. In particular, Anderson emphasizes that larger scale structures may not obey the symmetries that occur in the fundamental laws, because of spontaneously broken symmetry. It’s a great article that I can’t do justice to in this summary; for anyone who hasn’t read it, it’s definitely worth looking up.

    Although Anderson characterizes his position as “reductionist” but not “constructionist”, it sounds a bit like the sort of “anti-reductionist physicalism” described by commenter #6 above.

  • TimG

    Sean, with regard to “objects are completely defined by the states of their components”, what about entanglement? In some cases you *can’t* separate the state of a multi-particle system into the states of its components.

    Although maybe I’m nitpicking, since we aren’t exactly running into macroscopic entangled states in our daily lives (hypothetical half-dead cats notwithstanding.)
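
    To make the entanglement point concrete, here is a small numpy sketch (purely illustrative) showing that a Bell pair’s reduced single-particle states do not recombine into the joint state:

    ```python
    # For the Bell state |Phi+> = (|00> + |11>)/sqrt(2), each qubit's reduced state
    # is maximally mixed, and the tensor product of the parts is NOT the whole.
    import numpy as np

    ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    phi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
    rho = np.outer(phi, phi)                          # joint 4x4 density matrix

    rho4 = rho.reshape(2, 2, 2, 2)                    # indices: (a, b, a', b')
    rho_A = rho4.trace(axis1=1, axis2=3)              # trace out qubit B
    rho_B = rho4.trace(axis1=0, axis2=2)              # trace out qubit A

    print(rho_A)                                      # [[0.5, 0.], [0., 0.5]]
    print(np.allclose(np.kron(rho_A, rho_B), rho))    # False: the parts underdetermine the whole
    ```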

  • Physicalist

    @ Sean (#9), you did specify components, but in these debates it is usually implicit that the physical states in question are “micro-physical” states, which means that one is speaking of component “particles” (since the debates almost never address field theory). So I took your claim to be essentially the same as a commitment to supervenience on the micro-physical.

    Although I should admit that I don’t really understand what you mean when you say that “objects are completely defined.” I take it you’re not thinking of a conceptual (or linguistic) definition here. I presumed that you were speaking of the state of some complex macroscopic system, and requiring that such macroscopic states be fixed by the microphysical states of the components (a claim I would agree with), but perhaps I misunderstand you.

    However, you and I know (but most philosophers involved in these reduction debates don’t know) that this supervenience of composite states on the states of the components fails in the context of quantum mechanics (i.e., when we have entangled states). So interestingly, this sort of physicalism doesn’t strictly hold, but this fact is largely irrelevant for higher levels like biology and psychology (because the states of these systems are determined by the states of their components).

    Most of the debate over reductionism comes in when we try to make clearer what counts as a “useful repackaging” and what counts as something genuinely novel. I tend to be on your side here, but I can see the force of the claim that oftentimes it is precisely the useful repackaging that’s doing the real explanatory work — and for this reason we should reject claims of explanatory reduction (i.e., that all explanations could in principle be eliminated in favor of physical explanations).

    @ Kevin (#7): No one denies that we can sometimes derive some things (at least in principle) from the underlying physics. The question is whether everything can be derived (or explained). Here’s a standard example to give you a sense of the worry (perhaps Dupré gives it in the video — I haven’t had time to watch it, which is why I tried to keep my earlier comment brief — but he does discuss it elsewhere):

    Suppose that some particular rabbit gets eaten by some particular fox. Someone studying the populations of these organisms might explain this by pointing to the fact that the fox population is particularly high, and this makes it likely that any given rabbit will get eaten. A micro-physical account would be able to predict that that particular rabbit would be eaten by that particular fox. However, the population account tells us that even if the rabbit survived that particular encounter, it would be unlikely to continue to survive for long. The micro-physical account by itself doesn’t tell us that. Indeed, the micro-physics would be exactly the same if the fox population were low, but this one rabbit just happened to get very unlucky.
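
    A toy simulation of that population-level fact (all numbers are invented for illustration): the rabbit’s chance of surviving the season is fixed by the fox density even though any single encounter could go either way, which is exactly the information the population account carries and a single micro-history does not.

    ```python
    # Toy model: a rabbit faces an independent daily risk of being eaten, with the
    # daily risk set by how many foxes are around (all numbers are made up).
    import random

    def survival_probability(daily_kill_prob, days=90, trials=20_000):
        """Fraction of simulated rabbits surviving a 90-day season."""
        survived = sum(
            all(random.random() > daily_kill_prob for _ in range(days))
            for _ in range(trials)
        )
        return survived / trials

    random.seed(1)
    print("many foxes:", survival_probability(0.05))    # ~0.01: most rabbits get eaten
    print("few foxes: ", survival_probability(0.002))   # ~0.84: most rabbits survive
    ```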

    The upshot is that it’s often very difficult (Dupré would say impossible) to say exactly what needs to be derived, or to give an account of how such derivations would go. Indeed, a careful look at the case of reducing thermodynamics to statistical mechanics (which is usually taken to be the paradigm case of a successful reduction) highlights the sort of difficulties involved.

    I should probably mention that I’m more on Rosenberg’s side than Dupré’s in this debate; I’m a much stronger defender of (some forms of) reduction than most other philosophers. But even I don’t think we get full-blown derivability or explanatory reduction.

  • http://sites.google.com/site/russabbott Russ Abbott

    Neither elephants nor any other macro “objects are completely defined by the states of their components.” The components that “define” an elephant (if that concept even makes sense) change from moment to moment. Yet the elephant’s overall structure and behavior remain relatively constant.

    This is not to imply that there is some mysterious elephantness force that keeps an elephant together. But it is to say that to provide a reasonable scientific explanation/description of how elephants behave one must talk about more than an elephant’s components.

    You seem to be granting that. But in granting that you are admitting into your ontology higher level entities. The higher level entities are the entities that the explanations/descriptions of elephants refer to. Doing so is a rejection of pure reductionism. In other words, you are not a reductionist. That’s fine. But I wish you would acknowledge that instead of complaining about it.

    Here are some more examples. How would a reductionist explain/describe evolution? How would a reductionist explain/describe the election we held yesterday? How would a reductionist explain/describe our current economic situation?

    It’s not just a matter of saying it could be done but it’s too complex. The fact is that there are no concepts at the level of elementary particles that can be used in those explanations/descriptions.

    To talk about these phenomena in any meaningful way one must talk about entities whose behavior is best explained/described in terms of them as primitives rather than in terms of the behavior of their components. The components of a dollar bill really have nothing to do with how people treat it–although they have everything to do with how it deteriorates over time, i.e., how its physical environment treats it. These are very different things. Why do you find it so hard to acknowledge that?

  • AI

    Sean why do you think that “space is obviously not fundamental”?

    @7:
    I don’t know where you got the idea that computers accurately simulate protein folding, but it is completely false. The most sophisticated modeling programs struggle with even simple proteins, and the results are very crude and unreliable.

    Furthermore, those programs are mostly based on empirical measurements and huge libraries of already empirically determined protein structures; actual physics plays a relatively minor role, so they certainly fail as examples of “biology derived from physics.”

    Now, I am not saying that it is impossible in principle, only that it hasn’t been done so far and won’t be done in the near future.

  • Moshe

    Sean, I am curious whether philosophers have considered dualities, the bootstrap, and similar ideas. If you have a dual pair of theories, the roles of what is fundamental and what is composite switch between descriptions, and generically no one description is better than another. Seems to me the best way to sidestep this somewhat tedious issue.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    AI, just read Moshe’s comment — that’s most of the point of my talk. I think that some philosophers have thought along those lines, but it’s not common. A good number of them believe strongly that space is fundamental.

  • Moshe

    I had in mind something even simpler: QFT in flat spacetime, where you can have solitons and fundamental quanta which presumably “make up” those solitons. But, which object is fundamental and which composite depends on the description. Different descriptions are more convenient in different situations, but none of them is more correct than the other. Emergence of space is related, but not precisely the same thing.

    (To get an intuitive picture of this, one has to first realize that the fundamental object of QFT is a quantum field, and point-like particles are a derived concept, which is not always all that useful. But this is a conversation for another time.)

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Sure, there are some great examples of soliton/particle dualities, which illustrate the basic point nicely. So I’ll begin my talk with the statement that “What is or is not fundamental is not fundamental.” Honestly I’m not sure what is fundamental, outside of maybe the Schrodinger equation (and there are plenty of equivalent formulations for that).

    But the most direct example is something like AdS/CFT, which makes the “space is not fundamental” point about as directly as you can imagine.

  • Moshe

    Sounds like an excellent point to make, and not that easy to establish for people who are not used to it. Good luck!

  • Earl Campbell

    In agreement with TimG, I also think that the statement “objects are completely defined by the states of their components” does not account for quantum mechanical phenomena such as entanglement!

    I think that quantum mechanics is evidence that good science does not have to be single-mindedly reductionist in its approach, and I have occasionally wondered how one might go looking for new high-level phenomena occurring in large collections of particles.

  • Louis

    Sean said:

    “I think that some philosophers have thought along those lines, but it’s not common. A good number of them believe strongly that space is fundamental.”

    The idea that space is not fundamental is common in Indian Buddhist philosophy. I’m not going to inventory the opinions of all schools of Indian philosophy but what comes to mind, off the top of my head…

    The first systematic form of Buddhist philosophy is called Abhidharma. It came into existence in India sometime BCE, but the exact chronology is difficult to establish. The Abhidharmists agreed on general lines of method but disagreed on details. One major group, the Sarvāstivādins, posited two theoretical entities which relate to what we normally talk about as “space”:

    – space (ākāśa)

    – the space-element (ākāśadhātu)

    They held that things like tables, people, houses, and cows are all composed of obstructive atoms of matter. The space-element was their explanation for any opening, expanse, or empty region between the solid material things. So in a room, the walls, roof, and floor are all made of obstructive atoms. However, the middle of the room, the space, is composed of the space-element, which is also atomic but non-obstructive. (By the way, I’m not aware of any discussion of the space-element being specifically a gas.)

    Now, space (ākāśa) is not the same as the space-element (ākāśadhātu). The space-element is matter so it is displaced by other matter but space is immaterial. The Sarvāstivādins hold that space pervades all entities which enter into any spatial relationship. It is the necessary element which allows for any kind of spatial relationship to occur. It is equivalent to the idea of a container space in which things happen.

    Now, two groups of Abhidharmists reacted to the Sarvāstivādins: the Dārṣṭāntikas (which existed by at least the 2nd cent CE, probably earlier) and the Sautrāntikas (their major comprehensive work composed in the 5th century CE). Both held that space (ākāśa) is not an actual element of reality but just a way of speaking. They did not deny the space-element (ākāśadhātu), which is matter which can be obstructed by other matter but does not itself obstruct other matter. However, space, which is wholly immaterial, does not obstruct and is not obstructed, has no reality for them. So they denied the idea of a container space.

    To briefly talk of other groups, the Madhyamakas also deny that space is a fundamental element of reality. Same for the Yogācārins, who based some of their ontology on the Sautrāntikas. Overall, the idea that space is not fundamental is common in Indian Buddhist philosophy.

    (And time is fundamental to only a few Buddhist philosophers. The majority opinion is that it is not fundamental. It does not appear as an element of reality for any of the groups mentioned above.)

  • galen

    As just a little aside, imagine an infinite collection of identical pairs of socks, say P[1], P[2], P[3] etc. Bertrand Russell is famous for, among other things, pointing out that without the Axiom of Choice there is no way one can select exactly one sock from each pair; i.e. there is no function F on the collection of pairs so that for every n, F(P[n]) is an element of P[n].

    Now imagine a world whose micro-states are grouped into disjoint macro-states P[1], P[2], P[3] etc. Suppose one of the laws of the macro-world is: if the world is in macro-state P[n] it will proceed to macro-state P[n+1]. Reductionism requires this law should emerge from a deeper law; i.e. there should be a transition function T on the micro-states such that if the world is currently in a micro-state S that’s an element of P[n], then it will proceed to the micro-state T(S) which is in P[n+1]. If there were such a transition function from which the macro-law emerges, then one could use it to recursively define a choice function F on the family of macro-states; i.e. let F(P[1]) be any member of P[1], and let F(P[n+1]) = T(F(P[n])). In other words: the Axiom of Choice (at least this version of it) is a consequence of reductionism!

    Without the Axiom of Choice reductionism might fail.
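
    In symbols (just a restatement of the recursion above, nothing added): given a micro-level transition function \(T\) with \(S \in P_n \Rightarrow T(S) \in P_{n+1}\), set

    \[
    F(P_1) := s_1 \ \text{for one arbitrary } s_1 \in P_1, \qquad F(P_{n+1}) := T\bigl(F(P_n)\bigr),
    \]

    so that \(F(P_n) \in P_n\) for every \(n\); that is, \(F\) is a choice function on the family of macro-states.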

  • http://backreaction.blogspot.com/ Bee

    “space is obviously not fundamental”

    Please explain “obviously”.

    Regarding anti-reductionism, you might find this interesting:

    Infinity really is different, or rather the paper it is about: “More really is different”

  • felix

    “Try as I may, I can’t come up with a non-straw-man version of what it is the anti-reductionists are actually objecting to.”

    Phil Anderson, one of the most prominent anti-reductionists, had something very concrete to object to. He testified before Congress against funding the SSC.

  • Eugene

    Wow, I don’t think I will ever be able to think like a philosopher.

  • Spatial

    “space is obviously not fundamental”

    Please tell me what space is made of, then. I’d like to make some.

  • Ben

    Kevin said:

    It’s become increasingly clear over the past few decades that any system can be derived from physics, if you have a big enough computer.

    This is the position that can legitimately be opposed by anti-reductionism, I think, and also gives physicists who espouse it a bad reputation. First, the “if” is an unrealistic if. And there are some problems where the complexity or speed of the algorithm means the “big enough” computer is of extraterrestrial dimensions.

    Second, biology, chemistry and physics are systems of knowledge. They happen inside people’s minds. The biological, chemical, physical phenomena are real, but the organizing principles we use to describe them are mental. In that sense, you don’t derive biology (as a system of knowledge that abstracts biological phenomena) from physics (another system of knowledge, which is compatible with but does not subsume biology). You can use a simulation of physical phenomena to predict biochemical phenomena, but you aren’t reducing laws of biology to laws of physics.

    After all, even in physics, running a giant simulation doesn’t necessarily yield useful organizing principles; you need some way of abstracting the output into a simpler general principle.

  • Charon

    Read Steven Weinberg’s Reductionism Redux, in which he distinguishes between “grand” and “petty” reductionism. Various people here (Russ Abbott, Physicalist, Ben, etc.) might benefit from this. I posted a comment yesterday that quoted Weinberg’s definition and gave a link to the essay, but apparently The Machine ate it, thinking it was yummy spam or something.

  • Boaz

    I enjoyed watching this debate and reading this post a lot!

    Regarding the reducibility of the elephant to its components, I think the response that it depends on the environment is a valid one. It’s clearer in the case of protein folding: one can’t answer the question by just discussing the components; one needs the environment also.

    As Dupré says around 34:40, he thinks this is the heart of the argument. Some systems are well understood with reductionism, and some are not. In particular, those that don’t depend much on their environment can be usefully analyzed in terms of their components.

    The other interesting point for me is the difference between physics and physicalism, and the claim that physics’ attempt to call all explanations somehow a part of the science of “physics” is a kind of imperialism (43:43). You usually have to change the question a bit before it’s posed in the form of a physics question. When Sean says the reductionist statement is “objects are completely defined by the states of their components,” this says that we can basically only talk about things that are internal to a given thing. Dupré is saying that when we talk about stuff, the concepts we apply are often relational qualities. An elephant may be called “friendly,” for example, and that may depend on the other elephants around.

  • Ben

    Reductionism Redux was somewhat familiar; I may have read it when it came out, back in grad school. In any case, I often disagree with Weinberg’s philosophy of science (IIRC, Weinberg does not even like Thomas Kuhn’s work), but in this case I don’t think it disagrees with what I wrote. Here is Weinberg, from http://www.nybooks.com/articles/archives/1995/oct/05/reductionism-redux/ :

    Of course, everything is ultimately quantum-mechanical; the question is whether quantum mechanics will appear directly in the theory of the mind, and not just in the deeper-level theories like chemistry on which the theory of the mind will be based. Edelman and Penrose might be right about this, but I doubt it. It is precisely those systems that can be approximately described by pre-quantum classical mechanics that are so sensitive to initial conditions that, for practical purposes, they are unpredictable.

    One of the implications of this point is not only that biology-as-theory is not necessarily reducible to physics-as-theory, but that maybe you couldn’t ever run a giant physics simulation of something as complex as a brain and hope to learn anything predictable, because of the sensitivity to initial conditions.

  • http://tsm2.blogspot.com wolfgang

    Sean,

    you write:

    > A sensible reductionist perspective would be something like “objects are completely defined by the states of their components.”

    But this already makes no sense for an atom. There is no quantum state of each component, only a state of the whole thing.

    As for the elephant, remember the no-cloning theorem.


  • uhmmm

    If Sean had an MI while arguing with a philosopher and wound up with an artificial heart, would he still be Sean, and would he still be considered a human being?

    If we start replacing bits and pieces of him with prostheses and implants, at what point does he cease being Sean? Where is the Sean nature? In the fingers he types with? In the larynx he uses to speak? In his brain?

    At what point does bionic-Sean stop being a human being? Suppose we give him artificial kidneys and a synthetic liver to go with his new heart. Is he still human? Should he still be treated as human under the law? What would *he* say?

    What if we replace everything *but* his brain? Still Sean? Still human?

    Does it matter when Sean has bits and pieces of him replaced? Is Sean’s-brain-in-plastic as human as a foetal brain transplanted into an artificial (and perhaps very non-bipedal-looking) body and grown into adulthood?

    What about the elephant? Is a bionic elephant still an elephant? What does adult-elephant-in-artificial-body think about things? Would it find natural elephants attractive? What would a very young elephant brain raised in a non-elephantine body think about things? Would *it* find natural elephants attractive? (Cats raised by humans totally isolated from all knowledge of other cats still try to mate with other cats; likewise, cats and dogs raised in environments where they encounter all manner of other animals regularly sometimes try out a bit of forbidden dog-on-cat love. See youtube…)

    We could go further and start augmenting brains, or perhaps even completely replace them: record all the memories of a person and “restore” them on compatible hardware — another brain, or a brain with electronic enhancements, or perhaps something entirely artificial. When you woke up after the procedure would you think you were less you than if it was “merely” a heart replacement surgery?

    Reductionism is a useful tool in understanding how elephants and humans lay down their memories, and in understanding how to replace the various organs that feed those processes in the brain until they can be sufficiently replicated in some other medium. “What is a memory?” and “What is a personality?” are questions answerable in Sean’s “sensible reductionist perspective”.

    WE JUST HAVEN’T ANSWERED THOSE QUESTIONS YET.

    The lack of complete answers is not a reasonable condemnation of a reductionist approach. You would have to argue that those answers cannot — even in principle — be found in examining the components of the brain and its environment (the organism and things outside that).

    “Sensible repackaging” — abstraction — is convenient both when the underlying mechanics are extremely tedious to work with and when there is actual theory choice because the underlying mechanics are not fully known. Indeed, an abstraction that is in very very close agreement with observation in some useful limit is also a powerful tool for verifying underlying theories. If your “micro-physical” theory cannot in principle reproduce “molecule”, “organelle”, “eukaryotic cell”, “organ”, “elephant”, and “herd of elephants” then it’s wrong. However, it’s reasonable that you can’t recover all of that *yet*.

    Sean wrote: “nobody thinks that the right approach is to break a giraffe down to quarks and leptons and start cranking out the Feynman diagrams. (If such people can be identified, I’d happily join in the condemnations.)”

    Is that a permanent objection? If at some point the ability arises to compute the evolution of a system of quarks and leptons making up an object the size of an elephant and its immediate environment, wouldn’t it be a useful tool for predicting the future actions of that particular elephant?

    Obviously we can’t do this now and almost certainly won’t be able to in the next couple of years.

    How about something that’s the scale of a prion or a small virus? Or a prokaryotic cell? Or two neurons? Do you condemn anyone looking below the level of molecules and atoms for objects of those scales?
