How Emergent is the Brain?

By Neuroskeptic | February 2, 2019 6:39 am

A new paper offers a broad challenge to a certain kind of ‘grand theory’ about the brain. According to the authors, Federico E. Turkheimer and colleagues, it is problematic to build models of brain function that rely on ‘strong emergence’.


Two popular theories, the Free Energy Principle (aka the Bayesian Brain) and Integrated Information Theory, are singled out as examples of strong emergence-based work.

Emergence refers to the idea that a system can exhibit behavior or properties that none of its individual parts possess. Such a behavior or property ‘emerges’ from the whole system, in other words, from the parts and their interactions. To give an example, a single person could run around and kick a ball, but a single person could never play a game of soccer – soccer emerges from a group of people.

It seems very plausible that the brain is an emergent system – that complex functions emerge from the interactions between lots of neurons. But Turkheimer et al. say that we need to distinguish between two different kinds of emergence, strong and weak:

A system is said to exhibit strong emergence when its behaviour, or the consequence of its behaviour, exceeds the limits of its constituent parts. Thus the resulting behavioural properties of the system are caused by the interaction of the different layers of that system, but they cannot be derived simply by analysing the rules and individual parts that make up the system.

Weak emergence, on the other hand, differs in the sense that whilst the emergent behaviour of the system is the product of interactions between its various layers, that behaviour is entirely encapsulated by the confines of the system itself, and as such, can be fully explained simply through an analysis of interactions between its elemental units.

I have to say that I don’t quite follow this distinction. Does anyone really believe that the brain is such a strongly emergent system that we could never, even in principle, ‘explain it through an analysis of interactions between its elemental units’? Apart from the units and their interactions, what else is there – unless we invoke dualism?

I think that what Turkheimer et al. really mean by ‘strong emergence’ is a theory which posits strong ‘top-down’ influences in the brain, such that we can’t understand the ‘lower’ levels without understanding the ‘higher level’ causes. The ‘top down’ nature of the Free Energy Principle/Bayesian Brain model seems to be what makes it strongly emergent, in the authors’ view:

The Bayesian computational model of brain function, also called the “free energy principle” (FEP) is… a paradigmatic exemplar of strong emergence (Lestienne, 2014). In this model, brain-environment interactions of an agent are represented as a loop in which the primary sensory inputs are first processed with prior knowledge of the most probable cause of these signals in a top-down fashion; the brain then combines prior and sensory information and calculates the posterior percept…

…This largely Bayesian hypothesis formulates perception as a constructive process based on internal models. As FEP is operated by a set of rules that are treated independently of underlying neurobiology and only loosely constrained (inspired) by metabolic anatomical/neural constraints, FEP can be considered strongly emergent.
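
To make that concrete, here is a minimal sketch (my illustration, not code from the paper) of the generic prior-plus-likelihood calculation the quote describes: a Gaussian prior belief and a noisy Gaussian sensory estimate are combined, precision-weighted, into a ‘posterior percept’.

```python
# Illustrative only: precision-weighted fusion of a Gaussian prior with a
# Gaussian sensory likelihood, the textbook "Bayesian brain" computation.
# All numbers are made up for the example.

def posterior_percept(prior_mean, prior_var, sense_mean, sense_var):
    prior_precision = 1.0 / prior_var
    sense_precision = 1.0 / sense_var
    post_precision = prior_precision + sense_precision
    post_mean = (prior_precision * prior_mean +
                 sense_precision * sense_mean) / post_precision
    return post_mean, 1.0 / post_precision

# Prior: the object is about 10 units away; a sharper sensory input says 14.
mean, var = posterior_percept(prior_mean=10.0, prior_var=4.0,
                              sense_mean=14.0, sense_var=1.0)
print(mean, var)  # 13.2, 0.8: the percept is pulled toward the more precise cue
```

Every step here is an ordinary mechanistic calculation; whether piling up such calculations deserves the label ‘strongly emergent’ is exactly what is at issue.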

Similarly, Integrated Information Theory as a model of consciousness is a strongly emergent theory because it holds that “emergent phenomena are more accurate descriptions of underlying reality” than reductionist approaches.

So what’s the problem with strong emergence? Turkheimer et al. are a little vague on this point, but by my reading, they question whether strong emergence provides any kind of real understanding or has any predictive power:

The paradigm of strong emergence seems not to have moved far from the perennial philosophical puzzle of emergent phenomena floating inconsistently over some unspecific physical substrate. The whole of the emergent phenomena still cannot be reduced or explained by its parts; thus, it follows that no change in its components can have a predictable effect on the whole.

This cartoon, which the authors reproduce with permission, seems to sum up their view of strong emergence:

[Cartoon: "magic"]

Turkheimer et al. go on to describe how a weaker form of emergence is a more appropriate model and they highlight recent work (some of it their own) that seeks to model brain function from the single-cell level up to behavior by taking coupled oscillators (representing pairs of neurons) as the fundamental units.
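
For a concrete sense of what taking ‘coupled oscillators as the fundamental units’ looks like, here is a toy Kuramoto-style sketch (my own minimal example, not the authors’ actual model): two oscillators with different natural frequencies, each nudged by the sine of their phase difference, settle into a synchrony that neither possesses on its own.

```python
import math

# Toy Kuramoto-style pair (illustrative, not the authors' model): each
# oscillator's phase velocity is its natural frequency plus a coupling term.
def simulate(freq1=1.0, freq2=1.3, coupling=0.5, dt=0.01, steps=5000):
    theta1, theta2 = 0.0, 2.0  # arbitrary starting phases
    for _ in range(steps):
        d1 = freq1 + coupling * math.sin(theta2 - theta1)
        d2 = freq2 + coupling * math.sin(theta1 - theta2)
        theta1 += d1 * dt
        theta2 += d2 * dt
    return (theta2 - theta1) % (2 * math.pi)

print(simulate())  # the phase difference locks near a constant (~0.3 rad):
                   # synchrony lives in the interaction, not in either part
```

Scaling this sort of thing up to many oscillators with realistic connectivity is, roughly, the ‘weakly emergent’ modelling strategy the authors favour.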

In my view, this is a provocative and interesting paper, but ‘strong emergence’ seems to be a bit of a strawman here. I don’t know much about Integrated Information Theory, but the Bayesian Brain model, as I understand it, is based on a purely mechanistic model of the brain. It does feature lots of top-down information transmission (i.e. signals from ‘higher’ to ‘lower’ brain areas), but not in any mysterious sense. The top-down signals are modelled in just the same way as bottom-up signals.

Then again, I’m no expert on recent work on the Bayesian Brain, and perhaps it has strayed into more strongly emergent territory recently?

  • NeuroFidelity

    “I have to say that I don’t quite follow this distinction. Does anyone really believe that the brain is such a strongly emergent system that we could never, even in principle, ‘explain it through an analysis of interactions between its elemental units’? Apart from the units and their interactions, what else is there – unless we invoke dualism?”

    A lot of people (more than you may think) believe in dualism (although they disguise it through complex cognitive dissonance to deny being irrational).

    Some others claim that the interactions of the parts of the system are not parts of the system themselves. Therefore, you cannot say that the whole is *just* the sum of the parts. This is semantics, and therefore, a useless discussion.

    Finally, there are people who defend strong emergence, but for epistemological reasons. They acknowledge there is no ontological strong emergence, but hold that for all practical purposes we can presume there is.

    • TLongmire

      How about “experience” as proof of emergence? I truly believe the mind is entangled by the generated emanation of energy created by the brain/body in a spherical oscillation interacting with an infinite sphere where one can interface with any information fathomable if focused thru the underwritten laws of the holographic principle.

      • Michael Cleveland

        Word salad is so beautiful.

        • TLongmire

          It’s actually a graspable concept that becomes obvious.

          • Michael Cleveland

            Then you will have to find a way to express it without resorting to fringe element jargon.

          • TLongmire

            When I see an explosion in slow motion I see an expanding sphere expanding in a complex purposeful way, distinct at an expanding horizon. Everything of that explosion can be represented through an equation considering distance from the origin. Mathematics works, but it’s not the best way to understand the forces at work but a rabbit hole to get you to the same place without fully grasping what is happening. Consider naively how you understand these symbols you “read”. The ideas are shown and separate.

          • PaulTopping

            Now the salad has some dressing but it’s still a word salad. Heavy on the woo.

          • TLongmire

            I’m describing your consciousness in multidimensional space in the here and now. Once humans and A.I. interface the concept holds true because it’s already happening.

          • Michael Cleveland

            You haven’t described anything–at least not coherently. You might want to try again when you come down from wherever you are. Lay off the bug juice and pink powders (or is it golden mushrooms?) and you might be able to make some sense.

          • Kwame

            You might want to have a look at a few more explosions in slo-mo, there are multiple fronts of energy dissipation, and most of them aren’t spherical.

  • http://arturotozzi.webnode.it/ Arturo Tozzi cns

    “As FEP is operated by a set of rules that are treated independently of underlying neurobiology and only loosely constrained (inspired) by metabolic anatomical/neural constraints, FEP can be considered strongly emergent.” If you replace the word “FEP” with “coupled oscillator”, nothing changes.

    • TLongmire

      If shortcuts are permitted then A.I. could be given the task of fully “knowing” itself within a sphere then observing itself on the basis of the holographic principle. When the inside understands the outside game is on!

    • Len Yabloko

      I agree. But many seem to think the FEP’s explanatory power is much greater than that of Huygens’ clocks.

  • https://www.facebook.com/app_scoped_user_id/529166289 Martin Florén

    The connection between the concept of emergence and the Bayesian Brain is a new one for me, and it sure sounds interesting.

    The distinction between strong and weak emergence is a quite important theme in philosophy of mind. Basically, if consciousness is the result of strong emergence then free will is possible. This is because strong emergence implies that minds cannot be reduced to the simple laws governing physical entities such as elementary particles. If there is only weak emergence (or no emergent properties at all), then minds are simply part of physical chains of cause and effect – meaning that minds have no causal power, and hence that there is no such thing as free will.

    There is of course way more nuance to it than that. There’s a paper on it by David Chalmers that probably serves as a good primer for anyone who is interested: http://www.consc.net/papers/emergence.pdf
    I would say strong emergence is far from a strawman. There are quite a few people who would much prefer their emergence to be of the strong kind, and there are lots of proponents of this type of emergence.

    • Michael Cleveland

      This would be very interesting if there were such a thing as free will, but there (almost) isn’t. Only psychopaths can exhibit true free will. The rest of us will always be constrained by factors that have nothing to do with free choice.

      • https://www.facebook.com/app_scoped_user_id/529166289 Martin Florén

        I’m not sure I follow you. If no one is an agent, i.e. makes choices, then there is no free will. Empathy and morality have nothing to do with it. If the mind is governed by deterministic laws, then that applies just as much to psychopaths as to someone with a ‘normal’ brain, and vice versa.

        • Michael Cleveland

          If I tell you (a presumably normal person) to go rob a bank, you won’t do it, not because you choose not to, but because you are conditioned socially in such a way that you are incapable of doing it. You could not wantonly kill the first stranger who walked by on the street because your conditioning prevents it. You are the product of your genes. You are the sum total of the influences around you from the time you were born, and to some extent the influences of the people around you, who have been molded since they were born, for a considerable distance back in time. You are the sum of your likes and dislikes, what you had for breakfast this morning, the argument with your other–or the making up–the night before. You exist in a sea of collective influences and experience, past and present, that drive the decisions you make. You have a conscience. Conscience and true free will are mutually exclusive, but it goes much deeper than that. You are who you are, and if you are normal, you have no capacity to step outside yourself.

          • https://www.facebook.com/app_scoped_user_id/529166289 Martin Florén

            The issue here is not whether people are influenced by external factors, because of course they are. This also goes for someone who lacks empathy, by the way, although the specific factors might vary from person to person. For example, a psychopath might abstain from said robbery for fear of getting caught. But all of that is really beside the point. In the discussion at hand, ‘free’ does not mean ‘free of constraints’. Rather, it is about whether people could actually have acted differently than how they did in a specific situation.

            What is interesting about strong emergence is that it, according to its proponents, explains why we have this intuitive feeling of being in charge of our actions despite the fact that the universe around us operates according to laws of cause and effect. In other words, it allows minds to be something more than mere collections of particles that behave in predictable ways because of factors external to the person. It allows minds to be the ultimate cause of actions, rather than one of many links in a causal chain.

            If the strong emergence thesis is shot down, then the idea of free will dies with it. Or, well, at least one specific version of the free will argument. If our minds can be reduced to simple physics, then I have no more agency than a pool ball or a satellite in orbit. This is something that goes against our subjective experience, and hence people might find it counter-intuitive. It also goes against many people’s idea of how morality and law work, i.e. that people are responsible for their own actions.

            There are of course other theories of morality. And not all people find the idea of non-free will problematic. But the question of whether we can actually be the ultimate cause of our actions is quite an important one to a great many people. As the workings of the brain are an integral part of that puzzle, research such as this is of quite some interest.

          • OWilson

            On the contrary, humans can throw off their norms and “conscience” when faced with severe adversity, starvation, war, you name it!

            The capacity for stealing, looting and even killing for survival of self or family lurks not too far down in the human psyche.

            Even cannibalism.

          • Michael Cleveland

            You’re missing the deeper picture. They aren’t throwing off anything. Circumstances that demand survival behavior are among those external influences that drive our actions. Humans can throw off survival behavior, too, when there is a compelling reason, but it is never a case of truly free choice. It always derives in some degree from our programmed capacities. When someone “rises to the occasion,” that behavior does not just appear out of the blue, but from some innate capacity that derives from the totality of past experience.

          • OWilson

            Using that standard you can excuse any type of behaviour, from celeb shoplifting, to looting neighborhood mom and pop stores, to robbing banks, to murder for financial gain.

            Not to mention routinely lying politicians!

            It’s just some “innate capacity that derives from the totality of past experience!”

            OK! :)

          • Michael Cleveland

            It’s a quandary, isn’t it? Heroism likewise disappears. I’m not suggesting that people should not be held accountable for their actions, but if free will ever comes to be recognized for the myth that it is, then different rationales for punishing bad behavior and acknowledging good would have to be addressed. Not to worry, however. The myth is too convenient, too deeply ingrained in social consciousness to ever be overturned. No serious question of its validity will ever get beyond back room discussions like this–though I would try to avoid discussing it with hard core, far left liberals. Who knows what catastrophes might arise?

  • PaulTopping

    I have grave doubts about the validity of the weak vs strong emergence idea. Weak emergence seems to be about cases where we understand how the individual parts combine to produce the emergent behavior. Strong emergence is where we don’t. If the brain exhibits strong emergence, it just means we haven’t yet discovered how its parts combine to produce human behavior. If anyone can prove this is impossible, then I might believe in strong emergence. Until then, it sounds like yet another case of theorists despairing of ever knowing how the brain works and filling the vacuum with some sort of brain magic.

    • http://www.pbase.com/davidjl David Littleboy

      You wrote:

      ” If the brain exhibits strong emergence, it just means we haven’t yet discovered how its parts combine to produce human behavior.”

      ROFL. Exactly. There’s this great love for emergent phenomena, especially among many current AI folks. They _hate_ to even think about doing the work of figuring out what human intelligence is, how to implement it independently of brains, and how the brain implements it. They want “massively parallel” “neural nets” to magically display “intelligence”. (Despite the obvious point that current NN models can’t do symbolic reasoning and that there’s no way to make them do that. Oh, yes, and that current NN models don’t even vaguely resemble neurons in any way whatsoever. Neurons: spatially extensive, multiply connected (the average neuron makes 8,000 connections), compute non-linear functions. NN models: none of the above.)

      Sigh. (Your “filling the vacuum with some sort of brain magic.” is also spot on and lovely.)

      • Misha Monahov

        But artificial neurons compute non-linear functions too (e.g. ReLU is non-linear; a quick numerical check is sketched just below). The number of connections in NN models may vary and may resemble the number of connections in the brain (several hundred per cell in cortex, not several thousand, as far as I know). Spatial extent is probably not important; the functional characteristics of connections are what matters. I agree that NN models miss some important properties of real networks, but what you mentioned does not sound like important differences.
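
        A quick, generic check of the ReLU point (purely illustrative, not tied to any particular model discussed here): non-linearity just means that additivity fails.

        ```python
        # ReLU is non-linear: f(a + b) is not in general equal to f(a) + f(b).
        def relu(x):
            return max(0.0, x)

        a, b = -3.0, 2.0
        print(relu(a + b))        # relu(-1.0) -> 0.0
        print(relu(a) + relu(b))  # 0.0 + 2.0  -> 2.0
        ```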

        • http://www.pbase.com/davidjl David Littleboy

          What I mean by “non-linear” is that a given pattern of inputs turns off a neuron regardless of other inputs. ReLU is a different thing. And I don’t see any reference giving less than 7,000 connections on average (cerebellum may be less, or more. Maybe.). And I don’t see how spatial extent isn’t important: the ability to communicate with distant neurons is critical. The eye to the visual cortex in the back of the brain is only a few steps (with quite a bit of computation at each step, of course). Axons in general are enormously long compared to the size of the neuron cell body or the average volume of the neuron (if you are counting neurons per cubic mm).

          I should say, though, that I’m talking about NN snake oil salesmen. I have great respect for David Marr, neuroanatomy types, the blokes trying to simulate the 300 neurons in C. elegans. My intuition is that it’s people looking at real neurons and trying to simulate real neurons, not people doing computation models based on regular arrays of computational units, who are going to make actual scientific progress.

          • Misha Monahov

            Your last statement is encouraging) Regarding details: the number of connections that can be identified depends on the identification method. In one study, monosynaptic rabies tracing revealed about 400 inputs per neuron in cortex (Wertz 2015). It can be more or can be less, but I’m not sure if the numbers are essential for understanding function. Distance of connections obviously affects the function, but I still think it is not the most important property; the same signal can be delivered through a long connection or through a short one. What kind of signal is transmitted (e.g. what is the amplitude of the PSP) between cells is essential, but the physical distance between these two cells is not.
            You mentioned “symbolic reasoning” as one thing that artificial NN can’t do. What exactly do you mean? What kind of operation? Operations on variables (2x/x = ?) or what?

          • http://www.pbase.com/davidjl David Littleboy

            The length of the connections (and where they go) tells you a lot about the architecture of the system.

            The generic NN image recognition model is shown (a) pictures of starfish, (b) pictures of messy refrigerator contents, (c) pictures of people playing frisbee. But they have no concept of “physical object”. In each of these cases the holograph-like pattern abstracted by the NN turns out to be essentially unrelated to what we humans think the images being presented actually were (one NN couldn’t recognize cows on a beach since there’s no green. This is a major oops showing that this idea seriously isn’t ready for prime time). Mental rotation is something psychologists have studied in humans, but it seems that NN models don’t have the concept of a “connected physical object”, and so, for example, fail to recognize a flipped-over school bus. I find the idea that a NN can “recognize a cat” odd when it can’t deal with objects at all. That’s stuff kids do at a very young age.

            But, really… We know a lot about how the eye works, and it doesn’t look like an NN in the slightest. It also doesn’t look like a camera, either. Tiny area of high acuity that does most of the work; the overall system builds a gestalt of the scene from snippets acquired with quick snappy eye movements (saccades).

            So the bottom line for me is that when most people say NN, they are talking about something that has next to nothing to do with how people work. Where did you lose your keys? Over there in the dark part of the bushes. Why are you looking here? There’s no light there.

          • Misha Monahov

            When you are talking about flipped bus, do you mean this paper -arxiv.org/pdf/1811.11553.pdf ? Do you believe that true object recognition is an example of symbolic reasoning ? If image classification can be made robust to translation and rotation of objects – will it qualify as symbolic reasoning?

          • http://www.pbase.com/davidjl David Littleboy

            Yes. that’s the paper. And, no, I mean that “object recognition” requires symbolic reasoning. NN models don’t/can’t do symbolic reasoning.

            Here’s my modest proposal for the field. Stop using the term NN, and use the term MRA (MLRAOLCCE being a bit long) for multilayered regular array of locally connected computational elements, since that’s what they are. Then you could ask what this technology actually does, and how those actual abilities could be used, rather than have to keep being told that it doesn’t actually do what keeps being claimed.

            Again, everything we know about how the eye actually works, and what people do when they look at scenes, shows that MRA sorts of things don’t have anything to do with that.

  • http://www.mazepath.com/uncleal/EquivPrinFail.pdf Uncle Al

    Can one make sense of something unlike anything ever experienced or observed? Professionals in the arcane – organic synthesis, physics, EE, martial arts, carpentry, metal working, fine arts – usually hugely struggle to make things work. Then, suddenly one day, they grok it. Things work.

    Whatever that transition constitutes, it is not some programmed agglomeration that finally gets out the bugs. It is transcendence incarnate. When an AI awakens…it will not be a subtle thing. As in all the cases above, it will arrive hungry.

    To understand is to create, not merely describe. Compare the Age of Enlightenment versus Socialism. The US was a miracle until 1965, a pleasantry until 2000, and is now a shambles. Socialism is the end of risk at the cost of pluralizing laughter.

    • TLongmire

      The answer to everything is 42 only because the 4&2 surround the unobserved “3”.

      • TLongmire

        Therefore the 3 are observing all as I see it.

        • Michael Cleveland

          Ah, pink powders, bug juice, AND golden mushrooms. 1-2-3. Got it.

          • TLongmire

            Actually golden mushrooms act as viscousiter of consciousness and the colorful fractals perceived are retracement from the ideal edge noticing faze shift.

          • Michael Cleveland

            Given that it was a made-up term for semantic effect, and that I’ve never actually heard of “golden” mushrooms, I can’t argue that one bit.

          • TLongmire

            Technology will exist in the future that allows full spectrum absorption of emanated energy. Every single aspect of a phenomenon will be understood on the edge of a sphere with that technology. That ability to know everything at a distance exists NOW through conscious experience, either because this is a basic principle of our universe or because aliens/extradimensional beings are utilizing the technology. The mind is emergent and certain mushrooms prove it.

          • TLongmire

            The mushrooms work either by inhibiting the neural function of the stomach “drawing” a part of the emerging field of consciousness or it increases entropy within the brain not allowing the normalized field to be manifested.

          • Michael Cleveland

            Word salad again. Words strung together, New Age jargon, sounds very sophisticated to the writer; absolutely meaningless as communication; in other words, gibberish, pretend science.

          • TLongmire

            If you care about science then read about the “holographic principle” and see that I am describing it conceptually because I’ve seen it.

          • TLongmire

            “The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information “inscribed” on the surface of its boundary.” Wikipedia

          • Michael Cleveland

            The holographic principle is a metaphor used to describe the correspondence between entropy and the surface area of the space that contains that entropy. It does not mean that the Universe is a hologram. Better information here: https://www.quora.com/What-is-the-holographic-principle-Does-it-mean-that-our-universe-is-a-hologram

          • TLongmire

            Black holes are a singular condensed, unobserved point of matter and scientists needed a way to understand them and discovered the holographic principle correlated to reality. A black hole’s event horizon is clearly spherical, not flat or any other shape, because all the information of a black hole is given by the area of its horizon, divided by 4, same as a sphere. The holographic principle is peculiar to anyone who grasps it but the underlying phenomenon is happening everywhere all at once. Ultimately the universe is a sphere where everything can be known at its surface and that scales down fractally to the Planck scale. Quantum entanglement can be better understood through a spherical model explaining the Borsuk-Ulam theorem.

          • http://arturotozzi.webnode.it/ Arturo Tozzi cns

            Dear TLongmire, of course, you read my paper about the Borsuk-Ulam theorem and black holes, the one that caused me to have a discussion with the Nobel laureate ’t Hooft. However, a warning: the Bekenstein-Hawking entropy, and consequently the holographic principle, hold just for quantum dynamics, and not for the macroscopic world and brain activity… a macroscopic surface does not encompass the entropy endowed in its volume…

          • TLongmire

            I read your other comments, not the paper. Consciousness is not confined to three dimensions and neither are my ideas. I saw it; it’s the emergence field.

          • http://arturotozzi.webnode.it/ Arturo Tozzi cns
          • TLongmire

            Your papers are behind a paywall and the only thing on your website that was obvious was the overlapping i’s. I gave my account of perceiving higher dimensions 10 years ago on whether we are in the matrix. Surely a waste of your time to read, but I know you aren’t wasting your time carrying out research on this subject.

          • http://arturotozzi.webnode.it/ Arturo Tozzi cns

            Here is the full text of the paper that might interest you: https://arturotozzi.webnode.it/products/the-multidimensional-brain/
            If you want, send me an email.

          • TLongmire

            Art(art)u(you)roT(rot)ozz(on/shifted in)i(I)c(see)n(in)s(scalarly)
            To understand you shift your mind correct?

          • http://arturotozzi.webnode.it/ Arturo Tozzi cns

            Dear Tlongmire, too many mushrooms…

          • TLongmire

            The entropic mind sees the horizons.

        • http://www.mazepath.com/uncleal/EquivPrinFail.pdf Uncle Al

          I could carve a better mentality out of Velveeta.

          • TLongmire

            Probably but that would involve you lifting a hand and they know that won’t happen.

  • FSE

    Why are you assuming that all properties of the brain can be described as interactions between smaller units? It is possible that some properties cannot be reduced further.

  • Michael Cleveland

    The qualities of an iron rivet do not predict the Eiffel Tower. Does that make the tower emergent?

    • FSE

      If you know the properties of every rivet and beam in the Eiffel Tower, you can predict all its properties that engineers find interesting, like its weight, deformation at a given wind speed, etc.

      People assume that if you knew the properties of every neuron, synapse, etc in the brain, then you should be able to predict all the properties of the brain that neuroscientists find interesting. But that assumption is not necessarily true.

      • Michael Cleveland

        But you see, that’s after the fact. There is nothing in a rivet that predicts the structure of the tower, or even its potential existence. There is nothing in a single atom in a single cell in the human body that predicts Macbeth.

        • FSE

          Nobody disputes that.

          The question is whether a complete understanding of every cell in the brain is sufficient to fully understand the brain. That’s what the debate over “emergent properties” boils down to, and right now there simply isn’t enough data to settle this issue.

      • Beth Clarkson

        Weight, yes. But deformation at a given wind speed is also dependent on the relationship of the rivets and beams – i.e. the structure of the edifice, which is emergent rather than something that can be deduced from knowledge of the individual elements. Put those same beams and rivets together in a different structure and it can have dramatically different properties from the Eiffel Tower.

  • OWilson

    “Emergence refers to the idea that a system can exhibit behavior or properties that none of its individual parts possess”

    In that sense a human being is an Emergent System.

    A conglomeration of symbiotic cells, each differentiated to perform a specific function, producing a life form, that is capable of reaching the Moon!

    • PaulTopping

      I’ve seen that definition of Emergence but it trivializes its meaning. One can’t predict from one of its bolts that a car can transport people. In general, components do not share the important properties of the whole of which they are a part. A single neuron is not likely to be conscious.

      Emergence is only interesting when, knowing the properties of its components, one can’t predict the behavior of the whole. Sometimes the whole exhibits structure that can’t be deduced from knowing its components and how they all fit together.

      • OWilson

        Exactly!

        One cannot derive a moon landing from the examination of a skin cell, or a brain cell!

  • 7eggert

    There is no either-or, there is both: You can’t find Einstein thinking about relativity by looking at neurons (at least not without knowing about the concept yourself), but you can find people being speech-impaired by having a stroke. By looking at the former you can’t disprove the latter and vice versa.

    Also I’m not sure whether there is a line between strong and weak emergence except by the amount of insight we have. If you have a bunch of robots randomly moving around, picking up objects when they are encountered and releasing them when encountering a similar-colored object, we may or may not deduce that they will tend to sort the objects by color (a toy simulation of this idea is sketched below).
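
    A minimal toy simulation of that robot scenario (the exact rules and numbers here are illustrative guesses, nobody’s actual model) shows the kind of sorting tendency that can emerge from purely local rules:

    ```python
    import random

    # Toy version of the robot scenario above (rules and numbers are guesses):
    # robots wander a ring of cells, pick up any object they step on, and put
    # it down only on an empty cell next to an object of the same color.
    # No individual robot has any notion of "sorting".

    random.seed(1)
    SIZE, N_OBJECTS, N_ROBOTS, STEPS = 60, 20, 5, 20000

    cells = [None] * SIZE  # each cell holds at most one object
    for k, i in enumerate(random.sample(range(SIZE), N_OBJECTS)):
        cells[i] = 'red' if k < N_OBJECTS // 2 else 'blue'

    robots = [{'pos': random.randrange(SIZE), 'carrying': None}
              for _ in range(N_ROBOTS)]

    def same_color_neighbor(pos, color):
        return any(cells[(pos + d) % SIZE] == color for d in (-1, 1))

    def same_color_adjacencies():
        """Count adjacent cell pairs holding two objects of the same color."""
        return sum(1 for i in range(SIZE)
                   if cells[i] is not None and cells[i] == cells[(i + 1) % SIZE])

    print('same-color adjacencies before:', same_color_adjacencies())

    for _ in range(STEPS):
        for r in robots:
            r['pos'] = (r['pos'] + random.choice((-1, 1))) % SIZE
            here = cells[r['pos']]
            if r['carrying'] is None and here is not None:
                r['carrying'], cells[r['pos']] = here, None           # pick up
            elif (r['carrying'] is not None and here is None
                  and same_color_neighbor(r['pos'], r['carrying'])):
                cells[r['pos']], r['carrying'] = r['carrying'], None  # put down

    print('same-color adjacencies after:', same_color_adjacencies())
    # The count typically ends well above the scattered start: color clusters
    # form even though no individual rule mentions sorting.
    ```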

  • David Burton

    I would not use the terms weak and strong, but rather identity maintaining and developmental/evolutionary. An established system exists (identity maintaining) through a nested series of interactions where each more complex layer is emergent (weakly), but there are also top down effects – think top predator in an established ecosystem. In developmental emergence you start with a system of simple things and more complex things evolve. I don’t think human beings could be predicted from the original single celled organisms. There are no top down effects here.

  • practiCalfMRI

    Just curious, did they apply their distinction to the emergence of Newtonian mechanics of cannonballs from the quantum mechanics of subatomic particles in a collider, or the emergence of biology from chemistry? Both of these look strongly emergent to me. (Terry Deacon, among others, has a lot of interesting stuff to say about these sorts of emergent effects. Nobody seems to have the answer, however, which is probably why we can have lots of debates over nomenclature and whatnot.)

  • FSE

    > Apart from the units and their interactions, what else is there – unless we invoke dualism?

    You don’t need to invoke dualism to be a non-reductionist, i.e. to believe that there is more to a system than units and their interactions.

    Imagine you are studying a foreign language. If you exclusively study it at the most reductionist level possible, letters and phonemes, then you do not really understand the language. There are some properties that can be only defined and studied at higher levels, and in some cases are *completely independent* of the rules that apply to its subunits.

    • Len Yabloko

      If by “higher levels” you mean levels of abstraction then you are invoking dualism by claiming existence of these levels in mind but not in Nature. Are higher levels as real as lower levels?

  • William Herrera

    By many definitions of strong emergence, biochemistry is strongly emergent from physics. If so, why should emergence be a problem for certain other sciences just because they involve aspects of consciousness? What might those who dislike emergence in neuroscience really be trying to avoid?

    • Len Yabloko

      Emergence is only a “problem” in the sense of the lack of any other explanation. Science can postulate something as a principle, but only until it is understood, like Newton’s gravitational force – it was not a problem in mechanics or even in physics in general. It was a problem only for those who insisted on explaining the mysterious action at a distance. But even for the others, not curious enough, it would sooner or later become a problem in a very specific way, or rather in many seemingly unrelated specific ways. So magic in the end is not a practical approach.


  • Bridging the gap

    Of course, ‘emergence’ can be no justification for laziness. But emergent traits are very common in biology and information technology. Their emergence is a result of blind natural selection on certain traits. It can be reconstructed afterwards, but it is difficult to predict, as the selective circumstances may have disappeared or are unknown.

  • MDelagos

    Putting the whole “emergence” debate aside, I have a problem with the basic assumption of the Turkheimer paper in that it wants to peg the Bayesian Brain model with the label of being a “strong emergence” model. I just don’t see it. Yes, the Bayesian Brain model is an application of the more generalized Free Energy Principle, used in the context of the brain. But to say it is strongly emergent because it relies on the FEP is like saying an aneurysm is strongly emergent because it relies on broader principles of fluid dynamics and structural integrity.

    I have seen multiple talks and read a few articles by and about the founder of the FEP, Karl Friston, and never have I heard him describe active inference, the functional output of his hypothesis, as being somehow “emergent”. Friston uses Dynamic Causal Modeling to describe the functional connectivity of the brain, but I don’t see where he implies at any point that some novel property then emerges that can’t be described in terms of the subunits and their interactions. It is very much a computational model based on the non-linear interactions of parts. Nothing new pops up out of nowhere in his theory, as far as I can see.

    I think Turkheimer et al. are just looking for fodder for their own philosophical commitments. From their abstract: “Drawing on the epistemological literature, we observe that due to their loose mechanistic links with the underlying biology, models based on strong forms of emergence are at risk of metaphysical implausibility.”
    So they have already decided that any theory that is not mechanistic cannot be metaphysically plausible? I think this may be a circular argument, aside from the fact that the Bayesian brain hypothesis and the FEP seem to be mischaracterized. They are not, I don’t think, examples of “strong emergence”.

    Back to emergence: not that anyone cares what I think, but I used to have this argument with my wife, who would mockingly say that emergence is just a placeholder term to mean we don’t know how it works yet. Over the years, I have slowly come around to her point of view. I’m not sure calling a property “emergent” or not really matters or has any practical implications within empirical science.

    • PaulTopping

      I agree with your wife on emergence, though perhaps I wouldn’t mock. Until someone shows me how something can be emergent such that a non-emergent explanation is not possible by definition or by proof, I’m skeptical of its existence. I suppose we can label a system as “emergent” if it produces surprising behavior given our knowledge of its parts but it is nothing more than that. In particular, it is not a license to bring in supernatural forces to explain it.

  • PaulTopping

    Those worried about whether we humans have free will, and who worry that physics says we don’t, might want to read Sean Carroll’s “Free Will Is as Real as Baseball” (http://www.preposterousuniverse.com/blog/2011/07/13/free-will-is-as-real-as-baseball/). As he explains, our everyday free will operates at a different level than determinism. It all stems from our inability to predict human behavior, both in principle and in practice. Every event and, therefore, every choice we make is unique, making “could I have chosen otherwise” a moot point. Even if the universe is determined, we have no choice but to play our role in it.
