Later Terminator: We’re Nowhere Near Artificial Brains

By Mark Changizi | November 16, 2011 1:43 pm

I can feel it in the air, so thick I can taste it. Can you? It’s the we’re-going-to-build-an-artificial-brain-at-any-moment feeling. It’s exuded into the atmosphere from news media plumes (“IBM Aims to Build Artificial Human Brain Within 10 Years”) and science-fiction movie fountains…and also from science research itself, including projects like Blue Brain and IBM’s SyNAPSE. For example, here’s a recent press release about the latter:

Today, IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition.

Now, I’m as romantic as the next scientist (as evidence, see my earlier post on science monk Carl Sagan), but even I carry around a jug of cold water for cases like this. Here are four flavors of chilled water to help cleanse the palate.

The Worm in the Pass

In the story of the Spartans at the Battle of Thermopylae, 300 soldiers prevent a million-man army from making its way through a narrow mountain pass. In neuroscience it is the 300 neurons of the roundworm C. elegans that stand in the way of our understanding the huge collections of neurons found in our or any mammal’s brain.

This little roundworm is the most studied multicellular organism this side of Alpha Centauri—we know how its 300 neurons are interconnected, and how they link up to the thousand or so cells of its body. And yet… Even with our God’s-eye-view of this meager creature, we’re not able to make much sense of its “brain.”
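
To make the gap vivid: the worm’s complete wiring diagram fits comfortably in a few kilobytes, and you can hold the whole thing in a standard graph library. Here’s a minimal sketch in Python (using networkx; the three-edge fragment below is hypothetical, standing in for the real connectome data):

```python
# A sketch of the point, not real data: having the complete map in hand
# is not the same as understanding the territory. The edge list below is
# a hypothetical fragment, not the actual C. elegans connectome.
import networkx as nx

# (presynaptic cell, postsynaptic cell, synapse count) -- illustrative only
edges = [("AVAL", "VA08", 3), ("AVAR", "VA08", 2), ("VA08", "muscle", 5)]

worm = nx.DiGraph()
worm.add_weighted_edges_from(edges)

# The full worm graph (~300 neurons, a few thousand connections) would fit
# in memory just as trivially -- yet enumerating it explains no behavior.
print(worm.number_of_nodes(), "cells,", worm.number_of_edges(), "connections")
```

Enumerating the nodes and edges is the trivial part; saying what the circuit does for the worm is the part that has stumped us.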

So, tell me where I’m being hasty, but shouldn’t this give us pause in leaping beyond a mere 300 neurons all the way to 300 million or 300 billion?

As they say, 300 is a tragedy; 300 billion is a statistic.

Big-Brained Dummies

About that massive Persian army: it didn’t appear to display the collective intelligence one might expect for its size.

Well, as it turns out, that’s a concern that applies to animal brains as well, which can vary in size by more than a hundred-fold—in mass, number of neurons, number of synapses, take your pick—and yet not be any smarter. Brains get their size not primarily because of the intelligence they’re carrying, but because of the size of the body they’re dragging.

I’ve termed this the “big embarrassment of neuroscience”, and the embarrassment is that we currently have no good explanation for why bigger bodies have bigger brains.
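
For a rough sense of the numbers involved: across mammals, brain mass is often reported to scale as roughly the 3/4 power of body mass (the exact exponent is debated). A back-of-the-envelope sketch under that assumption:

```python
# Back-of-the-envelope allometry, assuming the commonly cited
# brain ~ body^(3/4) scaling across mammals (the exponent is debated).
def expected_brain_ratio(body_mass_ratio, exponent=0.75):
    """Predicted brain-mass ratio for a given body-mass ratio."""
    return body_mass_ratio ** exponent

# A body 100x heavier predicts a brain ~32x heavier -- with nothing in
# the scaling law itself saying the bigger brain is any smarter.
print(round(expected_brain_ratio(100.0), 1))  # ~31.6
```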

If we can’t explain what a brain a hundred times larger does for its owner, then we should moderate our confidence in any attempt we might make at building a brain of our own.

Blurry Joints

The computer on which you’re reading this is built from digital circuits, electronic mechanisms built from gates called AND, OR, NOT and so on. These gates, in turn, are built with transistors and other parts. Computers built from digital circuits built from logic gates built from transistors. You get the idea. It is only because computers are built with “sharp joints” like these that we can make sense of them.
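
To see what sharp joints buy you, here’s a toy sketch in Python. Each level is defined purely in terms of the level below it, so each level can be understood, and verified, in isolation.

```python
# Sharp joints: each level is defined solely in terms of the one below,
# so every level can be understood and checked in isolation.

# Level 0: a stand-in for the transistor level -- a single NAND primitive.
def nand(a, b):
    return not (a and b)

# Level 1: the familiar logic gates, built only from NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return or_(and_(a, not_(b)), and_(not_(a), b))

# Level 2: arithmetic, built only from gates. No NANDs in sight at this level.
def half_adder(a, b):
    return xor(a, b), and_(a, b)  # (sum bit, carry bit)

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```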

But not all machines have nice, sharp, distinguishable levels like this, and when they don’t, the very notion of “gate” loses its meaning, and our ability to wrap our heads around the machine’s workings can quickly deteriorate.

In fact, when scientists create simulations that include digital circuits evolving on their own—and include the messy voltage dynamics of the transistors and other lowest-level components—what they get are inelegant “gremlin” circuits whose behavior is determined by incidental properties of the way transistors implement gates. The resultant circuits have blurry joints—i.e., the distinction between one level of explanation and the next is hazy—so hazy that it is not quite meaningful to say there are logic gates any longer. Even small circuits built, or evolved, in this way are nearly indecipherable.
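
For a feel of how this happens, here is a toy illustration of my own (much cruder than the simulations described above): an evolutionary search over small NAND-gate networks, selected only for computing XOR. Nothing in the fitness function rewards comprehensible structure, and the winning wirings tend to show it.

```python
# Toy circuit evolution: random nets of NAND gates are mutated and
# selected only for matching XOR's truth table. Nothing penalizes
# incomprehensibility -- winners are typically tangles, a miniature
# of the "gremlin circuit" effect.
import random

INPUTS = [(a, b) for a in (0, 1) for b in (0, 1)]
TARGET = {(a, b): a ^ b for (a, b) in INPUTS}  # XOR truth table
N_GATES = 8

def random_genome():
    # Gate i NANDs two earlier signals; signals 0 and 1 are the inputs.
    return [(random.randrange(i + 2), random.randrange(i + 2))
            for i in range(N_GATES)]

def evaluate(genome, a, b):
    signals = [a, b]
    for x, y in genome:
        signals.append(1 - (signals[x] & signals[y]))  # NAND
    return signals[-1]  # the last gate is the circuit's output

def fitness(genome):
    return sum(evaluate(genome, a, b) == TARGET[(a, b)] for a, b in INPUTS)

def mutate(genome):
    g = list(genome)
    i = random.randrange(N_GATES)
    g[i] = (random.randrange(i + 2), random.randrange(i + 2))
    return g

# Selection cares only about behavior, never about legibility.
pop = [random_genome() for _ in range(50)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]

best = max(pop, key=fitness)
print(fitness(best), "/ 4 truth-table rows correct")
print("wiring:", best)  # typically a tangle, not a textbook XOR
```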

Are brains like the logical, predictable computers sitting on our desks, with sharply delineated levels of description? At first glance they might seem to be: cortical areas, columns, microcolumns, neurons, synapses, and so on, ending with the genome.

Or are brains like those digital circuits allowed to evolve on their own, which pay no mind to whether or not the nakedest ape can comprehend the result? Might the brain’s joints be blurry, with each lower level reaching up to infect the next? If this is the case, then in putting together an artificial brain we don’t have the luxury of building at just one level and ignoring the complexity in the levels below it.

Just as evolution leads to digital circuits that aren’t comprehensible in terms of logic gates—one has to go to the transistor level to crack them—evolution probably led to neural circuits that aren’t comprehensible in terms of neurons. It may be that, to understand the neuronal machinery, we have no choice but to go below the neuron. Perhaps all the way down.

…in which case I’d recommend looking for other ways forward besides trying to build what would amount to the largest gremlin circuit in the known universe.

Instincts

It would be grand if brains could enter the world as blank slates and, over their lifetimes, learn everything they need to know.

Grand, at least, if you’re hoping to build one yourself. Why? Because then you could put together an artificial brain with the general structural properties of real brains, equip it with a general-purpose learning algorithm, and let it loose upon the world. Off it’d go, evincing the brilliance you were hoping for.

That’s convenient for the builder of an artificial brain, but not so convenient for the brain itself, artificial or otherwise. Animal brains don’t enter the world as blank slates. And they wouldn’t want to. They benefit from the “learning” accumulated over countless generations of selection among their ancestors. Real brains are instilled with instincts: not simple reflexes, but specialized learning algorithms designed to very quickly learn the right sorts of things, given that the animal is in the right sort of habitat. We’re filled with such functions, or evolved capabilities, about which we’re still mostly unaware.

To flesh them out we’ll have to understand the mind’s natural habitat, and how the mind plugs into it. I’ve called the set of all these functions or powers of the brain the “teleome” (a name that emphasizes the unabashed teleology that’s required to truly make sense of the brain, and is simultaneously designed to razz the “-ome” buzzwords like ‘genome’ and ‘connectome’).

If real brains are teeming with instincts, then artificial brains will want to be too; why saddle a brain with the demanding task of learning it all in one generation when it can be stuffed from the get-go with the wisdom of the ancients?

And now one can see the problem for the artificial brain builder. Getting the general brain properties isn’t enough. Instead, the builder is saddled with the onerous task of packing the brain with a mountain of instincts (something that will require many generations of future scientists to unpack, as they struggle to build the teleome), and somehow managing to encode all that wisdom in the fine structure of the brain’s organization.
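
In learning-theory terms, an instinct acts like a strong prior: it narrows what has to be learned so that a handful of lifetime observations suffice. A minimal sketch of the difference (my illustration, using a textbook Beta-Bernoulli learner, not anyone’s model of a real brain):

```python
# Instinct as prior: two learners estimate the same unknown environmental
# probability. The blank slate starts from a flat Beta(1, 1) prior; the
# instinct-equipped learner starts from Beta(18, 2), standing in for the
# distilled "learning" of its ancestors.
def posterior_mean(prior_a, prior_b, successes, trials):
    """Posterior mean of a Beta-Bernoulli model after `trials` observations."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

true_p = 0.9                 # the regularity the habitat actually exhibits
successes, trials = 4, 5     # one lifetime's small sample

blank_slate = posterior_mean(1, 1, successes, trials)    # ~0.71
instinctive = posterior_mean(18, 2, successes, trials)   # 0.88

print(f"blank slate: {blank_slate:.2f}, instinct-equipped: {instinctive:.2f}")
```

The math is the easy part; the builder’s hard part is knowing which mountain of priors to pack in, and how to encode them in the brain’s fine structure.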

The Good News

Maybe I’m a buzz kill. But I prefer to say that it’s important to kill the bad buzz, for it obscures all the justified buzz that’s ahead of us in neuroscience and artificial intelligence. And there’s a lot. Building artificial brains may be a part of our future—though I’m not convinced—but for the foreseeable, century-scale future, I see only fizzle.

Image: A connectivity chart from IBM’s SyNAPSE project.


Mark Changizi is an evolutionary neurobiologist and Director of Human Cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man.


  • http://new-savanna.blogspot.com/ Bill Benzon

    Hey, Mark, haven’t you heard? Any day now the web is going to wake up and become a supersmart brain that’s going to out-think us seven ways from Sunday.

    Seriously, though, you’re right. We hardly know diddley about the brain. Thinking of the brain as a digital machine hasn’t worked out very well. For myself, my intuitive sense is that the brain is running some kind of very sophisticated analogue simulation of the world. And it’s doing it in a fluid medium of varying levels of viscosity, laid down in, say, a fractal-like pattern. The slower layers capture the longer time-scale events while the faster layers capture the shorter time-scale events.

    To paraphrase a remark Martin Kay once made about machine translation, it’s easy to learn a little about the brain and it’s easy to learn a little about computer programming. People who know a little of both are susceptible to blinding revelations about how we can simulate brains. Alas . . .

  • Matthew Bailey

    I too am somewhat skeptical of the “Any minute we’ll build a brain.”

    But at the same time, I am cautiously optimistic that we will succeed (beginning with understanding those 300 neurons of C. elegans).

    One of the reasons I hold out optimism is that this is EXACTLY what I am in the process of going to school to eventually accomplish (studying Cognitive Computation and Neuroscience, with the hope that I can add some Engineering to that in grad school and work on Cognitive Computational Architectures that will bring us closer to a Constructed (Artificial) Brain).

    There are some reasons to be hopeful, such as the fact that progress toward this goal seems to be exponential rather than linear.

    But as the article points out, there are also some very good reasons to be cautious and skeptical.

  • http://www.drfaltin.org Robert Faltin

    For a single-cell organism to survive, it has to feel pain in order to avoid damage. A more complex multi-cell organism has to know anxiety in order to anticipate attack from dangers in its surroundings. Until I see an anxious, pain-riddled computer, I will not worry too much about Skynet.

  • http://www.changizi.com Mark Changizi

    Matthew,

    Great to hear. I, too, am not all sourpuss and black clouds. I think there are real paths forward for understanding the brain and building AI. Here are a couple pieces hinting at other approaches…

    - http://seedmagazine.com/content/article/humans_version_3.0/
    - http://www.forbes.com/sites/markchangizi/2011/05/12/what-should-we-unravel-next-after-the-genome-answer-the-teleome/

    -Mark

  • Kim

    I’m a computer engineering undergrad, and at this very moment I am running a genetic algorithm to select some (analog) circuit properties that I had no luck choosing myself. Evolving a logic gate seems fun; do you have references on these “blurry joints” creations?

  • Chris

    So the trick is to build a crappy circuit and plug it into wikipedia? :-P

  • Mephane

    I too think we are still far away from something comparable to the human brain. However, I also think that one day (not so soon, however) we might construct a computer that, while completely different from the human brain, also gives rise to genuine intelligence and possibly consciousness. I don’t think the human brain is the only possible way for intelligence to emerge; it could very well be one of dozens of different paths evolution could have taken, and we won’t really be able to even fathom viable alternatives as long as we don’t understand the one functioning sample we have.

  • Thomas

    Airplanes don’t fly the same way as birds do, but fly they do…

  • Brian Too

    I agree but for a different reason. Your reasons are all valid but secondary in my opinion. The AI field has been the one with the worst, the absolute worst track record in predictions of future success.

    Hubris, arrogance, and unearned ego have dominated AI from the very beginning. If you want to learn something from an AI researcher, ask them what their system does today. If you want to waste your time and learn nothing at all, ask them what their system will do in 3, 5 and 10 years from now. The answers will typically be laden with puffery and nonsense.

    The field of AI knows very little about how the brain really works, so the fashion these days is to put it down to “complexity theory”, as though that is an explanation. Build something complex enough and it will magically start working! That’s why Rube Goldberg machines dominate the landscape!

    Biological neuroscientists at least have a little humility about what they don’t know. AI researchers? They don’t even know what they don’t know.

  • http://sciencepolice2010.com strangetruther

    So true! An AI peak in book references shows up at about 1988, as seen in the first image in:

    http://sciencepolice2010.com/2011/01/18/whats-your-impact-checking-out-google’s-books-ngram-viewer/

    …but of course AI had got stuck decades before. (Of course dogma, especially wrong dogma, never helps in creative endeavour!) One problem is that people think using models, often by comparison to things like machines they know well, but paradoxically often including human organisations, with “The department of this (and that)” all over the place. Bloody boxes make up most cognitive architectures, and although the body, including the brain, does indeed often operate according to boxes (and genetic mechanisms reflect the use of modules, so I’m afraid the eyes etc. are understood by some parts of the body as being for seeing, no matter how random some biology/evolution may seem; it’s to do with efficiency of control), sometimes boxes get in the way of understanding the brain. As #2 @Bill Benzon says so tellingly: “fractal”. Yeah – essential. Try spotting that feature in SOAR, etc. Ditto for “fluid”, “simulation”, and the varying scales you mention at the end of your second paragraph. No project that doesn’t reflect that paragraph is worth considering as a potential brain. You need GWT (global workspace theory) too, but it’s not enough on its own. Perhaps the biggest problem is the way cognitive engineers keep trying to give their system instructions on what to do and how to do it. Every time you give an intelligent system an instruction you lower its intelligence (not talking about educating humans here though :-) ). Include no instructions for handling anything specific in your AI system. Only get it to do things indirectly by designing basic housekeeping operations: in other words, do everything by “epiprogramming” – specific actions happen only as epiphenomena of the basic operations.

    Very good set of comments all round, I’d say. From the main posting: “Instincts”, certainly. They need to be covered properly, and yes, pessimism and disappointment are suitable. I think two other problems are that while the inputs to and outputs from the brain can be probed by sticking in an electrode, and we know what the inputs to the sensory system are and what the muscles and motor nerves do, we don’t know the inputs or outputs for the bits in between. Another nasty problem is understanding any parallel system. I tried to debug something I’d written that involved matrix operations (yes, I’d copied it from a book :-) ), and just debugging something where lots of different strands divide up then re-converge to give a final result needs a lab approach, not the “bedroom programming/debugging” you can use when it’s just one simple thing after another. And finally, it’s more than just the parallel operations thing, since the parallelism can be not just lots of discrete, clear things but clouds of neurons operating en masse.

    You can tell from the picture the IBM thing isn’t going to work since the controlling bits in the middle that you need for GWT are missing! Oh wait – they’re there, on the right. Anyway, IBM won’t get there first. Even Apple beat them in designing an effective parallel programming system, didn’t they?

  • Michael

    Thomas says it well – Airplanes don’t fly like birds, but fly they do…
    It’s also been historically true that understanding WHY or HOW something works isn’t necessary for it to work. Inventions have been created, and solutions to problems found, without the inventor or solution-finder knowing how or why the invention or solution worked. Sometimes knowing why you CAN’T do something is the only thing holding you back from actually accomplishing it. Can it be predicted that we will succeed in creating a facsimile brain in the next 10 or even N years? No, I don’t think so, but that doesn’t mean that it won’t happen. Art and Science overlap sometimes, and I like to think that the solution to creating artificial intelligence will be found in that overlap, probably leaning closer to the Art than to the Science…

  • http://mike-tanner.co.nz Mike Tanner

    The arguments are reminiscent of the sort of things people were saying about heavier-than-air flight 120 years ago. You’ve identified some of the main technical problems, but then gone straight to “We can’t see how to solve them now, so we’ll never be able to solve them.”

    When there are obvious benefits of even partial success in such a project, it might be a little hubristic to claim it will never happen.

    I’d invoke Arthur C. Clarke’s dictum but you don’t really look elderly enough …

  • http://timtyler.org/ Tim Tyler

    We still can’t build an “artificial bird” – but we *can* fly!

  • zomg

    Okay, total disconnect, but this article made me do a YouTube search for ‘ornithopter’, and check this out: http://www.youtube.com/watch?v=d3iOWMhJDW4

    That is brilliant.

  • http://softwetware.blogspot.com Chris Hennick

    It’s also worth noting that the Wright Brothers, as bike mechanics, probably weren’t experts — even for their time — on how birds flew, and that a craft the size of a 747 would be impossible to build with flapping wings.

  • Vladimir Nedovic

    Completely agree. I am not religious by any means, but I have to say that I am often appalled at the arrogance of the human race; in this case, of the scientists: AI people (my breed), but also some neuroscientists.

    Two years ago, I listened to Christof Koch claiming that the iPhone has some level of consciousness. But f#%k, the guy is so smart, it’s hard to refute his arguments. Nevertheless, what I think applies there, as well as in the case of IBM and many comments here (e.g., in the case of the bird vs. flight argument), is that you cannot really call it ‘consciousness’ or ‘brain’ when it is defined in such a reductionist manner (in the case of the brain, for example, with the tabula rasa assumption that Mark points out).

    Reductionism and banalization are common in science, unfortunately. Of the people I’ve been around, I’d say about half really believe it (i.e., are brainwashed), whereas the other half are aware of their ‘pragmatism’: those sorts of claims create the buzz and bring the publicity and, of course, the money. Good for them, but too bad for science.

    P.S. I remembered some positive examples as well. An amazing talk by Jan Koenderink, who took the very narrowly defined problem of shape-from-shading to show that it’s an impossible task for the brain to apply an inverse function and derive depth in that manner. Or James DiCarlo from MIT, who showed a simplified model of the neuron performing much better than state-of-the-art algorithms at object recognition. But people wouldn’t listen; it’s easier when you pretend not to know.

  • http://twitter.com/johnradke jtradke

    @Mike Tanner: ‘but then gone straight to “We can’t see how to solve them now, so we’ll never be able to solve them”.’

    Where did he say we’ll never be able to solve them?

  • https://www.facebook.com/mindbound Noetic Jun

    The various forms of this argument have been discussed for years now. The entirety of his rhetoric seems to boil down to “there is still so much we don’t understand about the brain -> we won’t be able to build one in any foreseeable future”.

    Although it looks somewhat solid as an argument, I can see how it applies only to building a biologically realistic brain literally from scratch. It hardly applies at all to the various forms and approaches of WBE (whole brain emulation), which seem to be by far the more solid, stable, and practical route to getting the good ol’ brain running on a chip.


  • http://artisticlifestudios.com/ larry capra aka zenabowli

    At best, an artificial brain will be a ridiculously literal device, much like an extreme case of Asperger syndrome or something like “Rain Man.” I can’t imagine the subtle perceptions of our integrated senses ever being infused into a mechanical device.

    Like, how will they ever immerse a microchip in the complex and ever-changing formula of hormones that floods the human nervous system, hopefully with a predictable capacity for good judgment, accompanied by an open-minded sense of understanding and fair play?
