Can We Really Reverse-Engineer the Brain by 2030?

By Kyle Munkittrick | August 24, 2010 12:47 pm

Engineer, inventor, and Singularity true believer Ray Kurzweil thinks we can reverse-engineer the brain in a couple of decades. After Gizmodo misreported Kurzweil’s Singularity Summit prediction that we’d reverse-engineer the brain by 2020 (he actually predicted 2030), the blogosphere caught fire. PZ Myers’ trademark incendiary arguments kick-started the debate when he described Kurzweil as the “Deepak Chopra for the computer science cognoscenti.” Of course, Kurzweil responded, to which Myers retorted. The Singularity is hardly a new topic; it has already taken some healthy blows from Jaron Lanier, John Pavlus, and John Horgan. The fundamental failure of Kurzweil’s argument is summarized by Myers:

My complaint isn’t that he has set a date by which we’ll understand the brain, but that he has provided no baseline value for his exponential growth claim, and has no way to measure how much we know now, how much we need to know, and how rapidly we will acquire that knowledge.

Myers’ point that Kurzweil has no way to measure how much we know now is the central flaw of much Singularitarian thought. Singularitarians cannot show how the relevant technologies are changing exponentially, or explain why exponential change would enable whole brain emulation. Even if Kurzweil could give us the necessary data, exponential growth does not guarantee practical results, as I discussed in my post on progress in genomic sequencing. George Dvorsky, a Canadian futurist and colleague of mine at the Institute for Ethics and Emerging Technologies, defends whole brain emulation as a possibility, but hedges his bets on the time frame:

Kurzweil’s prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we’re still likely heading down some blind alleys.

My own feeling is that we’ll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I’m pulling this figure out of my butt as I really have no idea. It’s more a feeling than a scientifically-backed estimate.

Since we’re pulling figures out of our posteriors, I’m going to throw a guess from a futurist I really respect, Gene Roddenberry, into the ring: successful whole brain emulation will first occur in the twenty-fourth century. That’s when Data and his unique “positronic brain” were built in the Star Trek universe. Seems like a more reasonable time frame to me.

Why? Consider the state of current prosthetics and robotics. The most advanced robotic arms in the world, be they the iLimb or DEKA’s Luke Arm, are cumbersome, heavy, weak, clumsy, and delicate compared to a human arm. They lack touch receptors; texture, heat, density, and other basics of sensation are decades away. One of the only successful osseointegrated prosthetics is a set of what are essentially metal-and-rubber peg legs on a cat named Oscar. Don’t even get me started on neuro-integration. In short, we’re still struggling with replicating knees and elbows, so how the hell is the whole brain a mere few decades away?

The answer to the titular question of this post would seem to be: nope. At least Kurzweil is adhering to the Law of Futurology!

Image by digitalbob8 via Flickr


Comments (11)

  1. It seems to me PZ is being a little harsh, and Kurzweil is being super optimistic.

  2. Brian Too

    I’m sorry, but Ray Kurzweil is uncomfortably close to Deepak Chopra in terms of his messages. I don’t want to say they are the same because that wouldn’t be correct. Instead Kurzweil is falling into an old trap and should know better.

    The whole field of Artificial Intelligence has been replete with wild exaggerations, timelines that had no connection with reality, and a certain form of self-promotion and hubris that is nothing short of breathtaking. AI has been disappointing nearly everyone for a solid 50-60 years now.

    Where is the “mechanical brain” that was talked about in the 1950’s? What came of the Expert Systems? Anyone remember the Japanese 5th generation computing system? Inference engines? Rules based languages? Frame-based knowledge systems? Neural nets? Analog chip-based systems? I could go on and on but the main point is that even when these projects had successes, those successes wound up being both modest and not transformative. Of anything. Their failures are much more common and impressive.

    The truth is that computers today are constructed nothing like the biological intelligences we know of. Computers are complementary, not supplementary to us. The fundamental understanding of how the brain works is woefully inadequate to even produce a conceptual model of it. Without that we aren’t even at stage 1 of being able to produce something that’s a reasonable facsimile. Nor are there any signs of us producing an entirely novel and alien artificial intelligence, not modelled on biological precedents.

    Kurzweil’s talk of the Singularity is interesting speculation, but little more than hype when you get right down to it. His timeline needs to be summarily dismissed. It’s no better than those wild-eyed loons with sandwich boards proclaiming “The world will end next Thursday at 4 o’clock!” Why bother asking if they mean a.m. or p.m., when the answer means nothing?

  3. There is one big problem with Kurzweil’s thinking: we have no idea how neurons process information. Before we can say anything about how brains work, we need to first learn how neurons work. Once that is done, we can go on to learn how neural networks work, and only then can we start to talk about how brains do their job.

    Even then it still doesn’t mean the Singularity is on the horizon, because we’ll still be in the dark about sentience, consciousness, mind, and personality. Near as I can tell we are dynamic systems, not something you can make a recording of for transferring to another medium. In short, if we ever learned how to duplicate you, that’s all it would be, a duplicate, a recording of you and not you.

    Kurzweil is not only premature, he’s wrong regarding what he’s talking about to begin with.

  4. It’s odd that anyone assigns anything other than mild bemusement to Kurzweil’s pronouncements. Like everyone else, he has absolutely no idea when an artificial brain might become conscious. That’s because as yet no-one has the faintest idea what consciousness is. If we could work that out, we’d have something to aim for. Otherwise it’s like claiming that in 20 years we’ll be able to travel to an as yet unknown planet in another universe that we haven’t discovered yet.

  5. tim333

    re: “no baseline value for his exponential growth claim”

    It’s not true:

    ‘It took less than two years for the Blue Brain supercomputer to accurately simulate a neocortical column, which is a tiny slice of brain containing approximately 10,000 neurons, with about 30 million synaptic connections between them. “The column has been built and it runs,” Markram says. “Now we just have to scale it up.”’

    If they double the number of neurons simulated every now and again then voila…

    The human brain has approx 100 billion neurons so 24 doublings and you’re there.

    “Once the team is able to model a complete rat brain—that should happen in the next two years—Markram will download the simulation into a robotic rat, so that the brain has a body”

    I don’t know if they’ll achieve the robo-rat, but it’ll be interesting to see. The article is from 2008, and I don’t think they’ve done it yet.

    There’s an interesting documentary on this stuff.

    Alan Kellogg, you might want to check it out; it suggests we may indeed have some idea of how neurons process information.
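tim333’s scaling estimate can be checked with a couple of lines of arithmetic. This is a back-of-the-envelope sketch using the figures quoted in the comment (10,000 neurons per simulated column, roughly 100 billion per human brain); the 18-month doubling period is an added Moore’s-law-style assumption, not anything Markram has promised:

```python
import math

# Blue Brain's simulated neocortical column: ~10,000 neurons.
column_neurons = 10_000
# Rough figure for a human brain, as quoted in the comment.
brain_neurons = 100_000_000_000

# Doublings needed to scale from one column to a whole brain.
doublings = math.log2(brain_neurons / column_neurons)
print(f"{doublings:.1f} doublings")  # 23.3 doublings, i.e. 24 rounded up

# At one doubling every 18 months, that is roughly 35 years of scaling.
print(f"~{doublings * 1.5:.0f} years at one doubling per 18 months")
```

Of course, doubling the neuron count says nothing about whether the scaled-up simulation behaves like a brain, which is exactly the objection the later comments raise.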

  6. I think that the reason people react with anything other than mild bemusement is simply that they’re responding to a different subject matter than the one Kurzweil is talking about. He’s simply saying that we can do a synapse-level simulation of the brain within 20 years or so, and that we’ll learn a bunch of stuff by doing so. When posed in this way, it is quite obvious what the baseline for the exponential growth is. As a prediction it’s very pedestrian, not the sort of thing you’d expect from a wild-eyed hypester.

    It is not an attempt to build an AI: it is a science experiment that should, amongst other things, suggest ways to build better AIs. And if it turns out that a synapse-level model of the brain doesn’t exhibit behaviors that humans perceive as consciousness, then that would be a very interesting result. But it wouldn’t mean that we can’t build the model.

  7. Wintermute

    Until they create a complete replica, body and all, of a rat that can complete mazes quickly, locate and enjoy cheese, exhibit the complex social relationships typical of rats, etc., any amount of canned demos of pretty synapses firing in a virtual neocortical column is just a very expensive and CPU intensive fireworks show. We have no way of knowing whether it’s just a lot of noise or actually replicating a brain until we can see it actually exhibiting some intelligent rat/human behavior.

  8. We could already have AIs that are as good as humans, but they are just too slow for us to realize it. What if we had an AI that was perfect, except that it ran on a processor 1,000 times too slow? It would take about 80 years to learn what a normal baby learns in a month. So of course we would not think the AI was working, even though it was. But then, in 10–15 years, when we get graphene, memristors, optical computing, and all the rest cooking and are at least 1,000 times faster, AI magically appears.

    If you think about it, even if we are only off by a factor of 10 we’d probably think the AI was dumb: it would take 20 years for it to reach the intelligence of a 2-year-old, which is when we’d probably start noticing.

    Once we get past the necessary processing power (10^16) people can start complaining about how we don’t know what we are doing. Google’s network is probably that fast now or faster, although obviously most of its resources are devoted to other things.

    Build a machine with enough connections to have a model of the world in its head, and it will have the capacity to perceive the world. Then we’ll think it’s intelligent.
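The slowdown arithmetic in comment 8 is easy to make concrete. In this sketch, the one-month and 2-year-old milestones are the commenter’s illustrative numbers, not empirical data:

```python
# If an AI runs N times slower than real time, a developmental
# milestone that takes a human T years takes the AI N * T years.
def apparent_delay_years(slowdown: float, human_years: float) -> float:
    return slowdown * human_years

# A 1000x-slow AI matching one month of infant learning:
print(apparent_delay_years(1000, 1 / 12))  # ~83 years ("about 80")

# Even at only 10x slow, matching a 2-year-old takes two decades:
print(apparent_delay_years(10, 2))  # 20 years
```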

  9. This helped me a lot, thanks!

  10. Thanks, just what I wanted!

  11. Nice article; I laughed at “we’re still struggling with replicating knees and elbows, so how the hell is the whole brain a mere few decades away?” We’ve been studying the brain for years and have only scratched the surface. I don’t think we’ll completely understand the human brain for at least another 100 years.

