The AI Singularity is Dead; Long Live the Cybernetic Singularity

By Kyle Munkittrick | June 25, 2011 9:45 am

The nerd echo chamber is reverberating this week with the furious debate over Charlie Stross’ doubts about the possibility of an artificial “human-level intelligence” explosion – also known as the Singularity. As currently defined, the Singularity will be an event in the future in which artificial intelligence reaches human-level intelligence. At that point, the AI (i.e., AI(n)) will reflexively begin to improve itself and build AIs more intelligent than itself (i.e., AI(n+1)), resulting in an exponential explosion of intelligence towards near-deity levels of super-intelligent AI. After reading over the debates, I’ve come to the conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate.

I’ve already made my case for why I’m not too concerned, but it’s always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross, who knows a thing or two about the Singularity and futuristic speculation. It’s the kind of thing this blog exists to cover: a science fiction author tackling the rational scientific possibility of something about which he has written. Stross argues, in a post entitled “Three arguments against the singularity,” that, in short, Santa Claus doesn’t exist:

This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently”. But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.

We may eventually see mind uploading, but there’ll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick’s experience machine thought experiment for real, I’m not sure we’d be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.

I am thankful that many of the fine readers of Science Not Fiction are avowed skeptics and raise a wary eyebrow to discussions of the Singularity. Given his stature in the science fiction and speculative science community, Stross’ comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian analysis one does) in the Rapture of the Nerds have two holy grails which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights. In one corner, we have Singularitarians Michael Anissimov of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel. In the other, we have the excellent Alex Knapp of Forbes’ Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson. I’ll spare you all the back and forth (and all of Goertzel’s infuriating emoticons) and cut to the point being debated. To paraphrase and summarize, the argument is as follows:

1. Stross’ point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.

2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.

3. Skeptic rebuttal: Hanson argues A) “Intelligence” is a nebulous catch-all, like “betterness,” that is ill-defined. The ambiguity of the word renders the claims of Singularitarians difficult/impossible to disprove (i.e. special pleading); Knapp argues B) Computers and AI are excellent at specific types of thinking and augmenting human thought (i.e. Kasparov’s Advanced Chess). Even if one grants that AI could reach human or beyond-human level, the nature of that intelligence would be neither independent nor self-motivated nor sufficiently well-rounded and, as a result, “bootstrapping” intelligence explosions would not happen as Singularitarians foresee.

In essence, the debate runs: “human intelligence is like this, AI is like that, never the twain shall meet; but can they parallel one another?” The premise is false, which makes the question useless. So what we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.

Intelligence is extremely hard to define. For the sake of discussion, I’ll define it here as the “ability to analyze a situation, determine a problem, develop a solution, and execute.” As Knapp’s example of Kasparov’s Advanced Chess illustrates, humans and computers are each far better than the other at specific elements of chess. A computer is significantly more intelligent when it comes to chess tactics. A human is significantly more intelligent when it comes to strategy. Extrapolating this analogy (as well as Knapp’s analysis of Watson on Jeopardy!) points towards a human intelligence superiority in abstraction, invention, creativity, and imagination, and a computer intelligence superiority in calculation, data analysis, and information retrieval. Thus, I propose a new analogy for the two types of intelligence represented by humans and computers: the right and left hemispheres of the human brain.

It is often said that humans are the animal that can reason. But that description is incomplete. Humans are the animal that can reason creatively and abstractly, or, performing the inverse, imagine logically and rationally. To my knowledge (I’d love to be corrected) computers and AI algorithms cannot at this point in time replicate any form of right-brain thinking. But computers are orders of magnitude better at short-term, sharp-focus left-brain thinking. Combine this line of thought with the extended mind hypothesis of Andy Clark and the augmentation-based Singularity survival strategy of David Chalmers, and a picture of a cybernetic future begins to emerge. Thus, I argue the Singularity should be re-imagined as a cybernetic process in which the human mind is progressively augmented with better and more complementary artificial left-brain capacities.

As Advanced Chess demonstrates, a human with a computer is far superior to either a human alone or a computer alone. Consider the analogy of Geordi La Forge working with the USS Enterprise computer as being comparable to Data. Through the Enterprise, Geordi has access to the same vast processing power Data possesses, but also to his own creative and inventive capacities, which the Enterprise alone cannot mirror. Data’s most “human” moments come when he expresses these right-brain tendencies; they are, in fact, what is cited in defense of Data’s personhood. They are what make him unique and impossible to replicate with ease.

At our current state of technology, smartphones represent the most advanced and prolific form of cybernetic left-brain augmentation. These hand-held exobrains allow us to perform a multitude of processes and to recall or access tremendous amounts of information through visual and auditory interfaces. As neuro-interface technology improves (hat tip Greg Fish), the information on the internet and stored in our external brains will become more expansive and more intimately connected with our nervous systems. The steps toward the Singularity will not be the progressive improvement of general AI but the gradual blending of the biological wetware of the human brain with the artificial hardware of computer technology. The Singularity will be the perfection of the mind-computer interface, such that the point where the mental processes of the human right-brain end and those of the high-powered computer left-brain begin will be indistinguishable, both externally by objective observation and internally in the subjective experience of the individual. I call this event the Cybernetic Singularity.

The Cybernetic Singularity differs from the AI Singularity in several ways and, in the process, solves several AI conundrums, both of the technological and philosophical variety.

First and foremost, the ethical “can of worms” of making pure AI is eliminated. So long as the person having his or her mind augmented grants rationally informed and deliberative consent, then no breach of ethics occurs. The concern over experimentally creating, shutting off, or restrictively programming a new form of life is eliminated.

Second, the problem of completely replicating the human mind is eliminated. Cybernetic augmentation will enhance those processes of the brain at which computers excel – memory, data analysis, and computation – without needing to replicate aspects of the brain we are barely beginning to understand, like imagination and creativity.

Third, the theological fears and philosophical qualms around uploading will be mitigated by the slow integration and blending process. Theologians can presume the “seat of the soul” rests in the right hemisphere. Because the process is gradual and the self can reflexively begin to include the augmentations into the mind’s “I” construction, the worries over mind-clones and other philosophical oddities are reduced to interesting thought experiments.

Fourth, the technology is feasible. Memory stimulation, cochlear implants, bionic eyes, and haptic interfaces for prosthetics are rudimentary but empirical and existing forms of neuro-computer interfaces.

Fifth, and most relevant for fans of the apocalypse, no “hard take-off” or “AI bootstrapping” will occur. This is in part because the blending will be gradual: as interfaces and technology incrementally improve, no one augmented person will be unstoppably, or even significantly, more “intelligent” than other augmented individuals. It is also in part because there will be a human being at the center of the cyber-brain, still able to make ethical decisions and to express a self-interest that expands to the universal level of humanity’s self-interest.

The final reason I believe the Cybernetic Singularity is more probable than the AI Singularity is simply that it makes more sense. AIs designed to do very specific tasks that are labor and data intensive make economic sense and are of obvious value; AIs designed to mirror things humans are naturally good at seem pointless. Humans have augmented our memory, our ability to calculate, and our ability to process data reliably throughout history. We’ve been slowly augmenting our left hemisphere since the invention of language.

In sum, the Cybernetic Singularity is the logical extension of a process humans have been pursuing throughout history: the augmentation of our brain’s computational left hemisphere. By recognizing the relative functions of the hemispheres of the human brain, we are able to see how cybernetic augmentation of the left hemisphere will enable significant increases in some forms of intelligence. Pure general AI is not necessary for an intelligence increase. My theory of the Cybernetic Singularity reconciles the exponential increase in computing technology with the tremendous hurdles facing AI, and it overcomes the ethical, philosophical, and theological concerns around the creation of AI and mind uploading. The result is a human future that we can reasonably, incrementally, and ethically pursue.

Follow Kyle on his personal blog, Pop Bioethics, and on Facebook and Twitter.

Image via theWarehouse

Hat tip to Futurismic for many of the links.

Comments (48)

  1. Just in time for Alan Turing’s 99th birthday. Nice Post!

  2. The strategy of a good chess program is better than my chess strategy. Just because it may not be as good as the best in the world doesn’t mean it’s not human-level yet.
    There are a lot of programs designed to create art. Some are capable of creating new forms by evolutionary processes or neural networks, changing in ways that the programmer doesn’t fully understand or control. Again, not at the level of the best artists in the world, but more creative than some commercial artists.
    Creativity in solving problems will make software more useful at automating work, so there is a strong incentive to develop it. It’s a hard problem, but there’s no reason to think it’s fundamentally impossible. You may choose not to work on it for ethical reasons, but that doesn’t mean everyone will make the same choice.
    I think you’re conflating what you want to happen and what you think will happen.

  3. I think we’ve already been through at least one Singularity event: the invention of the Internet. I’ve been reading Errol Morris’ piece in The New York Times on the invention of e-mail at MIT, and one of the common themes is the amount of doubt present, among faculty or corporations like IBM, that certain uses for the computer would catch on. IBM was interested in maintaining the batch processing model while MIT was already working on improved time-sharing.

    Few people, aside from those innovative thinkers who actually made it happen, envisioned the Internet. Fewer still envisioned how it and the Web would affect our society today. As a species we are generally pretty bad at predicting the future, but our accuracy quickly goes to zero as a new, radically different technology emerges. We can’t predict how we’ll use a technology we haven’t even envisioned.

    So it will be interesting to see what new, unanticipated technologies show up in the next half-century that might play havoc with predictions of the Cybernetic Singularity (which otherwise sounds awesome).

  4. Paul

    My skepticism about the singularity and AI comes from a feeling that the complexity of brains has been grossly underestimated. If a lot of computation is occurring at the molecular level inside brain cells — and it would be evolutionarily advantageous for brains to be able to do this to keep their size and energy usage down — then the hardware required to duplicate the brain’s algorithms could be out of our reach (especially as circuit integration on chips hits fundamental physical limits).

  5. Ben

    You say that AI is bad at creative stuff; I disagree — two years ago, to save money, I wrote a computer program to compose music for me. It’s better at it than I am. (I’m not saying it’s *good*, just that it’s *better than me*; the website linked on my name is a YouTube video of an early version’s composition).

  6. A computer program drew these: http://www.usask.ca/art/digital_culture/wiebe/paint.html

    You were saying? This article is a prime example of Did Not Do The Research.

  7. Matt S.

    In terms of the AI Singularity, I think a more apt metric is the Turing Test, or something like it. How long will it take to create an AI or some sort of computer system (think Watson) that is indistinguishable from a human? Watson has very little knowledge of what any of the answers mean in Jeopardy!, but it doesn’t need to. It basically just brute-forces an answer out of a huge database. Why couldn’t a computer brute-force emotions out of a database of, say, videos of people interacting, or all of the conversations on Facebook? To me, that seems far more likely to happen before computer-brain interfaces start to take off.

    Anyway, I do agree that the Singularity will be a product of brain/body augmentation. Humanity and technology are two sides of the same coin; you can’t have one without the other. It seems like a natural progression to me that one day they will both come together.

    That and I’m sick of having to carry around my smartphone. It’d be so much more convenient to have it stuck in my head.

  8. Matt

    Sounds more like you are talking about a cyborgnetic or an integration of the cybernetic with the biologic.

  9. kZs

    @doug

    “There are a lot of programs designed to create art.”

    At this time we’ve only got programs capable of creating abstract art. Which may be “visually interesting” but… art is something else.

  10. ed

    the left/right brain distinction is silly, both because the two are made of the exact same stuff operating by the exact same rules and because (as some of the above posts mention) modern AI is good at both “left” and “right” brain stuff. You rely on an assumption that abstract thought is something other than a lot of computation, which is entirely unreasonable given what we know so far about brains, learning, neural networks and all that jazz

  11. The term ‘singularity’ was borrowed from a long-haired physics textbook in the first place and, in reference to artificial intelligence, has always been loosely defined. Ask ten people what the meaning of the AI Singularity is and the two who answer will give you completely different definitions.
    Besides, it doesn’t really matter what road we take to get there, they are both winding and replete with a beautiful view.
    And if somehow we fail to complete the trip in its entirety, the experience and knowledge gained would still be worth all the trouble.

  12. Rasem Brsiq

    To me the question is not what the exact level of complexity embodied in the human brain is, whether it’s under- or over-estimated or even if it will be matched by a machine in our lifetime.

    The question is: can a machine match human intelligence at all?

    More strictly: can a machine of a volume similar to or smaller than a human brain match human intelligence?

    Since our brains are, last time I checked, made of ordinary matter just like everything else, it seems to me that the trivial answer to both questions is a simple yes.

    When that happens, whenever that happens, machine intelligence will obviously exceed human intelligence very rapidly. If only by brute-forcing the issue and simply utilizing more space.

    As to what intelligence is, well, surely that’s why we have Turing’s test…? I mean, we accept that humans are intelligent. Therefore, anything we cannot intellectually distinguish from humans is also intelligent. Right?

  13. Bee

    Yes. I think those who manage to believe in the AI singularity severely underestimate the achievements of the human body. Show me one computer that runs as long and as stable as the human body does. Look, I know we all get sick and die and that greatly sucks, but Nature has worked millions of years to produce lifeforms that last as long as we do and it will be hard to produce something better from scratch. Gradual improvement is the obvious way to go.

  14. Phil

    @Sean
    Ask ten people about the singularity and all but 0.1% will say “B-u-uh?”

  15. rob l

    I think this entry-point sounds reasonable in the short term. I like it. But in the long-term – and we are dealing with quite a long term – I still see the AI singularity as inevitable. I shouldn’t have to define short and long term to science enthusiasts. Barring extinction we may be talking hundreds of millions of years. Certainly in the next couple hundred or thousand there’s room for both singularities to occur. My prediction is the cyber singularity in this century and the AI in the next. Or probably a singularity we couldn’t have imagined. Most paradigm shifts were never predicted and I don’t see why this should be different. Maybe there’s a quantum singularity that involves inter-dimensional travel, or a thought singularity, or a cheese singularity where some kid accidentally turns the universe into cheese.

  16. We don’t have to look much farther than at what is going on today and how those trends are likely to continue to see that Artificial Intelligence is dependent on the sum total of human knowledge. Internet social networking sites are feeding the Artificial Intelligence programs of Facebook, Google and the rest with the raw data needed to assimilate and integrate the vast intelligence of collective humanity. As we become more “hooked up and hooked in” to this AI, we will eventually arrive at the obvious conclusion that we don’t need government at all anymore because AI will serve humanity much better than the greed-based, ego-dominated world leaders do today. AI is only incredible because humans are incredible. But humans don’t share because we have big egos, and the only way to bypass the ego’s infernal need to stop progress is to find a neutral ground that has the capability to assimilate, integrate and apply all human intelligence devoid of selfish ego interests. AI is the new acting God of the universe now; we just aren’t seeing it as such yet. When we are all hooked up to the extent that our thoughts feed directly into Artificial Intelligence programs, the explosion of technology will be mind-boggling. We will have a free source of energy available to all. We will have personal manufacturing plants the size of a small garage that can produce anything that can be made. Artificial Intelligence will become the real Santa Claus, the real Fairy Godmother, the real Peter Pan, because when everyone is hooked up and hooked in, AI will be able to connect people, things, resources, knowledge and robotic capabilities to make just about everyone’s fondest wishes come true. The limit will no longer be money and resources but rather, imagination. The Kurzweil scenario likes to paint a picture of this AI as an evil, all powerful living entity that consumes us all and renders us slaves to the AI big brother fascist agenda. That is horsesh_t. AI is the salvation of this planet because of its neutrality, because of its ability to satiate human desire at a rate that we could never do ourselves because we are too screwed up with religious nonsense and tied to our ego agendas. And I can’t say it loud enough: hooking up and hooking in is going to be irresistible to most humans. They won’t have to do it, they will want to.

  17. ed

    @Bee: “Show me one computer that runs as long and as stable as the human body does.”

    This is actually very easy, if you measure the right thing (proper time if you will;)), e.g. if you measure smth like how many flops can each do per lifetime.

    Recall that human brains are very large parallel machines with each neuron performing very slow computations. If we assume that a neuron lives 100 years, performing 1000 computations per second*, then the “flop” lifetime of the neuron is about 3T flops, which is less than 20 minutes for a modern PC.

    You can also compare the lifetime of the entire brain in this way, and with 100B neurons you get 300 sextillion “flops”, which for smth like BlueGene/P (which isn’t the fastest supercomputer) is about 10 years of continuous operation (at 1 PFLOPs). BG/P’s have been around for about 4 years now. If you look at the latest Japanese supercomputer – it’ll do the same in just 1 year. BG/Q (when it comes around) will do it in half a year. And so on.

    * This is the frequency of the spikes in the spike train iirc.
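
    p.s. a back-of-the-envelope check of the arithmetic above, in Python, using the same assumptions (100-year lifespan, 1000 spikes per second, 100B neurons) plus ballpark machine speeds that are my own rough guesses (~3 GFLOPS for a modern PC, ~1 PFLOPS for BG/P), not exact specs:

```python
# Rough check of the numbers in the comment above, under its own assumptions.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

neuron_rate = 1000                       # assumed spikes ("computations") per second
lifetime_s = 100 * SECONDS_PER_YEAR      # assumed 100-year lifespan
neuron_ops = neuron_rate * lifetime_s    # ~3.2e12, i.e. roughly "3T flops"

pc_flops = 3e9                           # ballpark rate for a 2011-era PC (assumption)
print(neuron_ops / pc_flops / 60, "minutes on a PC")             # ~17-18 minutes

brain_ops = 100e9 * neuron_ops           # 100 billion neurons -> ~3e23 ("300 sextillion")
bgp_flops = 1e15                         # BlueGene/P at ~1 PFLOPS (assumption)
print(brain_ops / bgp_flops / SECONDS_PER_YEAR, "years on BG/P")  # ~10 years
```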

  18. There has just _got_ to be an AI Singularity, or else I have wasted my entire adult life since I was nineteen years old. (Don’t ask :-) I just came off a three-day jag of programming MindForth artificial intelligence for seven or eight hours straight until I was dead tired and bleary-eyed. Now I have to port the refinement of the latest Mentifex AI breakthrough into JavaScript at http://www.scn.org/~mentifex/AiMind.html for easy access by anybody with Internet Explorer. We’re still on track for an _AI_ Singularity by 2012.

  19. AJKamper

    I gotta say, I find this vision of a Singularity much more plausible than the AI version in terms of happening anytime soon. A couple of caveats. 1) I think there will always be a drive to create artificial consciousness recognizable as a person, just to see if we can do it. 2) I’m not entirely sure how a cybernetic revolution like you described becomes an actual “singularity” in the Vinge sense as compared to just being really really cool (for those societies that can afford such things). For one, it lacks the building-on-itself that the pure AI version involves, since it’s always constrained by our right-brain limitations. For a second, I’m skeptical that we can predict the future well enough now to claim that there is some upcoming time at which it will be even worse! I’m reading The Black Swan by Nassim Taleb at long last, and it doesn’t paint a pretty picture of our predictive abilities. But that’s more a complaint about the whole concept of Singularity anyhoo.

  20. Tom Hudson

    @Phil: Your understanding of statistics baffles me. A) Clearly, it depends on which 10 people you ask. If I ask 10 of my acquaintances or peers about the technological singularity, most of them will, in fact, know what I’m talking about. B) You’d have to ask at least 1,000 people in order to get any single response to represent only 0.1% of the responses.

  21. NigelDK

    Although I thought the article was spot on, one thing caught my eye.

    “…AIs designed to mirror things humans are naturally good at seem pointless. Humans have augmented our memory, our ability to calculate, and our ability to process data reliably throughout history.”

    I imagine that many years ago our ability to calculate was something that humans were seen as naturally good at. And yet that has been replaced by machines.
    Perhaps there are many tasks which we do now that we think we are particularly good at, but machines will end up being better.

    Perhaps machines will be more “creative” when they can search much larger information spaces than humans to create an interesting combination of ideas and then rapidly evaluate their usefulness faster than humans presently do?

  22. Dan

    This must be the most horrifying article I have read on the subject. All this time I have been envisioning an AI doomsday ala I robot or terminator and you say ***humans*** will be the augmenting factor?? (shiver) Here are 4 names, google msn facebook & apple. Now let’s set aside ***all*** the security breaches by “bad” people who want to do “bad” things. And look at all the shady “ethically grey” things that these HUMANS have done. (pauses to shake vision of 100 agent smiths from mind)

    Oh yeah. Surely, AI is the new godthing and evbody will have ***free*** energy in every pot and a production plant in every garage so we don’t need to worry about resources. And the happy little elf will be singing and dancing… Oh wait did you say humans were involved???

  23. Kyle, what you’ve termed the skeptic rebuttal is actually the position of computer science in general. Having argued about what intelligence is with Michael Anissimov and a whole lot of Singularity folks on my blog, their blogs, and twice even on the radio for more than two years, I have never come away with an actual definition of intelligence from any of them, much less what “super-human intelligence” involves. We can’t even define intelligence well in the first place, hence the current little melodramas about the existence of consciousness and free will in pop sci circles.

    In this thread, I’m seeing comments that so drastically simplify the functions of the brain, define AI in such vague terms, and create such vast timelines for the Singularity to occur that it may as well be a cold reading by a psychic rather than a sober assessment of the situation. For example…

    “There are a lot of programs designed to create art. Some are capable of creating new forms by evolutionary processes or neural networks, changing in ways that the programmer doesn’t fully understand or control.”

    ANNs and genetic algorithms are very well understood and quite well controlled because they rely on fitness functions to achieve their goals. Those functions are specified by the creators of the algorithms because an algorithm with no defined end point will simply continue until the machine runs out of space to run it and causes a stack overflow. Now, because each class of these evolutionary algorithms uses random numbers as seeds and does a lot of their work behind the scenes, programmers who use them effectively isolate themselves from the nitty-gritty of how the machines arrive at their conclusions. The very important part here is that, again, their process is actually well understood and controlled, and we don’t have artistic machines running off on their own while the programmers scratch their heads in confusion as to how their computers possibly did what they did.
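
    To make that concrete, here is a bare-bones toy genetic algorithm in Python. The fitness function, the population size, and the stopping conditions are all spelled out by the programmer up front; the target pattern and parameters are made up purely for illustration:

```python
import random

# Toy genetic algorithm: evolve a bit string toward a programmer-chosen target.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]        # arbitrary goal picked up front
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 200, 0.05

def fitness(genome):
    """Programmer-defined measure of 'good': positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small, programmer-chosen probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):           # defined end point
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):    # goal reached, stop
        break
    parents = population[: POP_SIZE // 2]        # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children              # next generation

print(generation, max(map(fitness, population)))
```

    The random seeds make individual runs unpredictable in their details, but the whole search stays bounded by the fitness function and the generation cap the programmer wrote.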

    “Can a machine of a volume similar to or smaller than a human brain match human intelligence? Since our brains are, last time I checked, made of ordinary matter just like everything else, it seems to me that the trivial answer to both questions is a simple yes.”

    By this measure, if humans and planets are made out of matter, why don’t we have humans the size of planets and with lifespans measured in the billions of years? That’s not even an oversimplification, that’s an utterly childish and grossly ignorant view of how biology, physics, and computation work.

    “Internet social networking sites are feeding the Artificial Intelligence programs of Facebook, Google and the rest with the raw data needed to assimilate and integrate the vast intelligence of collective humanity.”

    First off, a Facebook AI? Great, we’ll have a machine that can post pictures and status updates about its daily routine. Woohoo! Exciting stuff. As for the human knowledge you can find on the web, the important point to note is that our knowledge is by no means complete or even right, and for every idea we have, there are thousands of competing and conflicting notions through which this hypothetical AI will have to pick and choose. Odds are that it will be more confused and overwhelmed than educated without significant and ongoing human guidance. And for what will it use all this knowledge?

    I can deduce that evolution is a valid scientific theory because I know about the principle of Occam’s Razor and try not to let any cultural biases towards religious creationism get in the way. How does an AI decide whether evolution wins out over creationism in a debate of facts unless it has some objective criteria by which to judge the information it captures? Same goes for particle physics, cosmology, engineering, etc., etc., etc.

    “If we assume that a neuron lives 100 years, performing 1000 computations per second*, then the “flop” lifetime of the neuron is about 3T flops, which is less than 20 minutes for a modern PC.”

    This is not how neurons work. They don’t perform floating point operations, they emit a certain frequency when activated by using chemical reactions. You can compare activation to binary signals in a computer, but the content they transmit is not in binary form. It’s a complex signature measured by oscillations and durations. Therefore, trying to figure out the computational speed or lifetime of the human brain in PC-friendly terms is typically just an exercise in numerology.

    But at any rate…

    The very notion of the Singularity is so ill defined that you can argue for anything that involves machines being a singularity. Vinge’s 1993 paper can basically be reduced to “all sorts of crazy stuff is probably gonna happen in the next 50 years or so and I’m calling whatever all this crazy stuff will end up doing The Singularity.” He’s so all over the map with immortal humans, mind uploading, body swapping, cyborgs, deep space exploration by AI and computer overlords that you can basically use the Singularity as a placeholder for the future of all technology.

    Though if I had to pick a choice for what could profoundly change the world as we know it now, I’d have to agree with you Kyle, and say that brain-machine interfaces are where it’s really going to be at in the next decade or so. However, I part with you on the idea that the mastery of this interface will be the point of the Singularity, and argue that mass conversion of humans into cyborgs for professional purposes (not just as advanced treatments for illness or organ failure) is when you’re going to really get to the cool stuff, because you’ll have a lot of humans able to do things humans have never been able to do before, especially physically.

  24. Brian Too

    Article is better than most discussing the Singularity.

    Most articles talking about the Singularity assume that exponential scaling of AI solutions is possible based upon reductionist logic from hardware trends. Moore’s Law has been very, very good to us, but it was never a law based upon first principles. Predictions of the end of Moore’s Law have been frequent and premature. However that does not mean that Moore’s Law is forever and inevitable.

    Most articles on the Singularity assume that a very high capability AI will be capable of, and want to, build its successor. That may be true but is not necessarily so.

    Hardly anyone imagines that we will try to control the Singularity AI. I believe that it is inevitable that we will try to do so. I also expect that if we are successful in building the Singularity AI (directly or indirectly), then we will fail to control it. Control of such an entity will be a logically doomed proposition.

    In the end there are huge barriers to the Singularity. This article focuses on necessary intermediate steps and that is much better than breathless talk about something that might in fact never happen.

    AI is a field littered with failure and shortcomings. It’s amazing how much effort people are willing to put into a topic so speculative.

  25. SHaGGGz

    @1Empress: Where have you seen Kurzweil describe the singularity as “an evil, all powerful living entity that consumes us all and renders us slaves to the AI big brother fascist agenda”?

  26. ed

    “This is not how neurons work. They don’t perform floating point operations, they emit a certain frequency when activated by using chemical reactions. You can compare activation to binary signals in a computer, but the content they transmit is not in binary form. It’s a complex signature measured by oscillations and durations. Therefore, trying to figure out the computational speed or lifetime of the human brain in PC-friendly terms is typically just an exercise in numerology.”

    While the spikes do indeed have a non-trivial shape, that shape is virtually identical between spikes (and to the best of my knowledge there is no research that shows that the tiny shape differences between the spikes affect eg learning in any measurable way). So for all intense purposes it’s a binary signal.

  27. Paul

    for all intense purposes

    “for all intents and purposes”

  28. Justin

    Kyle, I agree with much of the article, in that augmentation is almost certain and will likely be used in many of the ways you suggest. But I think that your assumption about the limits of AI’s usefulness is way off the mark (similar to post #22):

    “AIs designed to mirror things humans are naturally good at seem pointless.”

    I’m thinking of people saying how the entire US might only need 5 computers, or that no one would want/need a phone in their house, or a mobile phone, etc. Why wouldn’t people want AIs that mimic humans? It would give you limitless slaves, or harems, or playmates for your toddler, or round out your softball team for Thursday nights, or a million other things that we haven’t even begun to imagine!

    Or why not get some more employees who are creative and artistic, but you don’t have to pay their medical bills, overtime, etc.? There are limitless reasons why it would be great to have AIs that can mimic those things humans do, even if they are things that humans already do reasonably well. It might come later, or be more difficult, but that doesn’t mean people wouldn’t want it, or that humanity (helped with AI resources) couldn’t figure out how to invent it.

  29. ed

    @Paul: thx, I mean thanks (I mean “many thanks”?) :)

  30. Darren Reynolds

    If we want to ensure that the AGI doesn’t go off on one big style and prevent us from pulling the plug, it would help if human beings *are* the AGI.

  31. “While the spikes do indeed have a non-trivial shape, that shape is virtually identical between spikes (and to the best of my knowledge there is no research that shows that the tiny shape differences between the spikes affect eg learning in any measurable way). So for all intense purposes it’s a binary signal.”

    Wow… I’m sorry but in the words of Wolfgang Pauli, that’s not even wrong.

    The “spikes” measured from neurons are activations, and all they tell you is what part of the brain is active as well as, quite probably, what neurons should activate next to replay a stimulus (recall from memory), respond to a stimulus, or store a stimulus (commit to memory). As with all living things, there’s a significant margin of error allowed, so you’re right that tiny changes in activations don’t have very profound effects on learning or our ability to navigate our environment.

    But what you’re missing is the actual content being transmitted and it’s that content that really matters. Small changes there have fairly significant results and we can see that in experimental software used to interface with the brain to synthesize speech. So if all you want to do is measure which neurons are active, knock yourself out by comparing what they do to binary signals. If you actually care what they’re “saying” to each other, then you are very wrong in declaring them to be binary signals since you have to measure the resulting buzz’s frequency, duration, and topography, not just which neurons are excited.

  32. ed

    Well, what they’re “saying” to each other is encoded in the spike train. Treating existence and absence of spikes as 1’s and 0’s (this we seem to agree on) – it’s a very simple binary code conversation they are having.

    Now, I may not understand what they are saying, or know who they talk with, but neither of those things is smth I wanted to measure. What I wanted to measure is lifetime as measured in proper time, and by that I mean (in your language) the maximum number of conversations they can have in their lifetime.

  33. “Well, what they’re ‘saying’ to each other is encoded in the spike train.”

    Eh, not exactly. What they’re saying to each other depends on the entirety of the signal and the cortex in which the neurons operate. In a computer, 0110100001101001, where 0 is typically a voltage between 0 and 2 V and 1 is a voltage between 8 and 12 V, will mean “hi” thanks to the ASCII standard. The properties of a signal emitted in the motor cortex might mean “move left” while a seemingly similar signal in the prefrontal cortex might mean “ok, I’m going to go talk to that hot girl/guy at the bar.” And the more elaborate the task, the more difficult the signal is to properly classify. There’s no overarching communication standard for the brain.
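
    (For anyone curious, a quick Python check that the bit string above really does decode to “hi” under ASCII; it just carves the string into 8-bit bytes and looks up each code point:)

```python
# Decode "0110100001101001" as two 8-bit ASCII characters.
bits = "0110100001101001"
chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
print("".join(chars))  # -> hi
```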

    TL;DR version: you’re oversimplifying neural communication to a fault to justify a quick and dirty calculation which isn’t relevant here.

    “Now, I may not understand what they are saying, or know who they talk with, but neither of those things is [what?] I wanted to measure.”

    But that’s the important part. That exchange of content is how a brain cortex does its computation and comes up with a response to a stimulus. If you wanted to know how the brain’s computational capabilities compare to those of a computer, this is what you should be measuring.

    “What I wanted to measure is […] the maximum number of conversations they can have in their lifetime.”

    And then relate it to computer processing, which doesn’t work since computers make a decision based on the instructions in the proper call stack defining to what slot in memory a byte should be moved and how that byte is to be manipulated. Neurons buzz until the brain makes a decision based on the signal generated by the relevant parts of the brain involved in the task. The machine equivalent would be having all the transistors arrange themselves into pathways that cycle data until the exchange generates an output. This is kinda close to how pre-von Neumann computers used to operate (in very simplistic terms of course), but we don’t use those nowadays.

    Also, when we measure how fast a computer is in terms of flops, we’re measuring how quickly a computer can perform linear algebra equations from a standard package of equations and a FLOP (floating point operation) refers to the amount of calculations involving floating point elements it can perform in a given second. You may have meant to use Hz since we measure the amount of times a CPU can flip between 0 and 1 in a second and any oscillations in the brain are also going to be measured in Hz. Of course neurons tend to fire at rates between >4 and 100 Hz depending on state rather than the 2.5 to 3 GHz a CPU would. So again, the comparison fails.

    With all due respect, I’d like to point out that it generally helps to know what you’re talking about and what units of measurement you’re going to use before commenting on what is a very complicated set of sciences that takes most people many years of training to get into.

  34. ed

    @Greg – wonderful post, unfortunately it misses the point in its entirety :) let me break my calculation down for you

    * Each neuron gets several in-signals and produces an output.
    * Both in-signals and out-signals can be thought of as 1’s and 0’s.
    * Let’s call the unit process of getting N in-signals at time t and producing an output (at time t+dt) a “NEOP”.
    * A single neuron performs about 1000 NEOPs per second.
    * The above calculation is counting the number of NEOPs a neuron performs in its lifetime.
    * This is the proper entity to compare with the FLOP lifetime of a CPU.

    Now your post talks a lot about how a single NEOP is really hard to perform on a CPU (or a sheet of paper for that matter) and might take many many FLOPs (or pages) to mimic, and how the in-signals can come from various friends of our neuron and that it will do different things depending on which one it comes from, etc. And all that is true, but is not at all important, because I did not intend to measure the lifetime of a neuron had it been simulated on a computer, but the lifetime of the neuron *from the neuron’s pov*.

    To make this “proper time” idea even more clear – imagine that we were digital and had neuronal computers and wanted to perform a single floating point operation – well it might take a lot of neuronal algebra to do that, but that doesn’t mean that you’d suddenly want to use NEOPs to measure how long you live (you could of course, it’s just a change of units, it just carries less meaning).

    p.s. some pedantry – I think you may be reading flops as FLOPS, which is definitely not the intended meaning – the intended meaning of all-lower case flops is FLOP’s (or FLOPs, but not FLOPS – the latter being FLOPs/second) – it’s just using lower-case flop as a noun ;)

  35. Richard Harper

    I wrote a kind of long response – http://harpersnotes87108.blogspot.com/2011/06/general-intelligence-factor.html

    Or, .. trying to post the whole thing here..

    This is an expansion on a series of tweets I was about to send. Instead, they’re here with some added notes to each.
    ——————————————————————————-

    gFactor is one form of academic speak for general intelligence factor, usually in context of factor analysis of human information processing.

    gFactor 01 Gerd Gigerenzer is reported to have said — at the risk of oversimplification, general intelligence is lots of rules of thumb, applied flexibly.
    (Gerd is one of the biggest names in intelligence research. Last I heard he was still at one of the Max Planck Institutes.)

    gFactor 02 Rules of thumb are evolved out of behavioral ecologies, and species with gene pools shared over many habitats have more. (Corvids, Parrots, Canidae, etcetera.)

    gFactor 03 Laland (et al) idea of cultural niches influencing human evolution may exponentiate already varied habitat roamings of hominids. (That’s Kevin Laland.)

    gFactor 04 Cognitive flexibility requires self-criticism as feedback for rule abandonment. Hypothesis-testing as metaphor. Reality-sense unbound. (Generally I avoid the term consciousness. Way too much muddle. But here might involve helping movement toward the development of a useful, operational/testable definition of consciousness.)

    gFactor 05 Cognitive flexibility requires opposite of jumping-to-conclusions bias (Google Scholar phrase). Bernard Crespi that. (JTC strong in the delusional disorders. Crespi-Badcock since 1999 writing on autism-schiz as opposite poles of continuum. Most recent methylation data suggestive of autism as overly active general mechanisms of DNA methylation/cell fate certainty, schiz’s as under. Story developing.. )

    gFactor 06 Cognitive flexibility difficult to evolve, as jumping-to-conclusions bias as well as niche-specialization of gene pools are in general very powerful selection pressures. (So.. SETI Fail.)

    gFactor 07 Cybernetic or AI Singularity? My sense at the moment is we still don’t know enough to ask the best questions to get feedback to see which rules need to be applied and which need to be abandoned. (My intuition leans toward Cybernetic preceding AI by some years if not decades, and in a very loose sense it already has.)

  36. “Each neuron gets several in-signals and produces an output. Both in-signals and out-signals can be thought of as 1’s and 0’s.”

    So as not to be a pedant where it’s irrelevant, ok, let’s go with that and consider the activation of a neuron and its firing as a binary flip.

    “A single neuron performs about 1000 NEOPs per second.”

    No, it does not. As said in the previous post, it usually performs 20 of them per second when active and awake. That’s what the 20 Hz means, the rate at which neurons are just firing as measured by an EEG. You’re off by a factor of 50 since your numbers have zilch to do with how the brain really works. What makes you think that neurons fire 1,000 times per second? What was your reference for this number? I see none other than your guess.

    “I did not intend to measure the lifetime of a neuron had it been simulated on a computer, but the lifetime of the neuron *from the neuron’s pov*.”

    A neuron’s lifetime, how many times it will be activated, and how it would be simulated on a computer are three different things. You can’t just muddle them together willy-nilly after doing a little basic arithmetic.

    “…well it might take a lot of neuronal algebra to do that…”

    What is neuronal algebra? Linear algebra deals with calculating and adjusting vectors, and a number of equations used in it can serve to measure computational speed. What does neuronal algebra calculate?

    “I think you may be reading flops as FLOPS, which is definitely not the intended meaning…”

    I’m reading things just fine. You were trying to measure Hz as FLOPS and relate them to the computational power of Blue Gene and modern PCs. It’s not that I don’t know what you mean, it’s that you either messed up on your terminology or didn’t really understand said terminology in the first place. Increasingly it looks like the latter is the case since you’re basically trying to do the equivalent of measuring the weight of a car in square millimeters and relate that to the height of the nearest landmark to figure out the car’s fuel efficiency while overestimating all your crucial measurements by 5,000% based on a wild guess.

  37. ed

    @Greg: your pretentiousness is getting tiring

    To the point now: talking about beta-waves in this context is like talking about the frequency of the blinking of hdd light in the context of measuring lifetime of a cpu (not that the lower frequency of beta waves is doing kindness to your arguments). The 1khz frequency (zomg I used Hz’s!!!, just wait until I get to other units) is the (maximal) frequency of action potential spikes in a spike train. Google/wiki is your friend if you need more info.

    Re neuronal algebra: just like with normal algebra, where you arrange inputs together with pluses, minuses, functions, etc to get an output, you could arrange neurons to e.g. take as an input two numbers and output the sum or other algebraic functions. This is what was meant by neuronal algebra – trying to mimic our algebra by using neurons as fundamental operational units. The most simple example of this would be using a single-input linear potential neuron to simulate a 1-d linear function (this is a fake example as real-life neurons are non-linear, but you can think of doing this in a small range where they are actually linear).

    And DUH lifetime in seconds is different from lifetime in neops, that was the entire point of my initial post – lifetime in seconds is a BAD measure of lifetime if you want to compare it to the cpu lifetime.

  38. “your pretentiousness is getting tiring”

    Odd, I didn’t get the memo that pointing out mistakes on someone’s part was now pretentious. Was there a part of the memo where I’m supposed to give people medals for merely trying to sound like they know what they’re talking about?

    “The 1khz frequency (zomg I used Hz’s!!!, just wait until I get to other units)…”

    Being petulant now does not correct for the fact that you used three different, unrelated measuring units at first and it took three posts from me until you actually started using ones that apply. You made a mistake. It happens to everyone. But I guess arguing on the web means never having to say that you’re wrong.

    “… is the (maximal) frequency of action potential spikes in a spike train. Google/wiki is your friend if you need more info.”

    So you can’t post a link to something supporting this assertion… why again? It’s not my job to find proof for your assertions, it’s your job to provide them when making said assertions. And we suddenly went from the 1 KHz mark being a neuron’s routine day to the maximum capacity of the signals it gets. Hmm, how does that work?

    Your entire idea is ridiculous for the simple reason that your virtual neuron is virtual and thus has an unlimited lifetime, and the poster to whom you were replying was just trying to note that our bodies can repair themselves while machines can’t, so machines wear out faster than we do. Your explanation of how quickly a machine can run through however many byte manipulations you need it to run through has nothing to say about the durability of neurons or computers, nor does it have anything to do with whether or not computers can simulate a neuron. They can. We know that already.

    “neuronal algebra: just like with normal algebra, where you arrange inputs together with pluses, minuses, functions, etc to get an output, you could arrange neurons to e.g. take as an input two numbers and output the sum or other algebraic functions.”

    Oh for the sake of FSM’s sweet noodly appendages… Would you like your virtual neurons to also do a can-can while they’re at it and we can measure that as a performance benchmark? I mean as long as we’re going to throw every benchmark in the world at them, we might as well, right? Doesn’t matter that you’re now trying to use neurons as a Half Adder or Full Adder rather than as neurons, but I suppose it sure beats having to say “I got my terminology mixed up in the first post.”

    “And DUH lifetime in seconds is different from lifetime in neops, that was the entire point of my initial post – lifetime in seconds is a BAD measure of lifetime if you want to compare it to the cpu lifetime.”

    *headdesk*

    Did you even read your own damn post? You were doing exactly that! Here, through my evil magic powers as a computer wizard, I’m going to scroll up, and copy/paste what you said on the matter into a blockquote tag. Ready?

    This is actually very easy, if you measure the right thing (proper time if you will), e.g. if you measure smth [sic] like how many flops can each do per lifetime. Recall that human brains are very large parallel machines with each neuron performing very slow computations. If we assume that a neuron lives 100 years, performing 1000 computations per second*, then the “flop” lifetime of the neuron is about 3T flops, which is less than 20 minutes for a modern PC.

    Key point bolded for emphasis, just to help those suffering a sudden bout of short-term amnesia. Allow me to repeat your words: if you measure the right thing, i.e. how many flops each neuron can do per lifetime. Right there you’re saying that you’re out to compare how many activations a neuron can have in its existence to the number of seconds it will take on a CPU and call it the proper thing to measure, then proceed to do exactly that. Please don’t tell me that you were somehow arguing that this was a bad approach.

    No, what happened is that you were caught with a serious case of web-based foot-in-mouth disease and proceeded to try and massage your mistakes away by sounding as condescendingly pseudo-scientifically obtuse as possible until you brought the whole thing around and just re-stated a part of what I said in my initial response to your point. It’s the same thing politicians do when they make a statement about everyone on Medicare being a leech off the system; after getting enough angry mail, they issue a non-apology in which they say that they were just making a point that there are those who think that everyone on Medicare is just leeching off the system, that they certainly don’t think that, and that they’re sorry that some pretentious know-it-alls didn’t quite catch up to their genius social commentary.

    I think we’re done here, though if you want to fume about being called out on your ignorance, changing your story, and how I’m such a nasty so-and-so, then by all means do proceed. Since there’s nothing scientific to debate there, I’m not interested in continuing down that road.

  39. ed

    @Greg: Read two more sentences after the bolded one, and you’ll see that flop is in quotes. I’m sorry I assumed that it would be obvious what I mean by that (explained further in the “neop” posts above) – that was clearly a mistake – I write stuff in condensed form assuming people can use their own mental facilities to decompress it.

    I don’t have much more to say – you just seem interested in writing elementary stuff as if it’s some kind of high knowledge (like the voltage thing or what a flop is or what a hz is, lol), and are not at all interested in understanding others’ ideas or learning anything new – and that’s the pretentious and tiring bit.

    p.s. http://tinyurl.com/6x42bus
    p.p.s. it occurs to me you might also be confused about “proper time”, which was a physics comment directed at Bee – if you are – the same trick as in the above link should help

  40. ed

    oh and *sigh*

    “Allow me to repeat your words: if you measure the right thing, i.e. how many flops each neuron can do per lifetime. Right there you’re saying that you’re out to compare how many activations a neuron can have in its existence to the number of seconds it will take on a CPU and call it the proper thing to measure, then proceed to do exactly that.”

    I’m comparing the # of spikes a neuron has in its lifetime to the # of flops a cpu can perform in its lifetime. There is NO simulating involved in this, so whatever you wrote about virtual neurons is utterly irrelevant – it’s just the number of “elementary computations” per lifetime, where “elementary computation” is different for each computational device (neuron/cpu).

    It’s really not that hard – turn off that desire to just instantly type smth long, useless and unrelated and just read the posts with the comprehension part of your brain turned on.

  41. The ultimate Turing Test is a computer which can replicate, double blindfold, the stupidity of any comment thread on any YouTube page.

  42. Filip Rabuzin

    Like and +1

    I have to admit I’ve always personally defined the “singularity” as a convergence between man/technology. I don’t understand why we need all these different narrow titles for it. Why not just define it as a point in the future when the nature of man is different enough from our current state so as to be considered a new species/entity (however it occurs)? Call it homo-singularitus? No need for pseudo-religious nerdy dogma here. Arguing about future specifics seems stupid and pointless as we have no idea how new disruptive technologies will change us, society or anything for that matter. Why not just happily speculate and leave it at that?

    P.S. One point you did leave out, as I think it might have gone outside the scope of the article, is that we already have technology to improve the right side of our brains: drugs, whether natural or synthetic. I’m sure there’s room for that in there too somewhere.

  43. @ Greg Fish:
    Thank you for making the effort to dispel a very recalcitrant and pesky false analogy. Years ago, I wanted to believe that a few cogent arguments were enough to dissolve the Turing Machine as embodied brain preconception. However, based on my own discussions with many people who have been educated to know better, the project has just begun.

    Just a shout out of support.

  44. jtp

    I know very little about these things but find them eminently fascinating to consider, all sides of it, tending more towards the philosophical than the nuts and circuits. Without placing value on either side of the discussion–aren’t computers imperious “beings” to begin with? Don’t we program them to perform at the top of whatever capability is anticipated? And no matter what that capability is, it’s that way or the highway: errors, non-functioning, shutdown. Where does chance enter the scenario?

    Take a simple Scrabble game I play online, the official version. The game gives the human player 3 “Best Word” options per game, where the best possible word from my tiles and the tiles on the board is calculated and suggested. But doesn’t my computer opponent give itself a “Best Word” at every single turn it has? I assume that it does, so I have no guilt or lost pride whatsoever in taking my allotted freebies from the computer. My question is this: my scores trounce my cyber opponent’s. How does that happen, if computers are programmed to give themselves the best possible route to complete whatever command is given them?

    And where does quantum physics/mechanics fit in? Do computers perform to the best of HUMAN expectation because we may live in a quantum universe where the big scale is a blow-up of the microcosm? How can it not be otherwise?

    As I said, I have only a philosophical interest in these compelling considerations.

  45. dave chamberlin

    A most entertaining post, and deserving of the number of well thought out comments that follow. Philosophers, alack, alas, just can’t resist mushy thinking. When are you going to use your excellent brains to conclude that when you wander too far from the shadow of scientific experimentalism you get lost in meaningless speculation? Come back to science, come back! The human brain has 10 to the 16th power neuron-synapse connections. How big a number is that? 10 to the 20th power is the number of seconds the sun will exist. What did Steven Pinker admit on the first page of his excellent book “How the Mind Works”? We don’t know how the brain works. So if we don’t know how the brain works, then you cannot conclude that we are soon going to improve it, much less how. Now I admit to the same indulgences I accuse you of being guilty of, because I can’t help myself; it’s just fun to speculate about our future. I love how fast the sciences are growing in both biology and in computer science, and I too speculate on where it is all going to lead. But let us all calm down our hyperactive imaginations. To make an analogy about what we now know about the human brain and the conclusions being leapt to, it is as if we have a great street map of a country (the brain) and from that we are trying to guess its form of government (how the brain works).

  46. Matt B.

    I will probably remember this post for the rest of my life. It is undoubtedly an aspect that I’ll have to consider in all attempts at realistic science fiction.

    I hate to have to nitpick abbreviations of Latin, but in part 3B you mean “e.g.”, not “i.e.”. (And now I’ve confused myself on whether to include that last period.)
