Category: The Singularity

The Geek Rapture and Other Musings of William Gibson

By Malcolm MacIver | October 17, 2011 1:02 am

Earlier today I saw a conversation with William Gibson, the inaugural event of this year’s Chicago Humanities Festival. It took place on the set of an ongoing play on Northwestern University’s campus, mostly cleared off for the event save for two pay phones. This reminder of our technological past joined forces with persistent microphone problems to provide an odd dys-technological backdrop to a conversation about the way our lives are changing under the tremendous force of technological change.

Some of Gibson’s most fascinating comments were about how our era will be thought of by people in the far future. If the Victorians are known for their denial of the reality of sex, Gibson said, we will be known for our odd fixation with distinguishing the real from the virtual. This comment resonated with me on many levels. Just a couple of weeks before, I had lunch with Craig Mundie, the head of Microsoft Research, prior to a talk he gave at Northwestern. He told us about some new directions they are taking one of their hottest products, the Kinect, a camera for the Xbox gaming system that can see things in 3D. One of their new endeavors with this camera is to let you create 3D avatars that move and talk as you do, in real time, so you can have very realistic virtual meet-ups. This is now available on the Xbox as Avatar Kinect. The second direction, called Kinect Fusion, is the real-time generation of 3D models of the world around you as you sweep the Kinect around by hand. With such a model of your surroundings, you can start to meld real and virtual in some very fun ways.

In one of his demos, Mundie waved a Kinect around a clay vase on a nearby table. We instantly got an accurate 3D model up on the screen – exciting and impressive from a $150 gizmo. I’ve had to create 3D models of things in my own research, and that involved hardware about 100 times more expensive. Even more impressive, Mundie next had the projected image of the vase start to spin, then stuck his hands out in front of the Kinect and used their movements to sculpt it, potter-like. It was wild. All that was needed to complete the trip was a quick 3D print of the result. Further demos showed other ways in which the line between reality and virtuality is being blurred, and it all brought me back to the confluence of real and virtual worlds so well envisioned by the show I advised during its brief life, Caprica.
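For the technically curious, here is a minimal sketch of the first step in Kinect Fusion-style reconstruction, assuming a simple pinhole camera model. The intrinsic values and function below are illustrative inventions of mine, not Microsoft’s actual API.

```python
# A minimal sketch of the first step in Kinect Fusion-style reconstruction:
# back-projecting a depth frame into a 3D point cloud with a pinhole camera
# model. The intrinsics below are illustrative values, not the Kinect's
# actual calibration, and this is not Microsoft's API.
import numpy as np

FX, FY = 525.0, 525.0   # assumed focal lengths, in pixels
CX, CY = 319.5, 239.5   # assumed principal point (roughly the image center)

def depth_to_point_cloud(depth):
    """Convert an H x W depth image (in meters) to an N x 3 array of points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX   # pinhole model: X = (u - cx) * Z / fx
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Aligning each new frame's cloud to the previous ones (e.g. with ICP) and
# fusing them into a voxel grid is what turns a hand-swept camera into a
# single 3D model you can spin, sculpt, and print.
```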

Gibson’s right. We haven’t yet moved beyond our need to sort out what belongs to which world, digital or physical, so we constantly consecrate the distinction with our language. Ironically, some of that very language was created by him: “cyberspace,” a word Gibson coined in his 1982 story “Burning Chrome.” During the conversation today, led by fellow faculty member and author Bill Savage, Gibson said he’s less interested in the word’s rise than in seeing it die out. He sees its use as a hallmark of our distancing ourselves from who we are as mediated by computer technology. He thinks the term is starting to go out of use, and he’s happy about that: in his view, there’s no need for a word for a space whose coordinates we are constantly moving through, as we do each time we log on to Twitter, Facebook, Google+, and the other digital extensions of self. It’s not cyberspace anymore: it’s our space.

It seemed inevitable that a question about the Singularity would be put to Gibson in the Q&A. Sure enough, it was the final note, and Gibson dispatched it with typical incisiveness. The Singularity, he said, is the Geek Rapture. The world will not change in that way. Like our gradual entrance into cyberspace, now complete enough that marking it off with a separate term seems quaint, Gibson said we will eventually find ourselves sitting on the other side of a whole bunch of cool hardware. But he feels that our belief that it will be a sudden, quasi-religious transformation (perhaps with Cylon guns blazing?) is positively 4th-century in its thinking.

The AI Singularity is Dead; Long Live the Cybernetic Singularity

By Kyle Munkittrick | June 25, 2011 9:45 am

The nerd echo chamber is reverberating this week with furious debate over Charlie Stross’ doubts about the possibility of an artificial “human-level intelligence” explosion – also known as the Singularity. As currently defined, the Singularity is a future event in which artificial intelligence reaches human-level intelligence. At that point, the AI (call it AI n) will reflexively begin to improve itself and build AIs more intelligent than itself (AI n+1), resulting in an exponential explosion of intelligence toward near-deity levels of super-intelligent AI. After reading over the debates, I’ve come to the conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate.
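To see why that definition implies an explosion rather than mere improvement, it helps to make the arithmetic explicit. A minimal sketch, with deliberately invented numbers:

```python
# A toy model of the claimed feedback loop. The improvement factor is
# invented; the point is only that any fixed multiplier greater than 1
# compounds exponentially, which is the whole force of the argument.
human_level = 1.0
improvement_factor = 1.1   # assume each AI builds a successor 10% "smarter"

intelligence = human_level
for generation in range(50):   # AI n designs AI n+1, which designs AI n+2...
    intelligence *= improvement_factor

print(f"After 50 generations: {intelligence:.0f}x human level")   # ~117x
# The skeptics' objection, below, is precisely that "intelligence" has no
# units that make this multiplication meaningful.
```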

I’ve already made my case for why I’m not too concerned, but it’s always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross, who knows a thing or two about the Singularity and futuristic speculation. It’s the kind of thing this blog exists to cover: a science fiction author tackling the rational scientific possibility of something about which he has written. In a post entitled “Three arguments against the singularity,” Stross argues that, in short, Santa Claus doesn’t exist:

This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently”. But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.

We may eventually see mind uploading, but there’ll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick’s experience machine thought experiment for real, I’m not sure we’d be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.

I am thankful that many of the fine readers of Science Not Fiction are avowed skeptics and raise a wary eyebrow to discussions of the Singularity. Given his stature in the science fiction and speculative science community, Stross’ comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian analysis one does) in the Rapture of the Nerds have two holy grails which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights. In one corner, we have Singularitarians Michael Anissimov of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel. In the other, we have the excellent Alex Knapp of Forbes’ Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson. I’ll spare you all the back and forth (and all of Goertzel’s infuriating emoticons) and cut to the point being debated. To paraphrase and summarize, the argument is as follows:

1. Stross’ point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.

2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.

3. Skeptic rebuttal: Hanson argues A) “intelligence” is a nebulous catch-all, like “betterness,” that is ill-defined; the ambiguity of the word renders the claims of Singularitarians difficult or impossible to disprove (i.e., special pleading). Knapp argues B) computers and AI are excellent at specific types of thinking and at augmenting human thought (e.g., Kasparov’s Advanced Chess); even if one grants that AI could reach human or beyond-human level, that intelligence would be neither independent, nor self-motivated, nor sufficiently well-rounded, and, as a result, “bootstrapping” intelligence explosions would not happen as Singularitarians foresee.

In essence, the debate amounts to: “human intelligence is like this, AI is like that, never the twain shall meet. But can they parallel one another?” The premise is false, which makes the question useless. So what we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence. Read More

Transhumanism: A Secular Sandbox for Exploring the Afterlife?

By Malcolm MacIver | February 28, 2011 1:35 am

I am a scientist and academic by day, but by night I’m increasingly called upon to talk about transhumanism and the Singularity. Last year, I was science advisor to Caprica, a show that explored relationships between uploaded digital selves and real selves. Some months ago I participated in a public panel on “Mutants, Androids, and Cyborgs: The science of pop culture films” for Chicago’s NPR affiliate, WBEZ.  This week brings a panel at the Director’s Guild of America in Los Angeles, entitled “The Science of Cyborgs” on interfacing machines to living nervous systems.

The latest panel to be added to my list is a discussion of the first transhumanist opera, Tod Machover’s “Death and the Powers.” The opera is about an inventor and businessman, Simon Powers, who is approaching the end of his life. He decides to create a device (called The System) that he can upload himself into (hmm, I wonder who this might be based on?). After Act 2, the entire set, including a host of OperaBots and a musical chandelier (created at the MIT Media Lab), becomes the physical manifestation of the now-incorporeal Simon Powers, whose singing we still hear but who has disappeared from the stage. Much of the opera explores how his relationships with his wife and daughter change post-uploading. They ask whether The System is really him. They wonder whether they should follow his pleas to join him, and whether life will still be meaningful without death. The libretto, by the renowned Robert Pinsky, renders these questions in beautiful poetry. The opera opens in Chicago in April.

These experiences have been fascinating. But I can’t help wondering, what’s with all the sudden interest in transhumanism and the singularity? Read More

Does AI Need Guts to Get to the Singularity?

By Malcolm MacIver | February 2, 2011 9:28 pm

We all have a favorite capacity or organ that we fault modern-day AI for lacking, one we think machines will need before they become truly intelligent. For some it’s consciousness; for others it’s common sense, emotion, heart, or soul. What if it came down to a gut? What if we need to make our AI capable of getting hungry, and of slaking that hunger with food, before the next real breakthrough? There’s some new information on the role of gut microbes in brain development that’s worth some mental mastication in this regard (PNAS via PhysOrg).

Read More

Why I'm Not Afraid of the Singularity

By Kyle Munkittrick | January 20, 2011 2:27 pm

the screens, THE SCREENS THEY BECKON TO ME

I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.

Michael Anissimov tries to restoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity”:

Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.

….

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, and in fact every argument about the danger of the Singularity, necessarily presumes one single fact: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, it will not. Read More

The Undesigned Brain is Hard to Copy

By Kyle Munkittrick | January 17, 2011 10:47 am


UPDATE: Hanson has responded and Lee has rebutted. My reaction after the jump.

The Singularity seems to be getting less and less near. One of the big goals of Singularity hopefuls is to be able to put a human mind onto (into? not sure of the proper preposition here) a non-biological substrate. Most of the debates have revolved around computer analogies: the brain is hardware, the mind is software. Therefore, to run the mind on different hardware, it just has to be “ported” or “emulated” the way a computer program might be. Timothy B. Lee (not the internet-inventing one) counters Robin Hanson’s claim that we will be able to upload a human mind onto a computer within the next couple of decades by dissecting the computer=mind analogy:

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different than an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.

In short: we know how software is written, and we can see the code and rules that govern the system. That is not true of the mind, so we guess at the unknowns and test those guesses with simulations. Lee’s post is very much worth the full read, so give it a perusal.
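To make Lee’s distinction concrete, here is a toy contrast of my own construction (not from his post): a designed machine whose exact rule we know, versus a natural system we can only integrate approximately. All constants and step sizes are arbitrary.

```python
# Toy contrast of emulation vs. simulation (illustrative only).
import math

# Emulation: a tiny *designed* machine. Because we wrote its rule,
# an emulator reproduces its behavior bit-for-bit, forever.
def emulate_counter(state, steps):
    for _ in range(steps):
        state = (state * 3 + 1) % 256  # the machine's exact, known rule
    return state

# Simulation: a pendulum, a *natural* system. We pick a model and a
# step size; both are judgment calls, so the answer is approximate.
def simulate_pendulum(theta, omega, dt, steps, g_over_l=9.8):
    for _ in range(steps):
        theta, omega = (theta + omega * dt,
                        omega - g_over_l * math.sin(theta) * dt)
    return theta

# Same 10 simulated seconds, two different step sizes, two slightly
# different answers -- the discretization choice leaks into the result.
print(simulate_pendulum(1.0, 0.0, dt=0.01, steps=1_000))
print(simulate_pendulum(1.0, 0.0, dt=0.001, steps=10_000))
```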

Lee got me thinking with his point that “natural systems don’t have designers.” Evolutionary processes have resulted in the brain we have today, but there was no intention or design behind those processes. Our minds are undesigned.

I find that fascinating. In the first place, because it means that simulation will be exceedingly difficult: how do you reverse-engineer something with no engineer? Second, even if a simulation is successful, it by no means guarantees that we can change the substrate of an existing mind. If the mind is an emergent property of the physical brain, then one can no more move a mind between substrates than one could move a hurricane from one weather system to another. The mind, it may turn out, is fundamentally and essentially related to the substrate in which it is embodied. Read More

We Need Gattaca to Prevent Skynet and Global Warming

By Kyle Munkittrick | November 10, 2010 6:54 pm

If only they'd kept Jimmy Carter's solar panels on there, this whole thing could have been avoided.

Independence Day has one of my favorite hero duos of all time: Will Smith and Jeff Goldblum. Brawn and brains, flyboy and nerd, working together to take out the baddies. It all comes down to one flash of insight from a drunk Goldblum after he is chastised by his father. Cliché eureka! moments like Goldblum’s realization that he can give the mothership a “cold” are great until you realize one thing: if Goldblum hadn’t been as smart as he was, the movie would have ended very differently. No one else in the film was even close to figuring out how to defeat the aliens. Will Smith was in a distant second place, and he had only discovered that they are vulnerable to face punches. The hillbilly who flew his jet fighter into the alien destruct-o-beam doesn’t count, because he needed a force-field-free spaceship for his trick to work. If Jeff Goldblum hadn’t been a super-genius, humanity would have been annihilated.

Every apocalyptic film seems to trade on the idea that there will be some lone super-genius to figure out the problem. In The Day The Earth Stood Still (both versions) Professor Barnhardt manages to convince Klaatu to give humanity a second look. Cleese’s version of the character had a particularly moving “this is our moment” speech. Though it’s eventually the love between a mother and child that triggers Klaatu’s mercy, Barnhardt is the one who opens Klaatu to the possibility. Over and over we see the lone super-genius helping to save the world.

Shouldn’t we want, oh, I don’t know, at least more than one super-genius per global catastrophe? I’d like to think so. And where might we get some more geniuses, you may ask? We make them.

Read More

Defending the World's Most Dangerous Idea

By Kyle Munkittrick | September 24, 2010 11:16 am

I wish every room of my life was lit with bomblights

I had hoped for a good response to “The Most Dangerous Idea in the World,” but I must admit I did not expect the slew of comments, responses, and the huge Reddit thread that it triggered. You critiqued my stance on religion, on economic equality, on the value of suffering and death, on the benefits of technology, and on the “you support eugenics? what!?” level.  The value of any idea is how well it stands up to public scrutiny and debate. So allow me to put up my rhetorical dukes and see if I can’t land a few haymakers on your many counterpoints.

There were five big counterpoints to transhumanism that emerged from the comments. For the sake of clarity and brevity, I have paraphrased each.

1. Transhumanism is new-age, techno-utopian, “Rapture of the Nerds” pap.

2. Transhumanism will split society between rich transhumans and poor normals.

3. Without death there will be overpopulation and insufficient resources, we’ll all get bored, and bad old people will never go away.

4. Eugenics is bad. Period.

5. What if I don’t want to be transhuman?

And now, my answers:

Read More


Can We Really Reverse-Engineer the Brain by 2030?

By Kyle Munkittrick | August 24, 2010 12:47 pm

Brainsplosion!

Engineer, inventor, and Singularity true-believer Ray Kurzweil thinks we can reverse-engineer the brain within a couple of decades. After Gizmodo misreported Kurzweil’s Singularity Summit prediction that we’d reverse-engineer the brain by 2020 (he actually predicted 2030), the blogosphere caught fire. PZ Myers’ trademark incendiary arguments kick-started the debate when he described Kurzweil as the “Deepak Chopra for the computer science cognoscenti.” Of course, Kurzweil responded, to which Myers retorted. Hardly a new topic, the Singularity has already taken some healthy blows from Jaron Lanier, John Pavlus, and John Horgan. The fundamental failure of Kurzweil’s argument is summarized by Myers:

My complaint isn’t that he has set a date by which we’ll understand the brain, but that he has provided no baseline value for his exponential growth claim, and has no way to measure how much we know now, how much we need to know, and how rapidly we will acquire that knowledge.
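Myers’ point is easy to make concrete. An exponential forecast needs two numbers: a baseline (how much we understand now) and a doubling time. Neither is measurable, and the predicted date swings by decades depending on what you assume. A toy calculation with invented values:

```python
# A toy version of the forecast, with invented values. Knowledge grows
# exponentially from some baseline fraction toward full understanding.
import math

def years_to_full_understanding(fraction_known_now, doubling_time_years):
    """Years until exponentially growing knowledge reaches 100%."""
    doublings_needed = math.log2(1.0 / fraction_known_now)
    return doublings_needed * doubling_time_years

for fraction in (0.01, 0.001, 0.0001):   # how much do we know now? nobody can say
    years = years_to_full_understanding(fraction, doubling_time_years=2.0)
    print(f"baseline {fraction:.2%} known -> ~{years:.0f} years")
# 1% -> ~13 years; 0.1% -> ~20 years; 0.01% -> ~27 years. Without a
# measured baseline, "by 2030" is an input to the model, not an output.
```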

Read More

A Funny Thing Happened On The Way To Post-Humanity

By Stephen Cass | February 9, 2009 5:25 pm

Cover of trade release of Transhuman

The future belongs to the post-human, suggests an increasing number of science-fiction writers and serious futurologists (in some cases, they are one and the same person). Post-humanity arises when people and machines merge to create sentient individuals that have capabilities (and possibly motivations) so far beyond our current scope as to represent a new stage in human evolution. Immortality and the ability to exist entirely as software within a computer network are only two of the more pedestrian possibilities that may be open to the post-human.

Read More
