The Geek Rapture and Other Musings of William Gibson

By Malcolm MacIver | October 17, 2011 1:02 am

Earlier today I saw a conversation with William Gibson, the inaugural event of this year’s Chicago Humanities Festival. It took place on the set of an ongoing play on Northwestern University’s campus, mostly cleared off for the event save for two pay phones. This reminder of our technological past joined forces with persistent microphone problems to provide an odd dys-technological backdrop to a conversation about the way our lives are changing under the tremendous force of technological change.

Some of Gibson’s most fascinating comments were about how our era will be thought of by people in the far future. If the Victorians are known for their denial of the reality of sex, Gibson said, we will be known for our odd fixation on distinguishing the real from the virtual. This comment resonated with me on many levels.

Just a couple of weeks before, I had lunch with Craig Mundie, the head of Microsoft Research, prior to a talk he gave at Northwestern. He told us about some new directions they are taking one of their hottest products, the Kinect: a camera for the Xbox gaming system that can see things in 3D. One new endeavor with the camera, now available on the Xbox as Avatar Kinect, lets you create 3D avatars that move and talk as you do, in real time, so you can have very realistic virtual meet-ups. The second direction, called Kinect Fusion, is the real-time generation of 3D models of the world around you as you sweep the Kinect around by hand. With such a model, you can start to meld real and virtual in some very fun ways. In one of his demos, Mundie waved a Kinect around a clay vase on a nearby table. We instantly got an accurate 3D model up on the screen – exciting and impressive from a $150 gizmo. I’ve had to create 3D models of objects in my own research, and that involved hardware about 100 times more expensive. Even more impressive, Mundie next set the projected model of the vase spinning, then stuck his hands out in front of the Kinect and used movements of his hands to sculpt it, potter-like. It was wild. All that was needed to complete the trip was a quick 3D print of the result. Further demos showed other ways in which the line between reality and virtuality is being blurred, and it all brought me back to the confluence of real and virtual worlds so well envisioned by the show I advised during its brief life, Caprica.
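
For the curious, the core trick behind a system like Kinect Fusion is to back-project each depth frame into a 3D point cloud, then fuse clouds captured from different camera poses into one model. Here’s a minimal sketch of that first step, assuming hypothetical pinhole-camera intrinsics – the constants below are placeholders, not the Kinect’s actual calibration:

```python
import numpy as np

# Hypothetical pinhole-camera intrinsics (placeholders, NOT the Kinect's
# real calibration): focal lengths fx, fy and principal point cx, cy.
FX, FY = 525.0, 525.0
CX, CY = 319.5, 239.5

def depth_to_point_cloud(depth_m):
    """Back-project a depth image (meters, shape H x W) into an N x 3 cloud.

    A pixel (u, v) with depth z maps to camera-space coordinates
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Fake depth frame standing in for a real capture; fusing many such clouds,
# pose by pose, is (in essence) how a waved sensor builds a model of a vase.
cloud = depth_to_point_cloud(np.random.uniform(0.5, 3.0, (480, 640)))
print(cloud.shape)  # -> (307200, 3)
```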

Gibson’s right. We haven’t yet moved beyond our need to mark what belongs to which world, digital or physical, so we constantly consecrate the distinction with our language. Ironically, some of that very language was created by him: “cyberspace,” a word Gibson coined in his 1982 story “Burning Chrome.” During the conversation today, led by fellow faculty member and author Bill Savage, Gibson said he’s less interested in the word’s rise than in seeing it die out. He sees its use as a hallmark of our distancing ourselves from who we are as mediated by computer technology. He thinks the term is starting to go out of use, and he’s happy about that — in his view, there’s no need for a word for a space whose coordinates we are constantly moving through, as we do each time we log on to Twitter, Facebook, Google+, and other digital extensions of self. It’s not cyberspace anymore: it’s our space.

It seemed inevitable that a question about the Singularity would be put to Gibson in the Q&A. Sure enough, it was the final note, and Gibson dispatched it with typical incisiveness. The Singularity, he said, is the Geek Rapture. The world will not change in that way. Like our gradual entrance into cyberspace, now complete enough that marking that world with a separate term seems quaint, Gibson said, we will eventually find ourselves sitting on the other side of a whole bunch of cool hardware. But he feels our belief that it will be a sudden, quasi-religious transformation (perhaps with Cylon guns blazing?) is positively 4th century in its thinking.

When Will We Be Transhuman? Seven Conditions for Attaining Transhumanism

By Kyle Munkittrick | July 16, 2011 9:53 am

The future is impossible to predict. But that’s not going to stop people from trying. We can at least pretend to know where it is we want humanity to go. We hope that the laws we craft, the technologies we invent, and our social habits and ways of thinking are small forces that, combined over time, move our species towards a better existence. The question is: how will we know if we are making progress?

As a movement and a philosophy, transhumanism argues for a future of ageless bodies, transcendent experiences, and extraordinary minds. Not everyone supports every aspect of transhumanism, but you’d be amazed at how neatly current political struggles and technological progress point toward a transhuman future. Transhumanism isn’t just about cybernetics and robot bodies. Social and political progress must accompany the technological and biological advances for transhumanism to become a reality.

But how will we be able to tell when the pieces finally do fall into place? I’ve been trying to answer that question ever since Tyler Cowen at Marginal Revolution was asked a while back by his readers: What are the exact conditions for counting “transhumanism” as having been attained? In an attempt to answer, I responded with what I saw as the three key indicators:

  1. Medical modifications that permanently alter or replace a function of the human body become prolific.
  2. Our social understanding of aging loses the “virtue of necessity” aspect and society begins to treat aging as a disease.
  3. Rights discourse would shift from debating whom we include among humans (e.g., should homosexuals have marriage rights?) to a system flexible enough to easily bring in sentient non-humans.

As I groped through the intellectual dark for these three points, it became clear that the precise technology, and how it worked, was unimportant. What matters instead is how technology may change our lives and our ways of living. Unlike the infamous jetpack, which defined the failed futurama of the 20th century, the 21st century needs broader progress markers. Here are seven things to look for in the coming centuries that will let us know whether transhumanism is here. Read More

The AI Singularity is Dead; Long Live the Cybernetic Singularity

By Kyle Munkittrick | June 25, 2011 9:45 am

The nerd echo chamber is reverberating this week with furious debate over Charlie Stross’ doubts about the possibility of an artificial “human-level intelligence” explosion – also known as the Singularity. As currently defined, the Singularity will be an event in the future at which artificial intelligence reaches human-level intelligence. At that point, the AI (i.e., AI_n) will reflexively begin to improve itself and to build AIs more intelligent than itself (i.e., AI_n+1), resulting in an exponential explosion of intelligence toward near-deity levels of super-intelligent AI. After reading over the debates, I’ve come to the conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate.
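
Stated as a toy recurrence, the bootstrap argument is easy to see. Here’s a minimal sketch – my own illustration of the argument’s structure, not a model anyone in the debate actually proposed – showing that everything hinges on the assumed improvement factor:

```python
# Toy model of the "intelligence explosion" recurrence: each generation
# builds a successor whose capability is its own, scaled by some factor.
# This is an illustration of the argument's shape, not anyone's real model.

def bootstrap(initial=1.0, factor=1.1, generations=20):
    """Capability of AI_0 .. AI_n under capability(n+1) = factor * capability(n)."""
    levels = [initial]
    for _ in range(generations):
        levels.append(levels[-1] * factor)
    return levels

# factor > 1 "explodes" exponentially; factor <= 1 fizzles out.
for f in (0.9, 1.0, 1.5):
    print(f"factor={f}: AI_20 at {bootstrap(factor=f)[-1]:.2f}x starting level")
```

The whole dispute, in this framing, is over whether any such factor exceeds one and stays there.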

I’ve already made my case for why I’m not too concerned, but it’s always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross, who knows a thing or two about the Singularity and futuristic speculation. It’s the kind of thing this blog exists to cover: a science fiction author tackling the rational scientific possibility of something about which he has written. In a post entitled “Three arguments against the singularity,” Stross argues that, in short, Santa Claus doesn’t exist:

This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs “intelligently”. But it will be the intelligence of the serving hand rather than the commanding brain, and we’re only at risk of disaster if we harbour self-destructive impulses.

We may eventually see mind uploading, but there’ll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick’s experience machine thought experiment for real, I’m not sure we’d be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.

I am thankful that many of the fine readers of Science Not Fiction are avowed skeptics who raise a wary eyebrow at discussions of the Singularity. Given his stature in the science fiction and speculative science community, Stross’ comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian analysis one does) in the Rapture of the Nerds have two holy grails, which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights. In one corner, we have Singularitarians Michael Anissimov of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel. In the other, we have the excellent Alex Knapp of Forbes’ Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson. I’ll spare you all the back and forth (and all of Goertzel’s infuriating emoticons) and cut to the point being debated. To paraphrase and summarize, the argument is as follows:

1. Stross’ point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.

2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.

3. Skeptic rebuttal: Hanson argues that (A) “intelligence” is an ill-defined catch-all, like “betterness,” and the ambiguity of the word renders the claims of Singularitarians difficult or impossible to disprove (i.e., special pleading). Knapp argues that (B) computers and AI are excellent at specific types of thinking and at augmenting human thought (i.e., Kasparov’s Advanced Chess). Even if one grants that AI could reach human or beyond-human level, the nature of that intelligence would be neither independent nor self-motivated nor sufficiently well-rounded and, as a result, “bootstrapping” intelligence explosions would not happen as Singularitarians foresee.

In essence, the debate boils down to: “human intelligence is like this, AI is like that, never the twain shall meet. But can they parallel one another?” The premise is false, which makes the question useless. So what we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence. Read More

If Doctors Need Pit Crews, Tricorders Should Be Part of the Team

By Kyle Munkittrick | May 26, 2011 9:54 pm

Health care is broken. In the US, quality of care is tanking. Even in countries with successful universal health care systems, costs are rising too fast for the systems to cope. So what do we do?

Atul Gawande, who knows a thing or two about improving health care, argues in his commencement address at Harvard that doctors need pit crews:

We are at a cusp point in medical generations. The doctors of former generations lament what medicine has become. If they could start over, the surveys tell us, they wouldn’t choose the profession today. They recall a simpler past without insurance-company hassles, government regulations, malpractice litigation, not to mention nurses and doctors bearing tattoos and talking of wanting “balance” in their lives. These are not the cause of their unease, however. They are symptoms of a deeper condition—which is the reality that medicine’s complexity has exceeded our individual capabilities as doctors.

Gawande has two main arguments. First, when doctors use checklists, they prevent errors and quality of care goes way up. Second, doctors need to stop acting like autonomous problem solvers and see themselves as members of a tight-knit team. Gawande is one of the few sane voices in the health care debate. Later in his speech, however, he says that the solution to the health care conundrum is not technology. To a large degree, I agree with him. But not completely. Tech still has a big role to play. If we take a closer look at Dune and Star Trek, we’ll see why Qualcomm and the X-Prize Foundation are ponying up 10 million bucks to fund a piece of medical technology that could bring Gawande’s dream of team-based medicine a bit closer to reality. Read More

Transhumanism: A Secular Sandbox for Exploring the Afterlife?

By Malcolm MacIver | February 28, 2011 1:35 am

I am a scientist and academic by day, but by night I’m increasingly called upon to talk about transhumanism and the Singularity. Last year, I was science advisor to Caprica, a show that explored relationships between uploaded digital selves and real selves. Some months ago I participated in a public panel on “Mutants, Androids, and Cyborgs: The science of pop culture films” for Chicago’s NPR affiliate, WBEZ. This week brings a panel at the Directors Guild of America in Los Angeles, entitled “The Science of Cyborgs,” on interfacing machines to living nervous systems.

The latest panel to be added to my list is a discussion about the first transhumanist opera, Tod Machover’s “Death and the Powers.” The opera is about an inventor and businessman, Simon Powers, who is approaching the end of his life. He decides to create a device (called The System) that he can upload himself into (hmm, I wonder who this might be based on?). After Act 2, the entire set, including a host of OperaBots and a musical chandelier (created at the MIT Media Lab), becomes the physical manifestation of the now incorporeal Simon Powers, whose singing we still hear but who has disappeared from the stage. Much of the opera explores how his relationships with his wife and daughter change post-uploading. They ask whether The System is really him. They wonder whether they should follow his pleas to join him, and whether life will still be meaningful without death. The libretto, by the renowned Robert Pinsky, renders these questions in beautiful poetry. The opera opens in Chicago in April.

These experiences have been fascinating. But I can’t help wondering, what’s with all the sudden interest in transhumanism and the singularity? Read More

I'll Take "Corporate Stiffs on Cheesy Sets" for $200

By Malcolm MacIver | February 17, 2011 12:35 pm

Was it just me, or was there something faintly bizarre about yesterday’s historic ass-whooping of man by machine? Maybe it was Brad Rutter’s increasingly frantic swaying as Watson took the lead and asked for yet another clue in its stilted, strangely mis-timed way. Perhaps it was the effect of the last corporate stiff of the event – in front of a stone-wall backdrop that seemed a parody of cheesy corporate décor – telling us where Watson’s winnings will go, all in a monotone that would make Al Gore jealous. Or maybe it was Alex Trebek’s nonchalance after the historic event, as he immediately turned his attention to pitching the next day’s all-teen tournament. Somehow I expected balloons and confetti to descend from the ceiling, maybe with the voice of HAL in the background: “I’m sorry Ken, but you were really improving from your performance yesterday. Would you mind taking out the garbage?” The most important intelligence test of machine versus man in decades sails by with hardly the rattle of a plastic fern.

Besides the very impressive technical achievement of Watson, IBM should be congratulated for managing to turn three episodes of Jeopardy! into a three-episode-long infomercial for its brand. We saw breathless executives tell us how Watson was a real game-changer for medicine, genomics, and spiky hairdos for avatars. We saw the lead engineers puzzling over mathematical squiggles written on staggered layers of sliding glass panels – something we’ve seen before in an Intel commercial, where it was needed for a visual joke to work, and so obviously useless for doing real work that in this context it seems an insult to viewers.

Read More

Robots That Evolve Like Animals Are Tough and Smart—Like Animals

By Malcolm MacIver | February 14, 2011 6:33 pm

People who work in robotics prefer not to highlight a reality of our work: robots are not very reliable. They break, all the time. This applies to all research robots, which typically flake out just as you’re giving an important demo to a funding agency or to someone you’re trying to impress. My fish robot is back in the shop, again, after a few of its very rigid and very thin fin rays broke. Industrial robots, such as those you see on car assembly lines, do better only by operating in extremely predictable, structured environments, doing the same thing over and over again. Home robots? If you buy a Roomba, be prepared to adjust your floor plan so that it doesn’t get stuck.

What’s going on? The world is constantly throwing curveballs at robots that weren’t anticipated by the designers. In a novel approach to this problem, Josh Bongard has recently shown how we can use the principles of evolution to make a robot’s “nervous system”—I’ll call it the robot’s controller—robust against many kinds of change. This study was done using large amounts of computer simulation time (it would have taken 50–100 years on a single computer), running a program that can simulate the effects of real-world physics on robots.

What he showed is that if we force a robot’s controller to work across widely varying robot body shapes, the robot can learn faster and be more resistant to knocks that might leave your home robot a smoking pile of motors and silicon. It’s a remarkable result, one that offers a compelling illustration of why intelligence, in the broad sense of adaptively coping with the world, is about more than just what’s above your shoulders. How did the study show it?
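
To make the idea concrete, here’s a deliberately stripped-down sketch – not Bongard’s actual code or physics simulator – of evolution in which a controller’s fitness is averaged across several body shapes, so that survivors must generalize rather than overfit a single body:

```python
import random

# The "body" is reduced to a single gain the controller must compensate for,
# and the controller to a single parameter. Both are stand-ins for the far
# richer simulated morphologies and controllers used in the real study.
BODY_SHAPES = [0.5, 1.0, 2.0]

def fitness(controller, body):
    # Hypothetical task: score is best when controller * body == 1 (think of
    # it as a balanced gait). No single controller is perfect for every body,
    # so evolution is forced to find a robust compromise.
    return -abs(controller * body - 1.0)

def avg_fitness(controller):
    return sum(fitness(controller, b) for b in BODY_SHAPES) / len(BODY_SHAPES)

def evolve(pop_size=50, generations=100, mutation=0.1):
    pop = [random.uniform(0.0, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=avg_fitness, reverse=True)
        survivors = pop[: pop_size // 2]                    # selection
        pop = survivors + [c + random.gauss(0, mutation)    # mutation
                           for c in survivors]
    return max(pop, key=avg_fitness)

best = evolve()
print(f"best controller {best:.3f}, average fitness {avg_fitness(best):.3f}")
```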

Read More

How to Be A More Human Human

By Kyle Munkittrick | February 11, 2011 12:31 pm

Brian Christian is an exemplar of the human species. In 2009, Christian participated in the annual Loebner Prize competition, which is based on Alan Turing’s eponymous test for determining whether a computer can “think” like a human. Christian did not submit an A.I. he had programmed, but his own mind. Christian was a “confederate,” that is, one of the humans representing humanity in the competition. Five A.I. programs and five humans compete to be judged the most human:

During the competition, each of four judges will type a conversation with one of us for five minutes, then the other, and then will have 10 minutes to reflect and decide which one is the human. Judges will also rank all the contestants—this is used in part as a tiebreaking measure. The computer program receiving the most votes and highest ranking from the judges (regardless of whether it passes the Turing Test by fooling 30 percent of them) is awarded the title of the Most Human Computer.
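
The scoring scheme described above is simple enough to sketch in a few lines. This is just my reading of the passage, with made-up data – the Loebner Prize’s official rules and tiebreaks may differ:

```python
# Each judge casts one "this contestant is the human" vote and ranks all
# contestants (1 = most human). Votes decide; average rank breaks ties.
# Made-up data, and only a sketch of the scheme as the passage describes it.

votes = {"bot_A": 1, "bot_B": 2, "bot_C": 0}   # human votes won per program
ranks = {"bot_A": [3, 4, 2, 5],                # one ranking per judge
         "bot_B": [2, 3, 4, 1],
         "bot_C": [6, 5, 7, 6]}

def score(contestant):
    avg_rank = sum(ranks[contestant]) / len(ranks[contestant])
    return (votes[contestant], -avg_rank)  # more votes first, then better rank

most_human_computer = max(votes, key=score)
print(most_human_computer)  # -> bot_B
```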

What makes the competition so intriguing is that, as all contestants are ranked, be they human or computer, there is not only an award for the Most Human Computer but also an award for the Most Human Human. Brian Christian is one of the vetted few humans who have earned the accolade. He describes his experience in the competition in his outstanding article “Mind vs. Machine” in The Atlantic. The article presents a snippet of what will surely be a wonderful book, The Most Human Human.

Like Sherry Turkle, Christian argues that machines are throwing our humanity into stark relief. Yet he sees human-like computers not as automatons dragging us into banality, but as imperfect mirrors, reminding us of what makes us human by what they cannot reflect. I suspect it’s Christian’s double life as a science journalist and poet that drew him to consider our dual-natured human brain:

Perhaps the fetishization of analytical thinking, and the concomitant denigration of the creatural—that is, animal—and bodily aspects of life are two things we’d do well to leave behind. Perhaps at last, in the beginnings of an age of AI, we are starting to center ourselves again, after generations of living slightly to one side—the logical, left-hemisphere side. Add to this that humans’ contempt for “soulless” animals, our unwillingness to think of ourselves as descended from our fellow “beasts,” is now challenged on all fronts: growing secularism and empiricism, growing appreciation for the cognitive and behavioral abilities of organisms other than ourselves, and, not coincidentally, the entrance onto the scene of an entity with considerably less soul than we sense in a common chimpanzee or bonobo—in this way AI may even turn out to be a boon for animal rights.

Among the many conclusions Christian draws is that to be more human you must be yourself. But this is no idle command. The process of being oneself is an active, conscious, and, in some cases, laborious task. Consider your average conversation at a cocktail party – safe topics, non-confrontational questions, scripted answers. Part of Christian’s message, it seems, is not that we should worry about a computer sounding human, but that we humans may be making the task too easy. So go forth and be quirky, odd, unique, expressive, honest, clever, eccentric, and above all yourself; in a phrase, be more human.

The Turkle Test

By Kyle Munkittrick | February 6, 2011 9:24 am

Can you have an emotional connection with a robot? Sherry Turkle, Director of the MIT Initiative on Technology and Self, believes you certainly could. Whether or not you should is the question. People, especially children, project personalities and emotions onto rudimentary robots. As the Chronicle of Higher Education article on her work shows, the result of believing a robot can feel is not always happy:

One day during Turkle’s study at MIT, Kismet malfunctioned. A 12-year-old subject named Estelle became convinced that the robot had clammed up because it didn’t like her, and she became sullen and withdrew to load up on snacks provided by the researchers. The research team held an emergency meeting to discuss “the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child,” as Turkle describes in [her new book] Alone Together.

We want to believe our robots love us. Movies like WALL-E, The Iron Giant, Short Circuit, and A.I. are all based on the simple idea that robots can develop deep emotional connections with humans. For fans of the Half-Life video game series, Dog, a large scrapheap monstrosity with a penchant for dismembering hostile aliens, is one of the most lovable and loyal characters in the game. Science fiction is packed with robots that endear themselves to us, such as Data from Star Trek, the replicants in Blade Runner, and Legion from Mass Effect. Heck, even R2-D2 and C-3PO seem endeared to one another. And Futurama has a warning for all of us.

Yet these lovable mechanoids are not what Turkle is critiquing. Turkle is no Luddite, and she does not strike me as a speciesist. What Turkle is critiquing is contentless performed emotion. Robots like Kismet and Cog are representative of a group of robots in which brains take second place to bonding. Humans have evolved to react to subtle emotional cues that allow us to recognize other minds, other persons. Kismet and Cog have rather rudimentary A.I., but very advanced mimicking and response abilities. The result is that they seem to understand us. Part of what makes HAL-9000 terrifying is that we cannot see it emote. HAL simply processes and acts.
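
Turkle’s phrase “contentless performed emotion” has a very literal software reading: a lookup from detected social cue to canned display, with nothing behind it. A toy sketch of that logical shape – my own illustration, nothing like Kismet’s real architecture:

```python
# Toy illustration of "contentless performed emotion": detected cue in,
# canned performance out, with no model of the person at all.
DISPLAYS = {
    "smile":      "smile back, widen eyes",
    "loud_voice": "flinch, lower gaze",
    "silence":    "tilt head, raise brows",
}

def respond(detected_cue: str) -> str:
    # The robot "understands" nothing; it maps a cue to a performance,
    # and falls back to an idle behavior for anything unrecognized.
    return DISPLAYS.get(detected_cue, "idle blink")

for cue in ("smile", "loud_voice", "sonnet_recital"):
    print(cue, "->", respond(cue))
```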

On the one hand, we have empty emotional aping; on the other, faceless super-computers. What are we to do? Are we trapped between the options of the mindless bot with the simulated smile or the sterile super-mind calculating the cost of lives? Read More

Does AI Need Guts to Get to the Singularity?

By Malcolm MacIver | February 2, 2011 9:28 pm

We all have our favorite capacity or organ whose absence we blame for modern-day AI’s shortcomings: the thing we think machines need before they can be truly intelligent. For some it’s consciousness; for others it’s common sense, emotion, heart, or soul. What if it came down to a gut? What if we need to make our AI capable of getting hungry, and of slaking that hunger with food, before the next real breakthrough? There’s some new information on the role of gut microbes in brain development that’s worth some mental mastication in this regard (PNAS via PhysOrg).
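
In computational terms, “having a gut” can be sketched as a homeostatic drive: reward comes from reducing an internal deficit such as hunger, rather than from the outside world directly. A toy sketch of that framing – my own, not anything from the PNAS study:

```python
import random

# Toy homeostatic agent: internal "hunger" grows each step, and reward is
# earned only by actions that reduce it. A sketch of the framing, not a
# real architecture or anything from the PNAS study.
hunger = 0.0
total_reward = 0.0
for step in range(100):
    hunger += 0.1                            # metabolism: the gut empties
    action = random.choice(["forage", "rest"])
    if action == "forage" and hunger > 0.5:  # eating pays only when hungry
        relief = min(hunger, 1.0)
        total_reward += relief               # reward = drive reduction
        hunger -= relief
print(f"reward accumulated by slaking hunger: {total_reward:.1f}")
```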

Read More
