At night in the rivers of the Amazon Basin there buzzes an entire electric civilization of fish that “see” and communicate by discharging weak electric fields. These odd characters, swimming batteries which go by the name of “weakly electric fish,” have been the focus of research in my lab and those of many others for quite a while now, because they are a model system for understanding how the brain works. (While their brains are a bit different, we can learn a great deal about ours from them, just as we’ve learned much of what we know about genetics from fruit flies.) There are now well over 3,000 scientific papers on how the brains of these fish work.
Recently, my collaborators and I built a robotic version of these animals, focusing on one in particular: the black ghost knifefish. (The name is apparently derived from a native South American belief that the souls of ancestors inhabit these fish. For the sake of my karmic health, I’m hoping that this is apocryphal.) My university, Northwestern, did a press release with a video about our “GhostBot” last week, and I’ve been astonished at its popularity (nearly 30,000 views as I write this, thanks to coverage by places like io9, Fast Company, PC World, and msnbc). Given this unexpected interest, I thought I’d post a bit of the story behind the ghost.
I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down. We’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.
Michael Anissimov tries to stoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity.”
Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.
Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.
Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while, I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, and in fact every argument about the danger of the Singularity, necessarily presumes one single thing: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, it will not.
Here’s the extended version of our interview with director Joe Kosinski from the December issue of DISCOVER, in which the first-time feature film director talks about reinventing the light cycle, building suits with on-board power, and how time passes in Tron compared to the real world.
Why return to Tron, and why now?
The original Tron was conceptually so far ahead of its time with this notion of a digital version of yourself in cyberspace. I think people had a hard time relating to it in the early 1980s. We’ve caught up to that idea—today it’s kind of second nature.
Visually, Tron was like nothing else I’d ever seen before: completely unique. Nothing looked like it before, and nothing has looked like it since—you know, hopefully until our movie comes out.
How did you think about representing digital space as a physical place?
Where the first movie tried to use real-world materials to look as digital as possible, my approach has been the opposite: to create a world that felt real and visceral. The world of Tron has evolved [since it’s been] sitting isolated, disconnected from the Internet for the last 28 years. In that time, it evolved into a world where the simulation has become so realistic that it feels like we took motion picture cameras into this world and shot the thing for real. It has the style and the look of Tron, but it’s executed in a way that you can’t tell what’s real and what’s virtual. I built as many sets as I could. We built physically illuminated suits. The thing I’m most proud of is actually creating a fully digital character, who’s one of the main characters in our movie.
What did you keep from Tron, and what evolved?
Independence Day has one of my favorite hero duos of all time: Will Smith and Jeff Goldblum. Brawn and brains, flyboy and nerd, working together to take out the baddies. It all comes down to one flash of insight on the part of a drunk Goldblum after he’s chastised by his father. Cliché eureka! moments like Goldblum’s realization that he can give the mothership a “cold” are great until you realize one thing: if Goldblum hadn’t been as smart as he was, the movie would have ended much differently. No one in the film was even close to figuring out how to defeat the aliens. Will Smith was in a distant second place, and he had only discovered that they were vulnerable to face punches. The hillbilly who flew his jet fighter into the alien destruct-o-beam doesn’t count, because he needed a force-field-free spaceship for his trick to work. If Jeff Goldblum hadn’t been a super-genius, humanity would have been annihilated.
Every apocalyptic film seems to trade on the idea that there will be some lone super-genius to figure out the problem. In The Day The Earth Stood Still (both versions) Professor Barnhardt manages to convince Klaatu to give humanity a second look. Cleese’s version of the character had a particularly moving “this is our moment” speech. Though it’s eventually the love between a mother and child that triggers Klaatu’s mercy, Barnhardt is the one who opens Klaatu to the possibility. Over and over we see the lone super-genius helping to save the world.
Shouldn’t we want, oh, I don’t know, at least more than one super-genius per global catastrophe? I’d like to think so. And where might we get some more geniuses, you may ask? We make them.
WBEZ, the Chicago affiliate of National Public Radio, recently gathered together several of my fellow science and engineering researchers at Northwestern University to talk about the science of science fiction films. The panel, and just short of 500 people from the community and university, watched clips from Star Wars, Gattaca, Minority Report, Eternal Sunshine of the Spotless Mind, and The Matrix. I was the robot/AI guy commenting on the robot spiders of Minority Report; Todd Kuiken, a designer of neuroprosthetic limbs, commented on Luke getting a new arm in Star Wars: The Empire Strikes Back; Tom Meade, a developer of medical biosensors and new medical imaging techniques, commented on Gattaca; and Catherine Wooley, who studies memory, commented on Eternal Sunshine.
The full audio of the event can be streamed or downloaded from here.
CLARICE: Zoe Graystone was Lacy’s best friend. A real tragedy for all of us. She was very special. I mean, she was brilliant.
NESTOR: At computer stuff, right? That’s my major. Did you know that there are bits of software that you use every day that were written decades ago?
LACY: Is that true? Oh, that’s amazing.
NESTOR: Yeah. You write a great program, and, you know, it can outlive you. It’s like a work of art, you know? Maybe Zoe was an artist. Maybe her work… Will live on.
From: Rebirth, Season 1.0 of Caprica
I’m excited that today Caprica is back on the air for the second half of its first season. As the show’s science advisor, I thought I’d pay homage to its reentry into our living rooms with some thoughts about how the show is dealing with the clash between the mortality of its living characters and the immortality of its virtual characters.
As part of DISCOVER’s 30th anniversary celebration, the magazine invited 11 eminent scientists to look forward and share their predictions and hopes for the next three decades. But we also want to turn this over to Science Not Fiction’s readers: How do you think science will improve the world by 2040?
Below are short excerpts of the guest scientists’ responses, with links to the full versions:
In a recent article, Search for Extraterrestrial Intelligence (SETI) astronomer Seth Shostak makes an intriguing claim: SETI should start pointing its telescopes toward corners of the known universe that would be friendly not just to intelligent aliens but to artificial alien intelligence. The basis of his suggestion is that any form of life intelligent enough to generate the kinds of radio signals that SETI is looking for would be “quickly” superseded by an artificial intelligence of their creation. Going by our own rate of progress toward AI, Shostak suggests that this radio-to-AI delay is a small handful of centuries.
These artificial intelligences, not likely to have had the “nostalgia module” installed, may quickly flee the home planet like a teenager trying to pretend it isn’t related to its parents. If nothing else, they will likely need to do this to find further resources such as materials and energy. Where would they want to go? Shostak speculates they may go to places where large amounts of energy can be obtained, such as near large stars or black holes.
Stephen Hawking imagines aliens covering stars with mirrors to generate enough power for wormholes
Stephen Hawking has suggested that one reason to go to high-energy regions would be to make wormholes through space-time to travel vast distances quickly. These areas are not hospitable to life as we know it, and so are not currently the target of SETI’s telescopes searching for signals of such life.
While it’s clear that we have a lot going for ourselves right out of the womb, it’s equally clear that one of our most admirable qualities is that we rapidly “get it” – we learn languages, skills for manipulating objects, hip hop dance moves, recipes for coconut mojitos, and how to charm people into liking us (ideally, in that order). Rather than experiential learning like this, early AI work focused on sophisticated reasoning problems. The touchstone for these efforts was Alan Turing’s original effort to mimic the reasoning processes of mathematicians engaged in solving a math problem – an effort that gave us many great things, particularly a distillation of what it means for something to be computable that stands as one of the great intellectual accomplishments of the twentieth century. That form of AI, while successful in particular domains — chess playing and expert systems, for example — has been less successful in solving problems of ongoing embodied activity, such as the aforementioned coconut mojito making. What if, instead of mimicking a mathematician trying to solve a math problem, Alan Turing had decided to mimic a scientist trying to determine the validity of a hypothesis? According to some developmental psychologists, in doing so we’d actually be emulating the reasoning processes of an infant, and thus, potentially, we’d be unlocking the great power of experiential learning.
Having robots with minds implementing the scientific process rather than math problem solving is essentially what’s happening in a few corners of robotics, most recently with the Xpero project, an effort to develop an embodied cognitive system that learns about its world much like an infant would. It’s one of a host of robo-infants being worked on (here’s a nice overview graphic). This approach has led to some very impressive achievements including an “evil starfish” robot that can quickly learn how to control its body after several of its “limbs” have been chopped off.
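The learning loop these robo-infants run is, at heart, the scientific method: maintain several competing hypotheses about how your body or world works, pick the experiment that best discriminates among them, and throw out whatever the outcome falsifies. Here is a minimal toy sketch of that idea in Python; the candidate “motor models” and the hidden true model are made up for illustration and don’t correspond to any real robot’s code.

```python
# Toy sketch of hypothesis-driven experiential learning: the agent
# doesn't know which candidate model describes its body, so it runs
# the experiments that best separate the surviving hypotheses.
HYPOTHESES = {
    "double": lambda a: 2 * a,
    "square": lambda a: a * a,
    "negate": lambda a: -a,
}
true_model = HYPOTHESES["square"]  # the world's answer, unknown to the agent

def experiment(candidates, actions=range(-3, 4)):
    """Run the most informative action and discard falsified hypotheses."""
    # Most informative action = the one whose predicted outcomes
    # differ the most across the surviving candidates.
    action = max(actions,
                 key=lambda a: len({f(a) for f in candidates.values()}))
    outcome = true_model(action)  # "perform" the experiment on the world
    # Keep only the hypotheses consistent with what actually happened.
    return {name: f for name, f in candidates.items()
            if f(action) == outcome}

candidates = dict(HYPOTHESES)
while len(candidates) > 1:
    candidates = experiment(candidates)

print("surviving hypothesis:", next(iter(candidates)))  # -> square
```

Real systems like the resilient starfish robot do something analogous but continuous: they maintain a population of candidate self-models and choose motor commands that maximize disagreement among them, which is why they can re-learn their own body plan so quickly after losing a limb.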
Engineer, inventor, and Singularity true-believer Ray Kurzweil thinks we can reverse-engineer the brain in a couple of decades. After Gizmodo misreported Kurzweil’s Singularity Summit prediction that we’d reverse-engineer the brain by 2020 (he predicted 2030), the blogosphere caught fire. PZ Myers’ trademark incendiary arguments kick-started the debate when he described Kurzweil as the “Deepak Chopra for the computer science cognoscenti.” Of course, Kurzweil responded, to which Myers retorted. Hardly a new topic, the Singularity has already taken some healthy blows from Jaron Lanier, John Pavlus and John Horgan. The fundamental failure of Kurzweil’s argument is summarized by Myers:
My complaint isn’t that he has set a date by which we’ll understand the brain, but that he has provided no baseline value for his exponential growth claim, and has no way to measure how much we know now, how much we need to know, and how rapidly we will acquire that knowledge.