Can you have an emotional connection with a robot? Sherry Turkle, Director of the MIT Initiative on Technology and Self, believes you certainly could. Whether or not you should is the question. People, especially children, project personalities and emotions onto rudimentary robots. As the Chronicle of Higher Education article on her work shows, the result of believing a robot can feel is not always happy:
One day during Turkle’s study at MIT, Kismet malfunctioned. A 12-year-old subject named Estelle became convinced that the robot had clammed up because it didn’t like her, and she became sullen and withdrew to load up on snacks provided by the researchers. The research team held an emergency meeting to discuss “the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child,” as Turkle describes in [her new book] Alone Together.
We want to believe our robots love us. Movies like Wall-E, The Iron Giant, Short Circuit and A.I. are all based on the simple idea that robots can develop deep emotional connections with humans. For fans of the Half-Life video game series, Dog, a large scrapheap monstrosity with a penchant for dismembering hostile aliens, is one of the most lovable and loyal characters in the game. Science fiction is packed with robots that endear themselves to us, such as Data from Star Trek, the replicants in Blade Runner, and Legion from Mass Effect. Heck, even R2-D2 and C-3PO seem endeared to one another. And Futurama has a warning for all of us.
Yet these lovable mechanoids are not what Turkle is critiquing. Turkle is no Luddite, and she does not strike me as a speciesist. What she is critiquing is contentless, performed emotion. Robots like Kismet and Cog belong to a class of robots in which brains take a back seat to bonding. Humans have evolved to react to subtle emotional cues that allow us to recognize other minds, other persons. Kismet and Cog have rather rudimentary A.I. but very advanced mimicking and response abilities; the result is that they seem to understand us. Part of what makes HAL-9000 terrifying is that we cannot see it emote. HAL simply processes and acts.
On the one hand, we have empty emotional aping; on the other, faceless super-computers. What are we to do? Are we trapped between the mindless bot with the simulated smile and the sterile super-mind calculating the cost of lives?
We all have our favorite capacity/organ that we fault modern-day AI for lacking, the one we think it needs before we get truly intelligent machines. For some it’s consciousness; for others it’s common sense, emotion, heart, or soul. What if it came down to a gut? What if we need to give our AI the capacity to get hungry, and to slake that hunger with food, for the next real breakthrough? There’s some new information on the role of gut microbes in brain development that’s worth some mental mastication in this regard (PNAS via PhysOrg).
At night in the rivers of the Amazon Basin there buzzes an entire electric civilization of fish that “see” and communicate by discharging weak electric fields. These odd characters, swimming batteries which go by the name of “weakly electric fish,” have been the focus of research in my lab and those of many others for quite a while now, because they are a model system for understanding how the brain works. (While their brains are a bit different, we can learn a great deal about ours from them, just as we’ve learned much of what we know about genetics from fruit flies.) There are now well over 3,000 scientific papers on how the brains of these fish work.
Recently, my collaborators and I built a robotic version of these animals, focusing on one in particular: the black ghost knifefish. (The name is apparently derived from a native South American belief that the souls of ancestors inhabit these fish. For the sake of my karmic health, I’m hoping that this is apocryphal.) My university, Northwestern, did a press release with a video about our “GhostBot” last week, and I’ve been astonished at its popularity (nearly 30,000 views as I write this, thanks to coverage by places like io9, Fast Company, PC World, and msnbc). Given this unexpected interest, I thought I’d post a bit of the story behind the ghost.
I have a confession: I used to be all about the Singularity. I thought it was inevitable. I was certain that some sort of Terminator/HAL-9000 scenario would unfold when ECHELON achieved sentience. I was sure The Second Renaissance from The Animatrix was a fairly accurate depiction of how things would go down: we’d make smart robots, we’d treat them poorly, they’d rebel and slaughter humanity. Now I’m not so sure. I have big, gloomy doubts about the Singularity.
Michael Anissimov tries to stoke the flames of fear over at Accelerating Future with his post “Yes, The Singularity is the Single Biggest Threat to Humanity.”
Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It’s hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can’t. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.
Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.
Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while I couldn’t put my finger on the problem, until I re-read Anissimov’s post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The post, in fact every argument about the danger of the Singularity, rests on a single presumption: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, it will not.
The Singularity seems to be getting less and less near. One of the big goals of Singularity hopefuls is to be able to put a human mind onto (into? not sure of the proper preposition here) a non-biological substrate. Most of the debates have revolved around computer analogies. The brain is hardware, the mind is software. Therefore, to run the mind on different hardware, it just has to be “ported” or “emulated” the way a computer program might be. Timothy B. Lee (not the internet-inventing one) counters Robin Hanson’s claim that we will be able to upload a human mind onto a computer within the next couple of decades by dissecting the computer=mind analogy:
You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different from an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.
In short: we know how software is written; we can see the code and rules that govern the system. Not so for the mind, so we guess at the unknowns and test the guesses with simulations. Lee’s post is very much worth the full read, so give it a perusal.
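To make Lee’s distinction concrete, here is a minimal sketch of my own (an illustration, not anything from Lee’s post): an “emulator” of a designed system, a toy instruction set with an exact spec, reproduces its behavior perfectly, while a “simulation” of a natural process, here exponential decay integrated with Euler steps, is only approximately right, and how wrong it is depends on our modeling choices.

```python
import math

# Emulation: a designed system (a toy instruction set) has an exact spec,
# so an emulator can reproduce its behavior perfectly, step for step.
def emulate(program, acc=0):
    for op, arg in program:
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
    return acc

# Simulation: a natural system (decay, dN/dt = -k*N) has no blueprint,
# so we discretize and approximate. The Euler step below is a judgment
# call, and it introduces error relative to the true behavior.
def simulate_decay(n0, k, t, steps):
    dt = t / steps
    n = n0
    for _ in range(steps):
        n += -k * n * dt
    return n

program = [("ADD", 5), ("MUL", 3)]
print(emulate(program))                      # exact, always 15

true_value = 100 * math.exp(-0.5 * 2.0)      # closed-form answer
approx = simulate_decay(100, 0.5, 2.0, 10)   # only approximately right
print(abs(approx - true_value))              # nonzero discretization error
```

Shrinking the step size shrinks the error but never eliminates it, which is Lee’s point: the simulation embodies judgment calls, while the emulator follows a spec.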
Lee got me thinking with his point that “natural systems don’t have designers.” Evolutionary processes have resulted in the brain we have today, but there was no intention or design behind those processes. Our minds are undesigned.
I find that fascinating. In the first place, because it means that simulation will be exceedingly difficult. How do you reverse-engineer something that had no engineer? Second, even if a simulation is successful, it by no means guarantees that we can change the substrate of an existing mind. If the mind is an emergent property of the physical brain, then one can no more move a mind from one substrate to another than one could move a hurricane from one weather system to another. The mind, it may turn out, is fundamentally and essentially tied to the substrate in which it is embodied.
I thought about closing out the year with news of the strawberry genome sequencing project, and dipping into the results from the cocoa genome sequencing project, while perhaps enjoying a rainbow from a solar-powered rainbow-making machine. They all seemed cool and futuristic and almost certainly something we’d find in the land of science fiction.
But then, there it was: A Robot Christmas. Two weeks ago, the team at Robots Podcast put out a call for robotics labs to make holiday videos, and so far six different robotics labs have responded with videos of their machines singing or playing Christmas carols, decorating, and otherwise wishing us season’s greetings. Since I can’t be the only one who wanted to know how our future overlords celebrate the holidays, I thought I’d share. Happy New Year, everyone!
A Robotic Christmas, Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland
Here’s the extended version of our interview with director Joe Kosinski from the December issue of DISCOVER, in which the first-time feature film director talks about reinventing the light cycle, building suits with on-board power, and how time passes in Tron compared to the real world.
Why return to Tron, and why now?
The original Tron was conceptually so far ahead of its time with this notion of a digital version of yourself in cyberspace, an idea I think people had a hard time relating to in the early 1980s. We’ve caught up to that idea—today it’s kind of second nature.
Visually, Tron was like nothing else I’d ever seen before: completely unique. Nothing else looked like it before, and nothing else has looked like it since—you know, hopefully until our movie comes out.
How did you think about representing digital space as a physical place?
Where the first movie tried to use real-world materials to look as digital as possible, my approach has been the opposite: to create a world that felt real and visceral. The world of Tron has evolved [since it’s been] sitting isolated, disconnected from the Internet for the last 28 years. And in that time, it has evolved into a world where the simulation has become so realistic that it feels like we took motion picture cameras into this world and shot the thing for real. It has the style and the look of Tron, but it’s executed in a way that you can’t tell what’s real and what’s virtual. I built as many sets as I could. We built physically illuminated suits. The thing I’m most proud of is actually creating a fully digital character, who’s one of the main characters in our movie.
What did you keep from Tron, and what evolved?
Without getting into the ethics of WikiLeaks’ activities, I’m disturbed that Visa, MasterCard, and PayPal have all seen fit to police the organization by refusing to act as middlemen for donations. The whole affair drives home how dependent we are on a few corporations to make e-commerce function, and how little those corporations guarantee us in the way of rights.
In the short term, we may be stuck, but in the longer term, quantum money could help solve the problem by providing a secure currency that can be used without resort to a broker.
Physicist Steve Wiesner first proposed the concept of quantum money in 1969. He realized that since quantum states can’t be copied, their existence opens the door to unforgeable money.
Heisenberg’s famous Uncertainty Principle says you can either measure the position of a particle or its momentum, but not both to unlimited accuracy. One consequence of the Uncertainty Principle is the so-called No-Cloning Theorem: there can be no “subatomic Xerox machine” that takes an unknown particle, and spits out two particles with exactly the same position and momentum as the original one (except, say, that one particle is two inches to the left). For if such a machine existed, then we could determine both the position and momentum of the original particle—by measuring the position of one “Xerox copy” and the momentum of the other copy. But that would violate the Uncertainty Principle.
…Besides an ordinary serial number, each dollar bill would contain (say) a few hundred photons, which the central bank “polarized” in random directions when it issued the bill. (Let’s leave the engineering details to later!) The bank, in a massive database, remembers the polarization of every photon on every bill ever issued. If you ever want to verify that a bill is genuine, you just take it to the bank.
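The linearity argument behind the No-Cloning Theorem is short enough to sketch here (a standard textbook derivation, not part of the quoted passage). Suppose some unitary operation $U$ could clone two distinct, non-orthogonal states onto a blank register:

```latex
U\,|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle,
\qquad
U\,|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle.

% Quantum mechanics is linear, so applying U to the superposition
% |\chi\rangle = \alpha|\psi\rangle + \beta|\phi\rangle must give
U\,|\chi\rangle|0\rangle
  = \alpha\,|\psi\rangle|\psi\rangle + \beta\,|\phi\rangle|\phi\rangle,

% whereas a true clone of |\chi\rangle would be
|\chi\rangle|\chi\rangle
  = \alpha^2\,|\psi\rangle|\psi\rangle
  + \alpha\beta\,|\psi\rangle|\phi\rangle
  + \alpha\beta\,|\phi\rangle|\psi\rangle
  + \beta^2\,|\phi\rangle|\phi\rangle.
```

The two expressions disagree whenever both $\alpha$ and $\beta$ are nonzero, so no single operation can copy an unknown state. That is exactly what makes the bank’s randomly polarized photons unforgeable: a counterfeiter who doesn’t know the polarizations can’t duplicate them.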
Farming has long evaded true automation. Where manufacturers create controlled environments perfect for precisely attuned machines performing repetitive tasks, the messiness of biology has long made automating growing things extremely challenging. Robots didn’t have the precision to pick things growing at uncertain heights, they didn’t have the judgment to identify ripeness, and they weren’t smart enough to navigate fields or greenhouses of uncertain geometry.
Well, they used to not have those traits.
Earlier this week, the Japanese Agriculture and Food Research Organization presented its strawberry-picking robot: a droid that rolls along a track through fields of strawberries, scans the strawberries with stereoscopic cameras, checks their color, then picks them if they’re ripe. In this way it can whip through 247 acres in 300 hours, far faster than the typical 500 hours the same area takes human pickers.
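For what it’s worth, the reported figures work out to roughly a 1.7× speedup; a quick back-of-the-envelope check (my arithmetic, using the numbers as given):

```python
AREA_ACRES = 247          # area cited in the report
ROBOT_HOURS = 300         # robot picking time for that area
HUMAN_HOURS = 500         # human picking time for the same area

robot_rate = AREA_ACRES / ROBOT_HOURS   # ~0.82 acres/hour
human_rate = AREA_ACRES / HUMAN_HOURS   # ~0.49 acres/hour
speedup = HUMAN_HOURS / ROBOT_HOURS     # time ratio, ~1.67x

print(f"robot: {robot_rate:.2f} acres/hr, human: {human_rate:.2f} acres/hr")
print(f"speedup: {speedup:.2f}x")
```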
Independence Day has one of my favorite hero duos of all time: Will Smith and Jeff Goldblum. Brawn and brains, flyboy and nerd, working together to take out the baddies. It all comes down to one flash of insight from a drunk Goldblum after being chastised by his father. Cliché eureka! moments like Goldblum’s realization that he can give the mothership a “cold” are great until you realize one thing: if Goldblum hadn’t been as smart as he was, the movie would have ended much differently. No one else in the film was even close to figuring out how to defeat the aliens. Will Smith was a distant second, and he had only discovered that they are vulnerable to face punches. The hillbilly who flew his jet fighter into the alien destruct-o-beam doesn’t count, because he needed a force-field-free spaceship for his trick to work. If Jeff Goldblum hadn’t been a super-genius, humanity would have been annihilated.
Every apocalyptic film seems to trade on the idea that there will be some lone super-genius to figure out the problem. In The Day The Earth Stood Still (both versions) Professor Barnhardt manages to convince Klaatu to give humanity a second look. Cleese’s version of the character had a particularly moving “this is our moment” speech. Though it’s eventually the love between a mother and child that triggers Klaatu’s mercy, Barnhardt is the one who opens Klaatu to the possibility. Over and over we see the lone super-genius helping to save the world.
Shouldn’t we want, oh, I don’t know, at least more than one super-genius per global catastrophe? I’d like to think so. And where, you may ask, might we get some more geniuses? We make them.