Can you have an emotional connection with a robot? Sherry Turkle, Director of the MIT Initiative on Technology and Self, believes you certainly could. Whether or not you should is the question. People, especially children, project personalities and emotions onto rudimentary robots. As the Chronicle of Higher Education article on her shows, the result of believing a robot can feel is not always happy:
One day during Turkle’s study at MIT, Kismet malfunctioned. A 12-year-old subject named Estelle became convinced that the robot had clammed up because it didn’t like her, and she became sullen and withdrew to load up on snacks provided by the researchers. The research team held an emergency meeting to discuss “the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child,” as Turkle describes in [her new book] Alone Together.
We want to believe our robots love us. Movies like Wall-E, The Iron Giant, Short Circuit and A.I. are all based on the simple idea that robots can develop deep emotional connections with humans. For fans of the Half-Life video game series, Dog, a large scrapheap monstrosity with a penchant for dismembering hostile aliens, is one of the most lovable and loyal characters in the game. Science fiction is packed with robots that endear themselves to us, such as Data from Star Trek, the replicants in Blade Runner, and Legion from Mass Effect. Heck, even R2-D2 and C-3PO seem endeared to one another. And Futurama has a warning for all of us.
Yet these lovable mechanoids are not what Turkle is critiquing. Turkle is no Luddite, and does not strike me as a speciesist. What Turkle is critiquing is contentless performed emotion. Robots like Kismet and Cog are representative of a group of robots where the brains are second to bonding. Humans have evolved to react to subtle emotional cues that allow us to recognize other minds, other persons. Kismet and Cog have rather rudimentary A.I., but very advanced mimicking and response abilities. The result is they seem to understand us. Part of what makes HAL-9000 terrifying is that we cannot see it emote. HAL simply processes and acts.
On the one hand, we have empty emotional aping; on the other, faceless super-computers. What are we to do? Are we trapped between the options of the mindless bot with the simulated smile or the sterile super-mind calculating the cost of lives?

Turkle’s primary concern, and I believe it is a legitimate one, is communication and technology’s influence upon it. Return to one of my favorite pieces of technology, the smartphone (or app phone, as David Pogue describes it). How often have you found yourself uncontrollably pulling the damn thing out of your pocket to check it, regardless of the situation? I have been at parties with friends, at intimate dinners, at funerals, in meetings, and even presenting for class and had to fight the urge to see which piece of junk email set off vibrations in my pocket. Every spare moment it seems like I’m checking Facebook or Instagram or Twitter or email to see what decontextualized scrap of communication I can consume to slake my thirst for that simple thing communication is supposed to create – community.
And therein we find the crux of the matter. Our communication through a lot of technology is not communication at all. It’s one-way “shouting into the void.” Like the defecting Red October we send out one sonar ping, hoping for one ping, just one ping, in return. Kismet and Cog provide that ping, that perfect response, immediately and wordlessly. Every day it seems like there is an article on how to read body language, to understand the wordless communication that happens every day in every interaction. Turkle’s reaction to Kismet and Cog dovetails nicely with Haraway’s lovely line, “Our machines are disturbingly lively, and we ourselves are frighteningly inert.”
Kismet and Cog are designed to mimic genuine body language. As such, they represent a kind of complementary Turing test. The original test was to see if a human could distinguish between conversation with a human and an A.I. by chatting over a computer terminal. The Turkle test, as we might call it, would be the ability to form a genuine relationship with a human – to show concern, to mirror emotion, to recognize the other self. The Turing test tells us if an A.I. can think like a human, the Turkle test tells us if an A.I. can communicate like a human.
The first group of robots to be subjected to the Turkle test may be just around the corner:
“There are advantages to it not being a person—robots can be seen as not judgmental; people are not at risk of losing face to a robot,” [leader of Kismet project, Cynthia] Breazeal says. “People may be more honest and willing to disclose information to a robot that they might not want to tell their doctor for fear of sounding like a ‘bad’ patient. So robots working with other people can help the patient and the care staff.”
During her research, Turkle visited several nursing homes where residents had been given robot dolls, including Paro, a seal-shaped stuffed animal programmed to purr and move when it is held or talked to. In many cases, the seniors bonded with the dolls and privately shared their life stories with them.
“There are at least two ways of reading these case studies,” she writes. “You can see seniors chatting with robots, telling their stories, and feel positive. Or you can see people speaking to chimeras, showering affection into thin air, and feel that something is amiss.”
Some robotics enthusiasts argue that these sociable machines will soon mature, and that new models may one day be judged as better than humans for many tasks. After all, robots don’t suffer emotional breakdowns, oversleep, or commit crimes.
Hospice and end-of-life care requires a daily heroic effort. I do not deny there are some fantastic and wonderful caretakers, but it would require superhuman abilities to provide a listening, understanding ear to each and every person one is helping live day-to-day. Hospice and elderly care robots designed to pass the Turkle test might provide a solution. Yes, these robots would not really be listening or understanding in the same way a real human hospice worker would. Further, we would need to make sure we avoided the confusion of the little girl from the beginning, ensuring an understanding that yes, this was a robot and, yes, it might break down. But, if we did that, couldn’t hospice bots provide a huge service? Nursing home animals don’t understand a thing either, yet they provide a measurable medical benefit to patients, as well as a general improvement in mood and quality of life. Why shouldn’t we make robots that do that as well?
I end with a story. My grandfather talks to his cat, Mickey, constantly. Mickey is not the brightest of cats and has pretty much one emotion, which is a mix of hunger and disdain. My grandfather is quite aware that Mickey is neither aware of nor concerned with the fact that there is no more beer in the fridge or that the internet is infuriating. But my grandfather loves telling these thoughts to his cantankerous feline companion. He simply likes to express his thoughts vocally; Mickey is his living, breathing diary – no response is expected, no worry needed for Mickey’s opinion or mounting frustration with the tedium of petty complaints and pedantic observations.
My family and I visit my grandfather and grandmother regularly. We love chatting it up, contributing our opinions, arguing, and generally communicating with one another in vigorous discussion. My grandfather always seems to get the first and last word, as is his wont. He is not lacking for human companionship by any stretch of the imagination.
But, Mickey is different. Mickey makes my grandfather happy because he listens. He is an un-judging ear and a comically loyal companion. Mickey would pass the Turkle test. Were I to pick the cat up, pop the hatch on his side, and reveal that Mickey is, in fact, a robo-cat, I don’t think my grandfather would give a damn. And that says something vital.