Siri doesn’t need your love (sorry). But she does need your trust. At least, she does if you’re going to use her in the way Apple intends. For us to make artificially intelligent technologies like smartphones and self-driving cars a part of our routines, we have to be willing to turn over important parts of our lives to them—like our calendars, or our actual lives. Now a study suggests that having a voice and a name is all it takes for a computer to gain that trust.
Adam Waytz, a psychologist at Northwestern University’s Kellogg School of Management, tested people’s trust of technology using a driving simulator named Iris. (Yes, as in “Siri” backward. It was, Waytz says, “the idea of my much cleverer co-author Nick Epley” at the University of Chicago.) Iris’s voice was provided by their colleague Heather Caruso, who happens to do a good computer impression. But, Waytz adds, “I would be interested in testing whether male versus female voices might produce different effects.”
Waytz recruited 100 adults to take the driving simulator for a spin. One group of them used the simulator on a normal setting, driving it just like a real car. For another group, the simulator had autonomous (self-driving) features that they could turn on by pressing buttons on the steering wheel. The final group of subjects was introduced to the simulator as “Iris.” Its self-driving features were the same as in the second condition, but now Iris delivered directly to subjects the explanations that an experimenter had given in that group. (“Hello, I’m Iris. I can control the gas, brakes, and steering.”)
After taking a test spin to familiarize themselves, subjects drove the simulators through two six-minute sessions. At the end of the first session, they filled out a questionnaire about how much they liked the vehicle, how anthropomorphic or person-like it was (“How smart does this car seem? How well do you think this car could plan the best route available?”), and how much they trusted it.
That trust was then challenged in the second session, when everyone was forced into an accident that wasn’t their fault (being sideswiped by another car on the highway). Subjects answered more questions after the accident. Heart rate monitors also collected data while people were driving, and the researchers analyzed video of each subject afterward to see how startled they were in the collision.
Unsurprisingly, subjects rated “Iris” as more anthropomorphic than the unnamed and voiceless self-driving car—which was still seen as more anthropomorphic than a regular car. They also liked both autonomous cars better than the normal car. And their questionnaire answers, combined with their more relaxed body language on video and their steadier heart rates, showed that they trusted Iris more than either other car.
“We might come up with all sorts of nervous thoughts” when we imagine taking a self-driving car on the road, Waytz says. “But when people were actually in the car, they felt relaxed because the responsibilities of driving were no longer up to them.” He thinks Iris put people at ease more than the voiceless car because “being in the autonomous car that has a personified voice, name, and gender feels like having a competent pilot, capable of steering you in the right direction.” (Though the simulator was very realistic, Waytz notes, it’s possible people would feel differently on a real road.)
When they got in an accident, people in the normal cars didn’t blame their vehicles. People in autonomous cars did assign some blame to the car. But Iris received less blame than the un-personified autonomous car did.
“[By] personifying the car,” Waytz says, “we were able to get people in our study to trust the car to a greater degree than when we stripped the car of these anthropomorphic features.” He thinks this knowledge can help the designers of self-driving cars make them more appealing. “I think the most important thing is to match the features of a car with the driver’s expectations,” he says. “When people start using autonomous cars regularly, they will expect some degree of competence and intelligence from the car.” Some personality, you might say.
Giving a computer a name and a voice can apparently trick us into trusting it more. But Waytz doesn’t find anything sinister in that. “We cannot stop the future of technology [from] becoming more and more humanlike,” he says. “The only thing to do is to increase people’s degree of comfort with humanized technology.” Just maybe not so much that we’re professing our love to it.
Image: Jeffrey Putman (via Flickr)
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. DOI: 10.1016/j.jesp.2014.01.005