We Trust Technology That Talks to Us

By Elizabeth Preston | April 15, 2014 9:11 am


Siri doesn’t need your love (sorry). But she does need your trust. At least, she does if you’re going to use her in the way Apple intends. For us to make artificially intelligent technologies like smartphones and self-driving cars a part of our routines, we have to be willing to turn over important parts of our lives to them—like our calendars, or our actual lives. Now a study suggests that having a voice and a name is all it takes for a computer to gain that trust.

Adam Waytz, a psychologist at Northwestern University’s Kellogg School of Management, tested people’s trust of technology using a driving simulator named Iris. (Yes, as in “Siri” backward. It was, Waytz says, “the idea of my much cleverer co-author Nick Epley” at the University of Chicago.) Iris’s voice was provided by their colleague Heather Caruso, who happens to do a good computer impression. But, Waytz adds, “I would be interested in testing whether male versus female voices might produce different effects.”

Waytz recruited 100 adults to take the driving simulator for a spin. One group of them used the simulator on a normal setting, driving it just like a real car. For another group, the simulator had autonomous (self-driving) features that they could turn on by pressing buttons on the steering wheel. The final group of subjects was introduced to the simulator as “Iris.” Its self-driving features were the same as for the second group of subjects, but now Iris spoke to subjects directly, giving explanations that had come from an experimenter in the other group. (“Hello, I’m Iris. I can control the gas, brakes, and steering.”)

After taking a test spin to familiarize themselves, subjects drove the simulators through two six-minute sessions. At the end of the first session, they filled out a questionnaire about how much they liked the vehicle, how anthropomorphic or person-like it was (“How smart does this car seem? How well do you think this car could plan the best route available?”), and how much they trusted it.

That trust was then challenged in the second session, when everyone was forced into an accident that wasn’t their fault (being sideswiped by another car on the highway). Subjects answered more questions after the accident. Heart rate monitors also collected data while people were driving, and the researchers analyzed video of each subject afterward to see how startled they were in the collision.

Unsurprisingly, subjects rated “Iris” as more anthropomorphic than the unnamed and voiceless self-driving car—which was still seen as more anthropomorphic than a regular car. They also liked both autonomous cars better than the normal car. And their questionnaire answers, combined with more relaxed body language on video and calmer heart rates, showed that they trusted Iris more than either other car.

“We might come up with all sorts of nervous thoughts” when we imagine taking a self-driving car on the road, Waytz says. “But when people were actually in the car, they felt relaxed because the responsibilities of driving were no longer up to them.” He thinks Iris put people at ease more than the voiceless car because “being in the autonomous car that has a personified voice, name, and gender feels like having a competent pilot, capable of steering you in the right direction.” (Though the simulator was very realistic, Waytz notes, it’s possible people would feel different on a real road.)

When they got in an accident, people in the normal cars didn’t blame their vehicles. People in autonomous cars did assign some blame to the car. But Iris received less blame than the un-personified autonomous car did.

“[By] personifying the car,” Waytz says, “we were able to get people in our study to trust the car to a greater degree than when we stripped the car of these anthropomorphic features.” He thinks this knowledge can help the designers of self-driving cars make them more appealing. “I think the most important thing is to match the features of a car with the driver’s expectations,” he says. “When people start using autonomous cars regularly, they will expect some degree of competence and intelligence from the car.” Some personality, you might say.

Giving a computer a name and a voice can apparently trick us into trusting it more. But Waytz doesn’t find anything sinister in that. “We cannot stop the future of technology [from] becoming more and more humanlike,” he says. “The only thing to do is to increase people’s degree of comfort with humanized technology.” Just maybe not so much that we’re professing our love to it.

Image: by Jeffrey Putman (via Flickr)

Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. DOI: 10.1016/j.jesp.2014.01.005


Like the wily and many-armed cephalopod, Inkfish reaches into the far corners of science news and brings you back surprises (and the occasional sea creature). The ink is virtual but the research is real.

About Elizabeth Preston

Elizabeth Preston is a science writer whose articles have appeared in publications including Slate, Nautilus, and National Geographic. She's also the former editor of the children's science magazine Muse, where she still writes in the voice of a know-it-all bovine. She lives in Massachusetts. Read more and see her other writing here.

