Apple’s secretive project aimed at testing self-driving cars seems worlds apart from the tech giant’s usual business of smartphones and tablets. But the idea of a robotic “iCar” raises an intriguing possibility: What if self-driving cars were represented by virtual assistants similar to Apple’s Siri for the iPhone and iPad? A version of Siri for self-driving cars might even adopt a familiar virtual face on a display screen to win the trust of potential human owners.
Getting people to trust in a future world filled with self-driving cars could go a long way toward making commutes both safer and faster. The promise of robot cars has driven both traditional automakers and tech giants such as Google and China’s Baidu to develop their own versions of the technology. Even Apple apparently has “hundreds of engineers” working on automotive technologies related to self-driving cars, according to documents unearthed by The Guardian. If such companies ever decide to create “virtual drivers” as the faces of self-driving cars, a recent study suggests that having a familiar virtual face could help put human riders at ease.
“We think that the most prominent ‘bump’ in the road to successful implementation of smart cars is not the technology itself but, rather, the acceptance of that technology by the public,” said Frank Verberne, a behavioral scientist at Eindhoven University of Technology in the Netherlands, in a press release. “Representing such complex automation technology with something that humans are familiar with—namely, a human behind the wheel—may cause it to become less of a ‘black box.’”
A Familiar Face Behind the Wheel
Many people still don’t trust self-driving cars with their lives or the lives of loved ones. In 2014, more than 75 percent of people surveyed by insurance.com said they would not trust a self-driving car to take their kids to school. The same survey showed only 31 percent of the respondents were willing to let the car drive itself whenever possible. Verberne and his colleagues decided to examine how familiar virtual faces representing self-driving cars could affect the trust people had in the technology.
The study investigated the idea that a familiar virtual face based on the specific person going along for the ride would prove more trustworthy than a dissimilar virtual face. Results suggested that such familiar faces did end up boosting people’s trust in their “virtual driver,” as detailed in the Aug. 15 issue of the journal Human Factors. That’s a potentially important finding if Apple or Microsoft ever decide to flesh out Siri or Cortana as virtual driving assistants. But unlike Siri, the virtual driver in this study remained voiceless.
The Dutch researchers created a virtual driving assistant named “Bob” with a digitally created face. Part of Bob’s face was based on a default male face; the other part was tailored to look like each individual study participant based on his or her facial features. (There was no female version of Bob.) A total of 111 licensed drivers—split evenly between men and women—took part in the study.
Bob acted as virtual driver while study participants sat in a driving simulator. Half of the participants got to ride along with a Bob who resembled them, whereas the other half of the participants got a Bob with a dissimilar face. The familiar Bob also behaved in two additional ways calculated to help increase trust. First, he mimicked the head movements of participants, with a four-second delay to avoid any creepiness. Second, Bob displayed the same driving goals as the study participant on a computer screen. (Participants were asked to rank their goals in terms of comfort, energy efficiency and speed.)
The Measure of Trust
In the end, participants rated the familiar Bob as more trustworthy than the dissimilar Bob during driving scenarios leading up to road obstacles such as shallow or sharp turns, a traffic jam, a red traffic light or a fallen tree on the road. But the study’s limitations still leave many unanswered questions about how a virtual driver might work in reality, the researchers noted.
One limitation of the study came from the fact that the driving scenarios all stopped just before the critical moment of dealing with the road obstacle. It’s possible that dissimilar Bob could have earned the same level of trust as familiar Bob if he had shown participants that he could successfully navigate such obstacles. But in this case, Verberne and his colleagues intentionally stopped short so that they could focus on measuring trust in the midst of uncertainty.
The three types of similarity in the familiar Bob—face, head movement and shared driving goals—did not seem to add up to more overall trust compared with previous studies that tested just one type of similarity. But having a virtual driving assistant with as many similarities as possible might still appeal to different self-driving car owners who subconsciously value one type of similarity over the others.
Making Siri for Your Robot Car
Such research represents just a first step toward understanding how a virtual driver might make self-driving cars seem friendlier. For example, the study did not directly test whether a self-driving car represented by a virtual assistant earns more human trust than a silent, faceless robot car. It’s also possible that a faceless virtual driver with a likable voice and winning personality could do the trick; imagine the voice of the robot car KITT from the 1980s TV show “Knight Rider” or the operating system “Samantha” voiced by Scarlett Johansson in the 2013 film “Her.”
A virtual driving assistant with both a friendly face and voice might seem like the obvious end goal. But researchers may still have to tread carefully in finding the right combination. In a past study, Verberne discovered that combining an artificial-sounding voice with a face similar to the human owner could actually creep people out. He explained in an email:
I have done one study in which the face was voiced (with an artificial computer voice), but there was a negative effect of voice on trust when the face was similar to the participant. So I concluded that using an artificial voice alongside a similar face can backfire in generating trust. Using a voiced virtual assistant could work; however, I don’t know what factors make a voice trustworthy.
In any case, automakers have already spent decades carefully crafting the exterior “faces” of their cars through their headlights, hoods and grilles. Even Google designed its first custom-built robot car to look like a cuddly koala so that it appears less threatening to nearby drivers, cyclists and pedestrians. A virtual face and personality for a self-driving car may simply represent the next logical step in helping humans fall in love with the next generation of automobiles.