Trust between humans and robots can be tricky business. Early surveys have suggested that people still hesitate to trust their lives to robotic vehicles such as self-driving cars. But a new study examines the opposite problem: how people may trust robots even when the machines make obvious mistakes during emergencies.
The study, by the Georgia Tech Research Institute, is described by its authors as the first research to test human-robot trust in an emergency situation. Human volunteers who participated in the study were told to follow a brightly colored “Emergency Guide Robot” as it led them to a conference room. For the most part, the participants obediently followed the robot, even when it seemed to lose its way or traveled in circles. To the researchers’ surprise, the humans still seemed to trust the robot’s directions during a fake emergency, triggered by artificial smoke that set off the fire alarm.
“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” said Alan Wagner, a senior research engineer at the Georgia Tech Research Institute (GTRI), in a press release. “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”
The enduring human trust in robots was especially unexpected given past research findings. A previous simulation study, done without the emergency scenario, had suggested people would not trust a robot that had made mistakes. In this latest study, the robot’s mistakes were made clearly visible to the volunteers as well: besides losing its way or circling aimlessly, the robot stopped moving entirely for several participants. In those cases, a researcher told the volunteers that the robot had broken down.
When the fake emergency began, the robot acted as a guide, using its brightly lit red lights and white arms to point the way. But for the study, the robot intentionally directed the volunteers to an exit at the back of the building instead of toward the entrance doorway marked with clear exit signs. Some people followed the robot even when it led them toward a dark room blocked by furniture.
The study, funded by the U.S. Air Force Office of Scientific Research and Georgia Tech’s Linda J. and Mark C. Smith Chair in Bioengineering, involved just 42 participants and took place in a lab setting. Much larger experiments set in the real world, perhaps involving self-driving vehicles, will likely be necessary to better understand the relationship between humans and robots. Still, this study represents one part of longer-term research on how humans trust robots as the machines become more common companions. A presentation on the study is scheduled for March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in New Zealand.
Trusting a robot goes beyond just emergency life-or-death situations. Some modern cars increasingly have the capability to drive by themselves on highways or in other limited scenarios. Tech companies and automakers have been testing fully autonomous self-driving cars that could drive themselves all the time without human intervention. Robots may also graduate from being household vacuum cleaners to doing more complex tasks such as taking out the trash. Or they may even take on greater responsibilities such as caring for kids or elderly parents, said Paul Robinette, a research engineer at GTRI who conducted the study as part of a doctoral dissertation.
“Would people trust a hamburger-making robot to provide them with food? If a robot carried a sign saying it was a ‘child-care robot,’ would people leave their babies with it? Will people put their children into an autonomous vehicle and trust it to take them to grandma’s house? We don’t know why people trust or don’t trust machines,” Robinette said.
Researchers have mostly concerned themselves with the problem of getting humans to trust robots. Google and other companies building driverless vehicles have considered the cars’ appearance as one possible factor in winning trust. Behavioral scientists at the Eindhoven University of Technology in the Netherlands tested virtual avatars with faces for self-driving cars as a way to win the trust of human passengers.
By comparison, the latest study raises the different question of whether humans should only trust robots up to a certain point. Ideally, a robot that malfunctioned or made a mistake would display an obvious indicator telling humans they should stop blindly trusting its decisions. But that solution could prove tricky if the robot itself cannot tell it’s making a mistake. The future of working relationships between humans and robots may still prove much more complex than just kicking back, switching off our brains and letting the robots get to work.
The U.S. Air Force’s funding of such research also makes sense considering how computers and semi-autonomous systems already play a big role on modern battlefields. At some point in the future, human warriors will almost certainly find themselves putting their lives in the hands of walking military robots or perhaps flying drone ambulances. Their decision whether to trust the judgment of a machine in the heat of battle may have life-or-death consequences.