How to Train Your Robot with Brain Oops Signals

By Jeremy Hsu | March 6, 2017 4:03 pm
A system that interprets brain signals enables human operators to correct the robot’s choice in real-time. Credit: Jason Dorfman, MIT CSAIL

Baxter the robot can tell the difference between right and wrong actions without its human handlers ever consciously giving a command or even speaking a word. The robot’s learning success relies upon a system that interprets the human brain’s “oops” signals to let Baxter know if a mistake has been made.

The new twist on training robots comes from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University. Researchers have long known that the human brain generates certain error-related signals when it notices a mistake. They created machine-learning software that can recognize and classify those brain oops signals from individual human volunteers within 10 to 30 milliseconds, creating instant feedback for Baxter as the robot sorted paint cans and wire spools into two different bins in front of the humans.
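To make the idea concrete, here is a minimal Python sketch of that classification step. It is not the researchers' actual pipeline: the synthetic "EEG windows," the modeled negative deflection, and the simple nearest-class-mean classifier are all illustrative assumptions standing in for real electrode data and the paper's more sophisticated methods.

```python
import random

random.seed(0)

def make_window(has_errp, n_samples=48):
    """Synthetic stand-in for a short post-action EEG window: an
    error-related ('oops') signal is modeled as a small negative
    deflection in the middle of the window, plus noise."""
    return [(-0.8 if has_errp and 10 <= i <= 20 else 0.0)
            + random.gauss(0, 0.3) for i in range(n_samples)]

def mean(xs):
    return sum(xs) / len(xs)

def train(windows, labels):
    """Nearest-class-mean classifier: average the labeled training
    windows of each class into a per-class template."""
    err = [w for w, y in zip(windows, labels) if y]
    ok = [w for w, y in zip(windows, labels) if not y]
    tmpl_err = [mean(col) for col in zip(*err)]
    tmpl_ok = [mean(col) for col in zip(*ok)]
    return tmpl_err, tmpl_ok

def classify(window, templates):
    """Label a fresh window by whichever template it sits closer to."""
    tmpl_err, tmpl_ok = templates
    d_err = sum((a - b) ** 2 for a, b in zip(window, tmpl_err))
    d_ok = sum((a - b) ** 2 for a, b in zip(window, tmpl_ok))
    return d_err < d_ok  # True -> "oops" signal detected

# Calibration: labeled example windows from one volunteer.
train_x = [make_window(i % 2 == 0) for i in range(100)]
train_y = [i % 2 == 0 for i in range(100)]
templates = train(train_x, train_y)
```

Once the templates are built, calling `classify(make_window(True), templates)` scores a new window in a fraction of a millisecond, which is what makes the 10-to-30-millisecond feedback budget plausible.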

“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” said Daniela Rus, director of CSAIL at MIT, in a press release. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven’t even invented yet.”

The human volunteers wore electroencephalography (EEG) caps that detect those oops signals when the wearer sees Baxter the robot making a mistake. Each volunteer first underwent a short training session in which the machine-learning software learned to recognize that person’s specific “oops” signals. Once that was completed, the system could start giving Baxter instant feedback on whether each human handler approved or disapproved of the robot’s actions.
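The live feedback loop itself can be sketched in a few lines. This is a hypothetical illustration, not MIT's code: the threshold-based `detect_errp` stub, the idealized observer, and the bin names are all assumptions, but the control flow mirrors what the article describes, with the robot committing to a choice and a detected "oops" making it switch bins in real time.

```python
import random

random.seed(7)

BINS = ("paint bin", "wire bin")

def detect_errp(eeg_window):
    """Stand-in for the trained per-volunteer classifier: here we
    simply threshold the window's mean amplitude."""
    return sum(eeg_window) / len(eeg_window) < -0.3

def observer_eeg(robot_was_wrong):
    """Idealized human EEG response: a clean negative deflection
    appears only when the robot picked the wrong bin."""
    return [-0.8 if robot_was_wrong else 0.0] * 20

def sort_trial(correct_bin):
    """One trial: the robot guesses a bin, the human watches, and a
    detected 'oops' signal makes the robot switch to the other bin."""
    choice = random.choice(BINS)               # robot's initial guess
    window = observer_eeg(choice != correct_bin)
    if detect_errp(window):                    # brain said "oops"
        choice = BINS[1 - BINS.index(choice)]  # correct in real time
    return choice
```

With this idealized observer, `sort_trial("paint bin")` always ends on the paint bin, whatever the robot first guessed; the real system's accuracy depends on how reliably `detect_errp` reads noisy EEG.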

The system is still far from perfect; in real-time use it does not yet reach even 90 percent accuracy. But the researchers seem confident based on the early trials.

The MIT and Boston University researchers also discovered that they could improve the system’s offline performance by focusing on stronger oops signals that the brain generates when it notices so-called “secondary errors.” These errors came up when the system misclassified the human brain signals, either by falsely detecting an oops signal when the robot was making the correct choice, or by failing to detect the initial oops signal when the robot was making the wrong choice.

By incorporating the oops signals from secondary errors, researchers succeeded in boosting the system’s overall performance by almost 20 percent. The system cannot yet process the oops signals from secondary errors in actual live training sessions with Baxter. But once it can, researchers expect to boost the overall system accuracy beyond 90 percent.
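The secondary-error logic amounts to a second round of the same feedback rule, applied to the system's own mistakes. The decision rule below is a hypothetical reading of that idea, not the paper's algorithm: a primary "oops" flips the robot's choice, and a secondary "oops" signals that the primary classification itself was wrong (a false alarm or a miss), so the flip is undone or applied after all. `True`/`False` stand in for the two bins.

```python
def apply_feedback(robot_choice, primary_errp, secondary_errp):
    """Combine primary and secondary 'oops' detections into a final
    choice. robot_choice is the robot's initial pick; each flag says
    whether the classifier detected an error signal at that stage."""
    # Primary feedback: a detected "oops" flips the robot's choice.
    choice = not robot_choice if primary_errp else robot_choice
    if secondary_errp:
        # The human objected to what the system just did, so the
        # primary classification erred: flip again to undo (or make)
        # the correction.
        choice = not choice
    return choice
```

For example, a false alarm (primary detected, secondary detected) returns the robot's original choice, while a miss (no primary, secondary detected) applies the correction the primary stage failed to make.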

The research also stands out because it showed how people who had never tried the EEG caps before could still learn to train Baxter the robot without much trouble. That bodes well for the possibility of humans intuitively relying on EEG to train their future robot cars, robot humanoids or similar robotic systems. (The study is detailed in a paper recently accepted to the IEEE International Conference on Robotics and Automation (ICRA), scheduled to take place in Singapore this May.)

Such lab experiments may still seem like a far cry from future human customers instantaneously correcting their household robots or robot car chauffeurs. But this could become a more practical approach for real-world robot training as researchers improve the system’s accuracy and EEG cap technology becomes more user-friendly outside of lab settings. Next up for the researchers: using the oops system to train Baxter to make the right choice in multiple-choice situations.



Lovesick Cyborg

Lovesick Cyborg examines how technology shapes our human experience of the world on both an emotional and physical level. I’ll focus on stories such as why audiences loved or hated Hollywood’s digital resurrection of fallen actors, how soldiers interact with battlefield robots and the capability of music fans to idolize virtual pop stars. Other stories might include the experience of using an advanced prosthetic limb, whether or not people trust driverless cars with their lives, and how virtual reality headsets or 3-D film technology can make some people physically ill.

About Jeremy Hsu

Jeremy Hsu is a journalist who writes about science and technology for Scientific American, Popular Science, IEEE Spectrum and other publications. He received a master’s degree in journalism through the Science, Health and Environmental Reporting Program at NYU and currently lives in Brooklyn. His side interests include an ongoing fascination with the history of science and technology and military history.

