Boycott Threat Terminated ‘Killer Robot’ Project

By Jeremy Hsu | May 6, 2018 2:58 pm
A fictional killer robot in the film “Terminator 2: Judgment Day.” In reality, researchers are less worried about a robot uprising and more concerned about how the development of lethal autonomous weapons would take the responsibility for killing out of human hands. Credit: Studio Canal | Carolco Pictures

Notable tech leaders and scientists have signed open letters calling for a ban on lethal autonomous weapons powered by artificial intelligence technologies. But a group of AI researchers recently went a step further by using the threat of a boycott to discourage a university from developing so-called killer robot technologies.

It all began in late February when a Korea Times article reported on a leading South Korean defense company teaming up with a public research university to develop military AI weapons capable of operating without human supervision. By March, a group of more than 50 AI researchers from 30 countries had signed an open letter addressed to KAIST, the South Korean university involved in the AI weapons project, that declared the signatories would boycott any research collaborations with the university.

“It is very important for academics who develop the science behind AI that this science be used for the good of humanity,” said Yoshua Bengio, a professor of computer science at the University of Montreal in Canada and a pioneer in deep learning research. “In this case, it was about a university—and a major one—making a deal for potentially developing ‘killer robots.'”

The Korea Times article described the joint project between KAIST (formerly the Korea Advanced Institute of Science and Technology) and the defense company Hanwha Systems as having the goal of developing “artificial intelligence (AI) technologies to be applied to military weapons” that could include an “AI-based missile,” “AI-equipped unmanned submarines” and “armed quadcopters.” The article also defined such autonomous weapons as being capable of seeking and eliminating targets without human control.

Major military powers such as the United States, China and Russia have been developing AI technologies that could lead to lethal autonomous weapons. Some existing autonomous weapons can already automatically track and fire upon targets, including sentry gun turrets or weapons designed to shoot down incoming missiles or aircraft. However, militaries use these in a defensive capacity and typically keep a human “in the loop” to make the final decision about unleashing such weapons against targets.

Many AI and robotics researchers hope to discourage widespread development of killer robots or drones that shoot to kill without requiring a human to give the order. Non-governmental organizations have also organized to ban lethal autonomous weapons through the aptly named Campaign to Stop Killer Robots. Even the United Nations has been holding a series of annual meetings to discuss lethal autonomous weapons.

A 2015 campaign spearheaded by MIT physicist Max Tegmark and the Future of Life Institute called for a “ban on offensive autonomous weapons beyond meaningful human control.” That 2015 open letter received backing from science and tech luminaries such as SpaceX and Tesla founder Elon Musk, Apple co-founder Steve Wozniak, and the late physicist Stephen Hawking. It has also been followed up by subsequent open letters and campaigns.

Organizing Resistance Against Killer Robots

Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia, has been active in much of the research community’s resistance against the development of lethal autonomous weapons. So it was no surprise that Walsh once again sprang into action when news emerged of the KAIST-Hanwha project to develop AI-driven weapons.

“After being alerted to the Korea Times article on ‘Project launched at Korean institute to develop AI weapons,’ Toby Walsh (in his individual capacity) and the Campaign to Stop Killer Robots both wrote to the President of KAIST, but as far as I know did not receive any reply,” said Stuart Russell, a professor of computer science and AI researcher at the University of California, Berkeley. “Toby Walsh, after waiting two weeks, proposed the boycott and asked a number of researchers to sign the letter.”

What was different this time was the call for AI researchers to single out a specific university because of its efforts to develop lethal autonomous weapons technologies. The AI research community’s expertise became a bargaining tool regarding future collaborations with the university.

“At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons,” according to the open letter. “We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control.”

Taking Responsibility for AI’s Use

Beyond signing the open letter, Russell wrote a personal letter to Steve Kang, a former president of KAIST, in an effort to find out additional information about KAIST’s collaboration with Hanwha Systems. Russell clarified that Kang apparently had no prior knowledge of the Hanwha contract, because the KAIST-Hanwha agreement had been formalized after Kang’s tenure as president of KAIST.

Russell admitted that he normally has “misgivings” about boycotts. But in this case, he saw the proposed boycott regarding KAIST as “appropriate,” because the AI research community has already publicly demonstrated broad agreement in taking a stand against lethal autonomous weapons.

Bengio at the University of Montreal also supported the proposed research boycott of KAIST by signing the open letter addressed to the South Korean university. He had previously supported the 2015 open letter pushing for a ban on lethal autonomous weapons, and had also signed an additional letter addressed specifically to the prime minister of Canada in the fall of 2017.

“Machines now and in the foreseeable future don’t have any understanding of moral values, of psychology and social questions,” Bengio explained. “So when people’s lives, well-being or dignity is at stake, a human should be in the loop in a significant way, i.e., we cannot automate killing or the decision to keep a person in jail or not.”

Both Bengio and Russell agreed that researchers bear responsibility for guiding the development and use of AI technologies in an ethical manner.

“It’s absolutely our responsibility, just as doctors have a strict policy against participating in executions,” Russell said. “I do not agree with those who say that scientists should stick to science, and leave all the political matters to politicians.”

Keeping an Eye on the Future

KAIST quickly responded to the open letter from the international research community. By early April, KAIST President Sung-chul Shin had put out a statement to allay researchers’ concerns: “KAIST does not have any intention to engage in the development of lethal autonomous weapons system and killer robots.” In acknowledgement, the researchers called off the proposed boycott.

The apparent success of the boycott threat could inspire future campaigns to follow in its footsteps, and many universities would likely think twice about pursuing similar research projects that could lead to lethal autonomous weapons. Still, Russell noted that the AI research community would have to remain watchful.

“Short of repudiating the contract itself, this was the best we could hope for,” Russell said. “Many of the signatories have suggested that we will need to keep an eye on the work done at KAIST to see if the President’s declaration has meaning.”

Similarly, Bengio declared himself satisfied with KAIST’s response for now. He also seemed favorable toward the idea of using research boycotts in the future, if necessary.

“Even if it does not work all the time, it’s worth doing it,” Bengio said. “The most important side effect is to educate people, governments and organizations about the moral aspects of the use (or mis-use) of AI.”

Curious about killer robots in science fiction? I previously speculated on why the Galactic Empire and the First Order in the Star Wars films seem to shun lethal autonomous weapons.


Lovesick Cyborg

Lovesick Cyborg examines how technology shapes our human experience of the world on both an emotional and physical level. I’ll focus on stories such as why audiences loved or hated Hollywood’s digital resurrection of fallen actors, how soldiers interact with battlefield robots and the capability of music fans to idolize virtual pop stars. Other stories might include the experience of using an advanced prosthetic limb, whether or not people trust driverless cars with their lives, and how virtual reality headsets or 3-D film technology can make some people physically ill.

About Jeremy Hsu

Jeremy Hsu is a journalist who writes about science and technology for Scientific American, Popular Science, IEEE Spectrum and other publications. He received a master’s degree in journalism through the Science, Health and Environmental Reporting Program at NYU and currently lives in Brooklyn. His side interests include an ongoing fascination with the history of science and technology and military history.
