Forget Bans: UN Stuck on Defining Killer Robots

By Jeremy Hsu | December 18, 2017 10:28 pm
An unmanned military robot rolls out of a U.S. Marine amphibious vehicle during the Ship-to-Shore Maneuver Exploration and Experimentation Advanced Naval Technology Exercise 2017 at Marine Corps Base Camp Pendleton, California. Credit: Lance Cpl. Jamie Arzola

A United Nations meeting on lethal autonomous weapons ended in disappointment for advocates hoping the world would make progress on regulating or banning “killer robot” technologies. The UN group of governmental experts barely scratched the surface of defining what counts as a lethal autonomous weapon. But instead of trying to craft a catch-all definition of killer robots, they might have better luck next time focusing on the role of humans in controlling such weapons.

That idea of focusing on the role of humans in warfare has been backed by a number of experts and non-governmental organizations such as the International Committee of the Red Cross. It would put the spotlight on the legal and moral responsibilities of soldiers and officers who might coordinate swarms of military drones or issue orders to a platoon of robotic tanks in the near future. And it sidesteps the pitfalls of trying to define lethal autonomous weapons while artificial intelligence and robotics continue to evolve much faster than the slow-grinding gears of a UN body that meets just once a year.

“One criticism people have made, and rightly so, is that if you craft a ban on the state of technology today, you could be wrong about the technology in the near future,” says Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security (CNAS) and author of the upcoming book “Army of None” scheduled for publication in spring 2018. “In this area of lethal autonomous weapons, you might be very wrong.”

One non-military example of how quickly AI technology outpaces regulatory discussions comes from DeepMind’s AlphaGo program. In early 2016, AlphaGo defied expert predictions from just a few years before by defeating one of the world’s best human players of the ancient board game Go. In late 2017, DeepMind unveiled AlphaZero, a more general successor to AlphaGo that taught itself chess in about four hours of self-play and went on to beat Stockfish, one of the strongest specialized chess engines.

During that same period of jaw-dropping progress in AI, the international community accomplished very little despite convening several UN meetings on lethal autonomous weapons. The latest, held in November 2017 and involving the Group of Governmental Experts on Lethal Autonomous Weapons Systems, accomplished little beyond agreeing to meet again for 10 days in 2018.

The Stumbling Block on Banning Killer Robots

A big problem for advocates looking to ban lethal autonomous weapons is that they have no support from the leading military powers most likely to deploy and use such weapons. Many leading AI researchers and Silicon Valley leaders have called for a ban. But the non-governmental organizations (NGOs) pressing the case largely lack government backing as they try to convince the world’s military giants to keep lethal autonomous weapons out of their arsenals.

“You have a cadre of NGOs basically telling major nation states—a number of great powers such as Russia, China and the United States that have all said AI will be central to the future of national security and warfare—that they can’t have these weapons,” Scharre says. “The reaction of the military powers is, ‘Of course I would use them responsibly, who are you to say?’”

This matches earlier assessments of how likely a ban on lethal autonomous weapons would be to succeed. In October 2016, the London-based think tank Chatham House held a roleplaying exercise that imagined a future scenario in which China becomes the first country to use lethal autonomous weapons in warfare. The exercise, which focused on the viewpoints of the United States, Israel and European countries, found that none of the experts roleplaying the various governments was willing to sign onto even a temporary ban on autonomous weapons.

NGOs such as the Campaign to Stop Killer Robots point out that at least 22 countries want a legally binding agreement to ban lethal autonomous weapons. But Scharre noted that none of those countries are among the major military powers developing the AI technologies needed to deploy such weapons.

Russia Says Nyet to the Ban

In fact, Russia may have already dug the proverbial grave for any killer robots ban by announcing that it would not be bound by any international ban, moratorium or regulation on lethal autonomous weapons. Journalist Patrick Tucker at Defense One described the Russian statement, which coincided with the UN meeting of governmental experts, this way:

Russia’s Nov. 10 statement amounts to a lawyerly attempt to undermine any progress toward a ban. It argues that defining “lethal autonomous robots” is too hard, not yet necessary, and a threat to legitimate technology development.

Tucker went on to cite several anonymous experts in attendance who complained that the five-day meeting barely touched on the fundamental step of defining lethal autonomous weapons.

Finding common ground on definitions of killer robots may seem like basic stuff, but it is necessary to ensure that governmental representatives are not simply talking past one another. “One person might be envisioning a Roomba with a gun on it, another person might be envisioning the Terminator,” Scharre says.

How Killer Robots Could Change Human Soldiers

Major military powers such as Russia and the United States may find it easier to agree upon the obligations and responsibilities of the humans issuing orders to future swarms of autonomous weapons. But potential pitfalls remain even if they succeed there. One of the biggest challenges is that military leaders or individual soldiers could come to feel less responsible for their actions after unleashing autonomous swarms upon the battlefields of tomorrow.

“The thing that worries me is what if we get to the point where humans are accountable, but the humans don’t actually feel like they’re the ones doing the killing and making decisions anymore?” Scharre says.

The heart of the military profession is making decisions about the use of force. As a former U.S. Army Ranger, Scharre expressed concern that lethal autonomous weapons could widen the psychological distance between a soldier’s sense of individual responsibility and the act of using a potentially lethal weapon. Yet he noted that very little has been written about these future implications for military professional ethics.

In other words, the world could eventually clarify the legal framework for how humans bear moral and legal responsibility when wielding lethal autonomous weapons in warfare. But the rise of killer robots may still lead military leaders and individual soldiers to feel less empathy and restraint toward the people on the receiving end of such weapons, and make it easier for them to forget their moral and legal responsibilities.

“I think technology has forced upon us a fundamental question of the human role in the lethal decision making in war,” Scharre says.

  • OWilson

    People kill people, not guns, bombs, or robots.

    The U.N. gave up trying to end Wars and Terrorism years ago.

    They now keep themselves busy (and in the Diplomatic Level of Luxury they are used to) with useless meetings such as this.

    And by trying to forecast the weather, a hundred years from now! :)

    • http://www.mazepath.com/uncleal/qz4.htm Uncle Al

      How can we differentiate between a corrupt, bloated, self-serving, closed-loop bureaucracy and the UN? Studies!!!
      DOI:10.1016/j.jflm.2017.07.014

      • OWilson

        They love to complicate things to seduce the low info voter!

        They would not dare come out with a U.N. Ban on all Weapons Of War, as a solution to End All Wars.

        (Take the gunslinging cops off your city streets and see what happens!) :)

  • Mark Gubrud

    You start off by saying that advocates for a ban were disappointed by the first meeting of the GGE, but you apparently did not ask any of us what we actually think. This claim that Russia is the showstopper is utterly false. Russia’s policy is following that of the US, which since 2012 has been fully committed to developing, deploying and using autonomous weapons. Paul Scharre wrote the US policy and can hardly speak for the opposition to killer robots, even if he does acknowledge many of our points. It is also false that no states support the ban. In fact, a majority of states at the GGE supported some kind of legally-binding measure, in opposition to the position of the US and Russia. At least 22 of them support a ban, or equivalently, a requirement for meaningful human control. Definitions are not the problem. Political will is the problem.

  • anon

    Jeremy, the Campaign to Stop Killer Robots would be happy to talk to you directly & provide our review of the recent UN meeting. If you have any questions, please contact Mary Wareham, who coordinates this global coalition of non-governmental organizations.

  • http://www.mazepath.com/uncleal/qz4.htm Uncle Al

    Arguably the first technological killer robot was proposed by actress Hedy Lamarr as frequency-hopping spread spectrum targeting communication, patented in 1941. Where is the outrage?


Lovesick Cyborg

Lovesick Cyborg examines how technology shapes our human experience of the world on both an emotional and physical level. I’ll focus on stories such as why audiences loved or hated Hollywood’s digital resurrection of fallen actors, how soldiers interact with battlefield robots and the capability of music fans to idolize virtual pop stars. Other stories might include the experience of using an advanced prosthetic limb, whether or not people trust driverless cars with their lives, and how virtual reality headsets or 3-D film technology can make some people physically ill.

About Jeremy Hsu

Jeremy Hsu is a journalist who writes about science and technology for Scientific American, Popular Science, IEEE Spectrum and other publications. He received a master’s degree in journalism through the Science, Health and Environmental Reporting Program at NYU and currently lives in Brooklyn. His side interests include an ongoing fascination with the history of science and technology and with military history.
