
The Rise of “Killer Robots” and the Race to Restrain Them

Truly autonomous lethal weapons don't exist yet, but that may not be true for long unless regulations are passed

Key Takeaways

  • Unlike the Terminator-like humanoids that may come to mind when you hear the phrase ‘killer robots’, lethal autonomous weapon systems (LAWS) can take many forms that look more like the weapons deployed today, such as drones, tank turrets, submarines, missile-defense systems, and sentry robots. The common thread pulling LAWS together is lethal autonomy: the ability of these weapons to select and engage targets, whether people or military objects, without human help.
  • With AI and robotics technology rapidly advancing, experts are warning that so-called ‘killer robots’ (or more formally, LAWS) are just years, not decades, away from the battlefield. To date no country has deployed LAWS to the battlefield, although at least a dozen are developing them and the technology is advanced enough to do so.
  • The US military is a world leader in the technical as well as ethical and legal development of killer robots. In 2012, it issued Directive 3000.09, codifying the rule that people hold the reins and retain meaningful control over the use of LAWS. Then, earlier this year, the Department of Defense adopted a set of five ethical principles to guide the development and use of autonomous weapon systems.
  • While the US is a leader in the area, the AI arms race is global: at least 12 nations have acknowledged that they are developing LAWS, and global spending on automated weapon systems is projected to reach $16 billion by 2025.
  • The AI arms race is accelerating because no strong state wants to tie its military’s hands by agreeing to an international ban on such a powerful technology. Despite warnings that killer robots are driving a third revolution in warfare after gunpowder and nuclear weapons, states and international institutions are struggling to agree on how to regulate LAWS, if at all. Diplomats have been meeting at the UN for six years to discuss killer robots, but little progress towards a ban, treaty, or arms control agreement has been made.

Introduction

For many people, “killer robots” bring to mind dystopian images from science fiction, such as the human-like androids of Blade Runner or Arnold Schwarzenegger’s intimidating Terminator robot. While the robots frequently portrayed in the media are still entirely fictional, real-world “killer robots” — or more formally, lethal autonomous weapon systems (LAWS) — are very much in development. LAWS are systems that can select and fire on targets without human help, and prototypes exist in a range of weapons and military vehicles, including drones, submarines, tank turrets, missile-defense systems, and stationary sentry robots. While such systems are not widely deployed yet, many governments are actively developing the technology, so it is crucial to consider its implications, and the restrictions it may require, now.

Credit: Future of Life Institute

In fact, in 2015 more than 3,000 experts signed “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”, which warned that “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.” They specifically warned of the dangers of the uncontrolled spread of LAWS — likening them to the Kalashnikovs1 of the 21st century. Researchers and diplomats have warned that LAWS pose a destabilizing threat to the international order, with five core risks:

  • lowering the threshold for conflict because fewer lives are at stake;
  • carrying out assassinations anonymously and remotely;
  • amplifying terrorist attacks;
  • subduing populations or selectively killing a particular ethnic group; and
  • locking competing algorithms into loops of conflict escalation — possibly nuclear conflict — without a person being able to break the cycle.

Despite these warnings, the peace nonprofit PAX has identified 30 global arms manufacturers that are actively developing LAWS.

Few laws exist to constrain the use of killer robots. At the international level, no law or policy deals specifically with LAWS or other automated systems. However, since this technology will be used by the military to kill, certain international rules and rights apply that restrict all weapons of war; like any weapon, these technologies are governed by the laws of war and the use of force, which state that no weapon can be used to kill civilians indiscriminately or without a clear military objective behind the strike. Experts stress, however, that LAWS may lower the threshold to conflict and cannot be guaranteed to reliably distinguish civilians from soldiers, so the laws of war and use-of-force rules do not adequately address the threat LAWS pose. A treaty or arms control agreement constraining the use of LAWS, they argue, is necessary.

Despite the growing danger of LAWS being unleashed on the battlefield, diplomats at the United Nations (UN) are still scrambling to ban LAWS or bring them under an arms control agreement. But major powers, including the US, China, and Russia, are blocking progress towards a framework of any kind. In this piece, we’ll survey the state of killer robots (or LAWS) as well as current efforts to regulate them.

What are “killer robots”?

In the Terminator movies, John Connor is on a mission to save the world from a terrible future: the US military’s AI system, Skynet, becomes self-aware, starts a global nuclear war, and then builds an army of robotic soldiers to finish killing off mankind. These robot soldiers make their evil intent clear, marching with rifles in hand and radiating malevolence from their red eyes. In real life, however, spotting a “killer robot” is not so simple.

AI’s general-purpose nature may be attractive to the military, but it also makes pinning down what exactly counts as a killer robot complicated. The common thread pulling them together is lethal autonomy, although the specific weapon is different in each case. But defining autonomy in LAWS is more complicated than it seems. Drones, for example, display levels of autonomy ranging from none (a human pilots the aircraft and selects and fires on targets), to partial (the drone flies autonomously but a person selects targets), to total independence from human operators. Plus, the term LAWS can describe a variety of possible weapon systems, from sentry robots and other self-defense systems to offensive ones like drones and robotic wingmen for fighter pilots. As Toby Walsh, professor of artificial intelligence and expert on its military applications, explained:

“When people hear ‘killer robots,’ they think Terminator, they think science fiction, they think of something that’s far away […] Instead, it’s simpler technologies that are much nearer, and that are being prototyped as we speak.”

While AI and robotics have made rapid progress over the past decade, the technology is still too limited to replicate everything a human soldier can do. Machine learning algorithms are brittle, biased, and vulnerable both to hacks and to a more subtle kind of manipulation called adversarial attacks. Likewise, robots in the field struggle to move the way soldiers do, to communicate with them by listening to commands and reading body language, and to distinguish between enemy soldiers and civilians. What machines are good at is automating specific tasks a soldier might do. Keeping watch, collecting and analyzing information, monitoring the battlefield, driving a tank, or targeting and shooting at people — these are the areas where AI is making its impact on war.

Just in the US, for example, nearly every branch of the military is developing LAWS:

  • The Army is building a 50-mm turret called Advanced Targeting and Lethality Automated System (ATLAS) to put on tanks. ATLAS can autonomously detect targets, determine if they are hostile, and train its cannon on them with superhuman speed and accuracy.
  • The Navy has installed an automated version of its Aegis missile system that reacts faster than humans can to shoot down incoming missiles or manned aircraft. The branch has also sailed a ship from California to Hawaii and back without human help.
  • The Air Force is a pioneer in developing lethal autonomy in drones. It has flown an autonomous version of the infamous Predator drone that can track and kill targets, and it has shown off swarms of small drones that can communicate with each other and act collectively. Pushing automated lethality to its limits, the Air Force has gone as far as to propose pairing fighter pilots with robot wingmen that fly and fight alongside them. The Predator drone has been at the forefront of the unmanned-system revolution in the military; modern warfare expert Peter Singer has described it as “merely the first generation—the equivalent of the Ford Model T or the Wright Brothers’ Flyer.”2 In August, foreshadowing an evolution in drones that will take them beyond the Model T, an AI-powered fighter pilot beat a human pilot in a series of simulated F-16 dogfights.
  • The US is projected to spend $17.5 billion on drones between 2017 and 2021, including 3,447 new unmanned ground, sea, and aerial systems, despite already owning 20,000 autonomous vehicles. Each branch is adapting autonomy to fit its needs. In the air, at sea, and on land, the development of LAWS is transforming how the US military fights.
Credit: Cory Payne/Shutterstock via The Guardian

While the US is a leader in developing the building blocks of killer robots, it is not alone in its pursuit of LAWS. Global military spending on automated weapon systems and on AI, narrowly defined, is projected to reach $16 billion and $18 billion, respectively, by 2025. At the latest count, “at least 381 partly autonomous weapon and military robotics systems have been deployed or are under development in 12 states, including China, France, Israel, the UK, and the US. Of the 12 states that have acknowledged developing LAWS, five stand apart as leaders in the global AI arms race: the US, China, Russia, South Korea, and the European Union.”3 These countries are building automated weapon systems — ranging from missiles, drones, and cyber-weapons to rifles, tanks, and ships — to expand the firepower and fighting capability of their militaries. For example:

  • Russia is currently building autonomous tanks, missile systems, and a high-caliber rifle that uses neural networks to select its targets. China is a leading exporter of lethal drones that may be switched into a fully autonomous mode. In the South China Sea, where every move is carefully choreographed by the Chinese Communist Party, China recently unveiled a combat-ready, autonomous ship.
  • In the closest example to what comes to mind when people hear “killer robots,” South Korea has deployed an autonomous sentry robot, the SGR-A1, that can select and fire on targets to guard the Demilitarized Zone (DMZ) between North and South Korea. Samsung, which makes the SGR-A1, has emphasized that it should be used as part of a “human-in-the-loop” system, leaving the decision to shoot up to a human alone. However, the Office of Naval Research and leading roboticist Ronald Arkin have both confirmed that the SGR-A1 also functions in a “human-on-the-loop” mode in which it can select and engage targets without human help, though a nearby human can intervene if necessary.

More generally, autonomy is creeping into every branch of the military. This is because machine-learning algorithms can learn to perform a range of human-like behaviors, such as recognizing faces (and telling friend from foe), accurately firing a weapon, and thinking through tactical and strategic plans in real time. AI’s general-purpose nature and learning capacity can be applied to a wide range of tasks both in the field and back at base.

Credit: Getty via NBC News

What rules govern the use of killer robots in the US?

Despite the widespread media coverage and significant public concern4 about killer robots, few laws constrain or regulate their use. One country that does have a specific rule for LAWS is the US. Already a leader in the technical development of killer robots, the US wants to lay down the legal and ethical rules of the road regarding LAWS. Written in 2012, Department of Defense (DoD) Directive 3000.09 requires that “autonomous and semi-autonomous systems be designed to allow for appropriate levels of human judgment before using lethal force” and that these systems may not independently select human targets without human oversight. However, it is unclear what constitutes appropriate human oversight in these contexts or how this would play out in a real-time scenario.

Last year, the Army caused some controversy when it released details of its ATLAS turret, which uses AI to select and engage targets without human intervention, sparking criticism that it had greenlit an arms race to build LAWS. The DoD rewrote the description of the ATLAS program to emphasize that, as of now, ATLAS cannot pull the trigger without human help and that all weapons, including autonomous ones, must adhere to Directive 3000.09. However, as Paul Scharre, a former Army Ranger, expert on LAWS, and one of the drafters of the directive, has pointed out:

“The US Defense Department policy on autonomy in weapons doesn’t say that the DoD has to keep the human in the loop. It doesn’t say that. That’s a common misconception.”

Human-in-the-loop systems require humans to select targets and decide to engage, so the weapon is autonomous only in the sense that it can follow and fire on targets once a human approves. These systems retain human control over each decision made (e.g., selecting the target and deciding if and how it will be engaged). By contrast, in human-on-the-loop systems, a person can watch and intervene if need be but doesn’t need to approve every one of the killer robot’s actions (the sketch below illustrates the difference). The Army may explore a less hands-on approach with ATLAS in the future, and the program is going forward despite the lack of clarity surrounding whether truly autonomous weapons are being developed.
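To make the distinction concrete, here is a minimal, purely illustrative Python sketch; none of the names or functions come from any real weapon system, and the engagement logic in fielded systems is far more involved. The only difference between the two modes is whether an engagement waits for affirmative human approval or merely allows a human veto.

```python
# Purely illustrative sketch: contrasting human-in-the-loop and
# human-on-the-loop control. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class Target:
    track_id: str
    classified_hostile: bool  # output of some (imperfect) classifier


def human_approves(target: Target) -> bool:
    # Stand-in for an operator's explicit approval of this engagement.
    answer = input(f"Engage {target.track_id}? [y/N] ")
    return answer.strip().lower() == "y"


def human_vetoes(target: Target) -> bool:
    # Stand-in for a supervising operator who may intervene in time.
    # In a human-on-the-loop design, silence means the system proceeds.
    return False


def engage_human_in_the_loop(target: Target) -> bool:
    """Every engagement requires affirmative human approval."""
    return target.classified_hostile and human_approves(target)


def engage_human_on_the_loop(target: Target) -> bool:
    """The system engages on its own unless a human vetoes in time."""
    return target.classified_hostile and not human_vetoes(target)
```

In the first mode the human is a required step in the decision chain; in the second, the system acts by default and human judgment only enters if someone notices and objects in time, which is one reason experts worry that meaningful human control can erode at machine speeds.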

Credit: Breaking Defense

Bolstering Directive 3000.09 are five ethical principles that the DoD adopted in February to guide its development of AI and LAWS. The DoD wants its work with AI and killer robots to be:

  • Responsible: the DoD “will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities”;
  • Equitable: the Pentagon will ensure that biases are rooted out;
  • Traceable: the DoD will develop and deploy AI “with transparent and auditable methodologies, data sources, and design procedure and documentation”;
  • Reliable: the Department will limit LAWS to “explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing”; and
  • Governable: AI and unmanned systems must be able to achieve their goals without unintended consequences, and operators must be able to deactivate or disengage any deployed system.

These are, however, voluntary principles that the Pentagon adopted, and they do not carry the force of law. Thus, with little oversight into LAWS development, it is difficult to ensure that the DoD sticks to its ethical principles, and it is unclear what happens if it is caught ignoring them. In fact, the same bipartisan commission on AI that developed the DoD’s ethical principles also urged the Department not to let ethical debates “paralyze AI developments,” including advancing autonomy in a range of military operations.

What does the international regulatory framework governing LAWS look like?

Despite rapid advances in LAWS and growing pressure to constrain a possible AI arms race, states and international institutions are struggling to agree on how the threat should be addressed. In January, diplomats met at the UN to consider guardrails for killer robots for the sixth consecutive year, feeling more pressure than in previous years because the technology is “fast progressing from graphic design boards to defence engineering laboratories.” Legal experts and defense analysts paint a grim picture, saying that without a framework constraining their use, LAWS could soon “be deployed by state militaries to the battlefield, painting dystopian scenarios of swarms of drones moving through a town or city, scanning and selectively killing their targets within seconds.” Despite the growing danger, strong military powers like the US, Russia, and China are currently blocking action at the UN that would set limits on the use of LAWS. While no state has endorsed their use, these countries will not agree to preemptively disarm themselves of a potentially powerful technology like LAWS.

At the international level, no rules have been written to address LAWS or the use of other automated systems. However, all weapons of war are bound by international humanitarian law and the law in war (jus in bello). To be legal, the use of force by a state must satisfy three principles:

  • proportionality: the number of civilians killed cannot be excessive relative to the value of the military target destroyed;
  • distinction: the military must make every effort to distinguish between civilians and combatants; and
  • necessity: the target must be essential to the overall war effort.

Some robotics experts argue that machines can be programmed to follow the laws of war and that it may in fact be more ethical to use LAWS. Unlike humans, AI doesn’t make mistakes because of fatigue, lack of sleep, or a distraction. Others, however, worry that these systems are often biased — facial recognition has repeatedly been shown to be less accurate on nonwhite people — and will be unable to reliably distinguish between civilians and soldiers. Plus, AI struggles to take into consideration new information or the context of a situation, leaving killer robots to rigidly follow their programming. Without transparency surrounding the data and algorithms as well as some degree of explainability from algorithms, it will be difficult to ensure that killer robots follow the laws of war.

Drones are the most commonly used unmanned system around the world, so a look at how the US has used drones abroad can illuminate what future policies constraining LAWS might look like. The legal framework for the War on Terror is the Authorization for Use of Military Force (AUMF), first passed after 9/11 to give the President the power to pursue those responsible for grave acts of terror. Since this has been the guiding framework for drone usage in war, LAWS may be “regulated” under the AUMF first. Short but expansive, the AUMF empowers the President “to use all necessary and appropriate force against those nations, organizations, or persons he determines planned, authorized, committed, or aided the terrorist attacks that occurred on September 11, 2001.” Written in response to 9/11, the AUMF has been used to greenlight wars in Afghanistan and Iraq as well as a drone campaign stretching across 13 countries that has killed an estimated 15,000 to 23,000 people, including 8,000 to 9,700 civilians.

Credit: Smithsonian

Anwar Al Awlaki’s case is an instructive example of how government action can create policy in the absence of a prescriptive framework. Awlaki was an American citizen and Islamic preacher who fled the US after being pressured by federal authorities. He joined Al-Qaeda in the Arabian Peninsula in Yemen and continued preaching radical rhetoric, earning him a place on the drone kill list. Despite being an American citizen, Awlaki was killed without due process in a drone strike on a “high-value target.” The courts treated the killing as legal under the AUMF even though the government admitted it had no evidence of an imminent attack and presented none in court.

US drone use presages a troubling future in which the government reserves the right to kill anyone in a secretive, legalistic, and highly-automated process that largely operates outside the scope of the law. Like the drone program, a gap in rules or regulations specifically constraining LAWS may encourage the government to expand its power and reach while burying the evidence. The few federal courts that have looked at this issue have punted on crucial Constitutional questions presented by Obama’s drone program, particularly when this secretive apparatus was used to kill three American citizens.

What are the policy options at the international level for LAWS?

At the international level, killer robots are treated no differently than other weapons of war. There are three ways of constraining their use: banning them via a treaty, limiting their spread via a nonproliferation agreement, or enacting an arms control regime akin to the New START treaty that limited nuclear weapon stockpiles of the US and Russia. Many countries and nonprofit organizations, such as the Campaign to Stop Killer Robots and Human Rights Watch, have called for a ban on LAWS. But the US, China, and Russia are dragging their feet, blocking a blanket ban and insisting that the technology be more rigorously defined before regulation is discussed. Plus, the international system lacks a reliable way to enforce a ban even if the leading powers agree to one.

Credit: Campaign to Stop Killer Robots

What about a nonproliferation agreement, like the scientists suggested in their open letter? The model would be the nuclear nonproliferation regime, but a key ingredient missing here is traceability. Nuclear material leaves radioactive footprints that can be physically traced, whereas autonomous weapons are powered by software that leaves few clues behind to follow. Also, nuclear technology requires a high level of scientific knowledge and engineering expertise and is expensive to work with, so preventing its spread is much more feasible than trying to restrict software (or even computer chips). Plus, while the Nuclear Nonproliferation Treaty strictly regulates who is allowed to have nuclear weapons, it does not address how nuclear weapons are used. Any nonproliferation agreement will find it hard to control and track a technology as general-purpose and ephemeral as AI.

Finally, if a ban is too restrictive and a nonproliferation agreement unenforceable, establishing a multilateral arms control regime may be the most likely option adopted at the international level. For example, the Convention on Certain Conventional Weapons (CCW) is an arms control treaty adopted by the UN in 1980 to restrict the use of conventional weapons considered excessively injurious or indiscriminate, such as land mines, booby traps, and, later, blinding lasers. A group of governmental experts under the CCW has been studying whether LAWS meet the same standard, but no consensus has developed yet. An arms control agreement through the CCW could limit the number, kind, and use of killer robots. It might achieve something similar to the New START Treaty, which reduced the number of nuclear weapons in the world.

The Trump administration, however, is moving in the opposite direction. Last month, it modified its interpretation of the Missile Technology Control Regime, which had restricted the sale of armed drones abroad. Under the old rules, the US could only sell armed drones to Britain, France, and Australia. But, in response to pressure from defense contractors, which argued that China was selling armed drones to allies like Saudi Arabia, the Trump administration carved out an exception in the agreement for large, armed drones capable of delivering payloads of more than 500 kilograms (1,100 pounds), such as General Atomics’ MQ-9 Reaper drone. The Predator’s bigger brother, the Reaper is “a hunter-killer drone that can carry up to four Hellfire missiles as well as laser-guided bombs and joint direct attack munitions (JDAMs).” In January, it was a Reaper that killed Iranian General Qassem Soleimani in an airstrike. Like the Predator, this hunter-killer drone can fly and fight autonomously. Allies in India, Saudi Arabia, Jordan, and the United Arab Emirates, as well as countries in East Asia and Central Europe, have expressed interest in buying larger drones from the US capable of both reconnaissance and search-and-destroy missions using laser-guided munitions.

Credit: Reuters

Conclusion

Countries readily agree that LAWS are likely to be a dangerous and destabilizing technology for any military to adopt, but no strong state is willing to preemptively tie its hands and forgo developing a potentially transformative technology like AI to its fullest potential. The constricting logic of an arms race is slowly squeezing the chances of an international agreement to prevent the spread or use of LAWS and other automated systems. There is simply too much hard power (i.e., military power) up for grabs from weaving AI and robotics technologies into military operations for them to be constrained by the current international system. Despite the dangers and warnings emanating from experts and diplomats around the world, the global AI arms race is accelerating. Unfortunately, as with U-boats in World War I and nuclear weapons in World War II, the international system is unlikely to restrain the unrestricted use of LAWS until after they have been used to commit atrocities.




Citation

For attribution in academic contexts or books, please cite this work as

Bryan McMahon, “The Rise of “Killer Robots” and the Race to Restrain Them”, Skynet Today, 2020.

BibTeX citation:

@article{mcmahon2020killerrobots,
author = {McMahon, Bryan},
title = {The Rise of “Killer Robots” and the Race to Restrain Them},
journal = {Skynet Today},
year = {2020},
howpublished = {\url{https://www.skynettoday.com/overviews/killer-robots}},
}

  1. The original name for what’s commonly known as the AK-47. Cheap, easy to make, and reliable, the AK-47 became the weapon of choice of the Soviet Union, rebels, and terrorists everywhere with an estimated 100 million in circulation today. 

  2. Harvard Law School International Human Rights Clinic and Human Rights Watch. "Losing Humanity: The Case Against Killer Robots." Human Rights Watch. 2012. 

  3. Within the EU, the United Kingdom, France, Germany, and Italy all are developing LAWS; the other countries are Sweden, Japan, India, and Israel. 

  4. A recent survey conducted by the polling firm Ipsos found that public support for a ban on LAWS increased between 2016 and 2018, reaching 61% of respondents polled across 26 countries. 
