For many people, “killer robots” bring to mind dystopian images from science fiction, such as the human-like androids of Blade Runner or Arnold Schwarzenegger’s intimidating Terminator. While the robots frequently portrayed in the media are still entirely fictional, real-world “killer robots” — or more formally, lethal autonomous weapon systems (LAWS) — are very much in development. LAWS are systems that can select and fire on targets without human intervention, and prototypes exist across a range of weapons and military vehicles, including drones, submarines, tank turrets, missile-defense systems, and stationary sentry robots. While such systems are not yet widely deployed, many governments are actively developing the technology, so it is crucial to consider its implications, and the restrictions it requires, now.
In fact, in 2015 more than 3,000 experts signed “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”, which warned that “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.” The signatories specifically warned of the dangers of the uncontrolled spread of LAWS, likening them to the Kalashnikovs1 of the 21st century. Researchers and diplomats have warned that LAWS pose a destabilizing threat to the international order, identifying five core risks:
Despite these warnings, the peace nonprofit Pax has identified 30 global arms manufacturers that are actively developing LAWS.
Few laws exist to constrain the use of killer robots. At the international level, no law or policy deals specifically with LAWS or other automated systems. However, since this technology will be used by militaries to kill, the international rules and rights that restrict all weapons of war still apply: like any weapon, LAWS are governed by the laws of war and the rules on the use of force, which hold that no weapon may be used to kill civilians indiscriminately or without a clear military objective behind the strike. Experts stress, however, that LAWS may lower the threshold for conflict and cannot be guaranteed to reliably distinguish civilians from soldiers, so the laws of war and use-of-force rules do not adequately address the threat LAWS pose. A treaty or arms control agreement constraining the use of LAWS, they argue, is necessary.
Despite the growing danger of LAWS being unleashed on the battlefield, diplomats at the United Nations (UN) are still scrambling to ban LAWS or bring them under an arms control agreement, and major powers, including the US, China, and Russia, are blocking progress toward a framework of any kind. In this piece, we’ll survey the state of killer robots, or LAWS, as well as current efforts to build a regulatory framework around them.
In the Terminator movies, John Connor is on a mission to save the world from a terrible future: the US military’s AI system, Skynet, becomes self-aware, starts a global nuclear war, and then builds an army of robotic soldiers to finish killing off mankind. These robot soldiers make their evil intent clear, marching with rifles in hand and radiating malevolence from their red eyes. In real life, however, spotting a “killer robot” is not so simple.
AI’s general-purpose nature may be attractive to the military, but it also makes pinning down what exactly counts as a killer robot complicated. The common thread is lethal autonomy, even though the specific weapon differs in each case. But defining autonomy in LAWS is more complicated than it seems. Drones, for example, display levels of autonomy ranging from none (a human pilots the drone and selects and fires on targets), to partial (the drone flies autonomously but a person selects targets), to total independence from human operators. Plus, the term LAWS can describe a variety of possible weapon systems, from sentry robots and other self-defense systems to offensive ones like drones and robotic wingmen for fighter pilots. As Toby Walsh, a professor of artificial intelligence and expert on its military applications, explained:
“When people hear ‘killer robots,’ they think Terminator, they think science fiction, they think of something that’s far away […] Instead, it’s simpler technologies that are much nearer, and that are being prototyped as we speak.”
While AI and robotics have made rapid progress over the past decade, the technology is still too limited to replicate everything a human soldier can do. Machine learning algorithms are brittle, biased, and vulnerable both to hacks and to a more subtle kind of manipulation called adversarial attacks. Likewise, robots in the field struggle to move the way soldiers do, to communicate with them by following commands and reading body language, and to distinguish between enemy soldiers and civilians. What machines are good at is automating specific tasks a soldier might do. Keeping watch, collecting and analyzing information, monitoring the battlefield, driving a tank, or targeting and shooting at people — these are the areas where AI is making its impact on war.
Just in the US, for example, nearly every branch of the military is developing LAWS:
While the US is a leader in developing the building blocks of killer robots, it is not alone in its pursuit of LAWS. Global military spending on automated weapon systems and AI, narrowly defined, is projected to reach $16 billion and $18 billion, respectively, by 2025. At the latest count, “at least 381 partly autonomous weapon and military robotics systems have been deployed or are under development in 12 states, including China, France, Israel, the UK, and the US. Of the 12 states that have acknowledged developing LAWS, five stand apart as leaders in the global AI arms race: the US, China, Russia, South Korea, and the European Union.”3 These countries are building automated weapon systems — ranging from missiles, drones, and cyber-weapons to rifles, tanks, and ships — to expand the firepower and fighting capability of their militaries. For example:
More generally, autonomy is creeping into every branch of the military. This is because machine-learning algorithms can learn to reproduce a range of human-like behaviors, such as recognizing faces (and distinguishing friend from foe), accurately firing a weapon, and thinking through tactical and strategic plans in real time. AI’s general-purpose nature and learning capacity can be applied to a wide range of applications, both in the field and back at base.
Despite the widespread media coverage and significant public concern4 about killer robots, few laws constrain or regulate their use. One country that does have a specific rule for LAWS is the US. Already a leader in the technical development of killer robots, the US wants to lay down the legal and ethical rules of the road regarding LAWS. Written in 2012, Department of Defense (DoD) Directive 3000.09 requires that “autonomous and semi-autonomous systems be designed to allow for appropriate levels of human judgment before using lethal force” and that these systems may not independently select human targets without human oversight. However, it is unclear what constitutes appropriate human oversight in these contexts or how this would play out in a real-time scenario.
Last year, the Army caused some controversy when it released details of its ATLAS system, a gun that uses AI to select and engage targets without human intervention, sparking criticism that it had greenlit an arms race to build LAWS. The DoD rewrote the description of the ATLAS program to emphasize that, as of now, ATLAS cannot pull the trigger without human help and that all weapons, including autonomous ones, adhere to Directive 3000.09. However, as Paul Scharre, a former Army Ranger, expert on LAWS, and one of the drafters of the directive, has pointed out:
“The US Defense Department policy on autonomy in weapons doesn’t say that the DoD has to keep the human in the loop. It doesn’t say that. That’s a common misconception.”
Human-in-the-loop systems require humans to select targets and decide to engage, so the weapon is autonomous only in the sense that it can follow and fire on targets once a human approves. These systems retain human control over each decision made (e.g., selecting the target and deciding if and how it will be engaged). By contrast, in human-on-the-loop systems, a person can watch and intervene if need be but does not have to approve every one of the killer robot’s actions. The Army may explore a less hands-on approach with ATLAS in the future, and the program is going forward despite the lack of clarity over whether truly autonomous weapons are being developed.
Bolstering Directive 3000.09 are five ethical principles that the DoD adopted in February to guide its development of AI and LAWS. The DoD wants its work with AI and killer robots to be:
These are, however, voluntary principles adopted by the Pentagon, and they do not carry the force of law. With little oversight of LAWS development, it is difficult to ensure that the DoD sticks to its ethical principles, and it is unclear what happens if it is caught ignoring them. In fact, the same bipartisan commission on AI that developed the DoD’s ethical principles also urged the Department not to let ethical debates “paralyze AI developments,” including advancing autonomy in a range of military operations.
Despite rapid advances in LAWS and growing pressure to constrain a possible AI arms race, states and international institutions are struggling to agree on how the threat should be addressed. In January, diplomats met at the UN to consider guardrails for killer robots for the sixth consecutive year, under more pressure than in previous years because the technology is “fast progressing from graphic design boards to defence engineering laboratories.” Legal experts and defense analysts paint a grim picture, saying that without a framework constraining their use, LAWS could soon “be deployed by state militaries to the battlefield, painting dystopian scenarios of swarms of drones moving through a town or city, scanning and selectively killing their targets within seconds.” Despite the growing danger, strong military powers like the US, Russia, and China are currently blocking action at the UN that would set limits on the use of LAWS. While no state has endorsed their use, these countries will not agree to preemptively disarm themselves of a potentially powerful technology like LAWS.
At the international level, no rules have been written to address LAWS or the use of other automated systems. However, all weapons of war are bound by international humanitarian law and the law in war (jus in bello). To be legal, the use of force by a state must satisfy three principles:
Some robotics experts argue that machines can be programmed to follow the laws of war and that it may in fact be more ethical to use LAWS: unlike humans, AI doesn’t make mistakes because of fatigue, lack of sleep, or distraction. Others, however, worry that these systems are often biased — facial recognition has repeatedly been shown to be less accurate on nonwhite people — and will be unable to reliably distinguish between civilians and soldiers. Plus, AI struggles to take new information or the context of a situation into account, leaving killer robots to rigidly follow their programming. Without transparency around the data and algorithms, as well as some degree of explainability from the algorithms themselves, it will be difficult to ensure that killer robots follow the laws of war.
Drones are the most commonly used unmanned system around the world, so a look at how the US has used drones abroad can illuminate what future policies constraining LAWS might look like. The War on Terror’s legal framework is the Authorization for Use of Military Force (AUMF), first passed after 9/11 to give the President the power to pursue those responsible for grave acts of terror. Since this has been the guiding framework for drone use in those wars, LAWS may be “regulated” under the AUMF first. Short but expansive, the AUMF empowers the President “to use all necessary and appropriate force against those nations, organizations, or persons he determines planned, authorized, committed, or aided the terrorist attacks that occurred on September 11, 2001.” The AUMF has since been used to greenlight wars in Afghanistan and Iraq, as well as a drone campaign stretching across 13 countries that has killed an estimated 8,000 to 9,700 civilians and 15,000 to 23,000 people in total.
Anwar al-Awlaki’s case is an instructive example of how government action can create policy in the absence of a prescriptive framework. Al-Awlaki was an American citizen and Islamic preacher who fled the US after pressure from federal authorities. He joined Al-Qaeda in the Arabian Peninsula and continued preaching radical rhetoric, earning him a place on the drone kill list. Despite being an American citizen, al-Awlaki was killed without due process in a “signature strike,” a bomb dropped by a Predator drone on a “high-value target.” The court considered the killing legal under the AUMF despite the government admitting it had no evidence of an imminent attack and presenting none at trial.
US drone use presages a troubling future in which the government reserves the right to kill anyone through a secretive, legalistic, and highly automated process that largely operates outside the scope of the law. As with the drone program, the absence of rules or regulations specifically constraining LAWS may encourage the government to expand its power and reach while burying the evidence. The few federal courts that have looked at this issue have punted on the crucial Constitutional questions presented by Obama’s drone program, particularly when this secretive apparatus was used to kill three American citizens.
At the international level, killer robots are treated no differently than other weapons of war. There are three ways of constraining their use: banning them via a treaty, limiting their spread via a nonproliferation agreement, or enacting an arms control regime akin to the New START treaty that limited nuclear weapon stockpiles of the US and Russia. Many countries and nonprofit organizations, such as the Campaign to Stop Killer Robots and Human Rights Watch, have called for a ban on LAWS. But the US, China, and Russia are dragging their feet, blocking a blanket ban and insisting that the technology be more rigorously defined before regulation is discussed. Plus, the international system lacks a reliable way to enforce a ban even if the leading powers agree to one.
What about a nonproliferation agreement, like the scientists suggested in their open letter? The model would be the nuclear nonproliferation agreements, but a key ingredient is missing here: traceability. Nuclear material leaves radioactive footprints that can be physically traced, whereas autonomous weapons are powered by software that leaves few traces behind to follow. Also, nuclear technology requires a high level of scientific knowledge and engineering expertise and is expensive to work with, so preventing its spread is far more feasible than trying to restrict software (or even computer chips). Plus, while the Nuclear Nonproliferation Treaty strictly regulates who is allowed to have nuclear weapons, it does not address how nuclear weapons are used. Any nonproliferation agreement will find it hard to control and track a technology as general-purpose and ephemeral as AI.
Finally, if a ban is too restrictive and a nonproliferation agreement unenforceable, establishing a multilateral arms control regime may be the most likely option at the international level. For example, the Convention on Certain Conventional Weapons (CCW) is an arms control treaty adopted by the UN in the 1980s to restrict conventional weapons deemed excessively injurious or indiscriminate, such as land mines, booby traps, and blinding lasers. A committee under the convention has been studying whether LAWS meet the same standard, but no consensus has developed yet. An arms control agreement through the CCW could limit the number, kind, and use of killer robots. It might achieve something similar to the New START Treaty, which reduced the number of nuclear weapons in the world.
The Trump administration, however, is moving in the opposite direction. Last month, it modified the Missile Technology Control Regime arrangement that had restricted the sale of armed drones abroad. Under the old agreement, the US could only sell armed drones to the United Kingdom, France, and Australia. But, in response to pressure from private contractors who argued that China was selling armed drones to allies like Saudi Arabia, the Trump administration carved out an exception in the agreement for large, armed drones capable of delivering payloads of more than 500 kilograms (1,100 pounds), such as General Atomics’ MQ-9 Reaper. The Predator’s bigger brother, the Reaper is “a hunter-killer drone that can carry up to four Hellfire missiles as well as laser-guided bombs and joint direct attack munitions (JDAMs).” In January, it was a Reaper that killed Iranian General Qassem Soleimani in an airstrike. Like the Predator, this hunter-killer drone can fly and fight autonomously. Allies such as India, Saudi Arabia, Jordan, and the United Arab Emirates, as well as countries in East Asia and Central Europe, have expressed interest in buying larger US drones capable of both reconnaissance and search-and-destroy missions using laser-guided munitions.
Countries readily agree that LAWS are likely to be a dangerous and destabilizing technology for any military to adopt, but no strong state is willing to preemptively tie its hands by forgoing the full development of a potentially transformative technology like AI. The constricting logic of an arms race is slowly squeezing out the chances of an international agreement to prevent the spread or use of LAWS and other automated systems. There is simply too much hard power (i.e., military power) up for grabs from weaving AI and robotics technologies into military operations for them to be constrained by the current international system. Despite the dangers and warnings emanating from experts and diplomats around the world, the global AI arms race is accelerating. Unfortunately, as with U-boats in World War I and nuclear weapons in World War II, the international system is unlikely to restrain the use of LAWS until after they have been used to commit atrocities.
Citation
For attribution in academic contexts or books, please cite this work as
Bryan McMahon, “The Rise of ‘Killer Robots’ and the Race to Restrain Them”, Skynet Today, 2020.
BibTeX citation:
@article{mcmahon2020killerrobots,
author = {McMahon, Bryan},
title = {The Rise of “Killer Robots” and the Race to Restrain Them},
journal = {Skynet Today},
year = {2020},
howpublished = {\url{https://www.skynettoday.com/overviews/killer-robots}},
}
The original name for what’s commonly known as the AK-47. Cheap, easy to make, and reliable, the AK-47 became the weapon of choice of the Soviet Union, rebels, and terrorists everywhere with an estimated 100 million in circulation today. ↩
Harvard Law School International Human Rights Clinic and Human Rights Watch. “Losing Humanity: The Case Against Killer Robots.” Human Rights Watch. 2012. ↩
Within the EU, the United Kingdom, France, Germany, and Italy all are developing LAWS; the other countries are Sweden, Japan, India, and Israel. ↩
A survey conducted by the polling firm Ipsos found that public support for a ban on LAWS grew between 2016 and 2018, reaching 61% of respondents polled across 26 countries. ↩