Are AI-Powered Killer Robots Inevitable?

Military scholars warn of a “battlefield singularity,” a point at which humans can no longer keep up with the pace of conflict.

In war, speed kills. The soldier who is a split second quicker on the draw may walk away from a firefight unscathed; the ship that sinks an enemy vessel first may spare itself a volley of missiles. In cases where humans can't keep up with the pace of modern conflict, machines step in. When a rocket-propelled grenade is streaking toward an armored ground vehicle, an automated system onboard the vehicle identifies the threat, tracks it, and fires a countermeasure to intercept it, all before the crew inside is even aware. Similarly, US Navy ships equipped with the Aegis combat system can switch on Auto-Special mode, which automatically swats down incoming warheads according to carefully programmed rules.

These kinds of defensive systems have been around for decades, and at least 30 countries now use them. In many ways, they're akin to the automatic braking systems in newer cars, intervening only under specific emergency conditions. But militaries, like automakers, have gradually been giving machines freer rein. In an exercise last year, the United States demonstrated how automation could be used throughout the so-called kill chain: A satellite spotted a mock enemy ship and directed a surveillance plane to fly closer to confirm the identification; the surveillance plane then passed its data to an airborne command-and-control plane, which selected a naval destroyer to carry out an attack. In this scenario, automation bought more time for officers at the end of the kill chain to make an informed decision—whether or not to fire on the enemy ship.

Militaries have a compelling reason to keep humans involved in lethal decisions. For one thing, human operators are a bulwark against malfunctions and flawed interpretations of data; they'll make sure, before pulling the trigger, that the automated system hasn't misidentified a friendly ship or neutral vessel. Beyond that, though, even the most advanced forms of artificial intelligence cannot understand context, apply judgment, or respond to novel situations as well as a person. Humans are better suited to getting inside the mind of an enemy commander, seeing through a feint, or knowing when to maintain the element of surprise and when to attack.

But machines are faster, and firing first can carry a huge advantage. Given this competitive pressure, it isn't a stretch to imagine a day when the only way to stay alive is to embrace a fully automated kill chain. If just one major power were to do this, others might feel compelled to follow suit, even against their better judgment. In 2016, then deputy secretary of defense Robert Work framed the conundrum in layperson's terms: “If our competitors go to Terminators,” he asked, “and it turns out the Terminators are able to make decisions faster, even if they're bad, how would we respond?”

Terminators aren't rolling off the assembly line just yet, but each new generation of weapons seems to get us closer. And while no nation has declared its intention to build fully autonomous weapons, few have forsworn them either. The risks from warfare at machine speed are far greater than just a single errant missile. Military scholars in China have hypothesized about a “battlefield singularity,” a point at which combat moves faster than human cognition. In this state of “hyperwar,” as some American strategists have dubbed it, unintended escalations could quickly spiral out of control. The 2010 “flash crash” in the stock market offers a useful parallel: Automated trading algorithms contributed to a temporary loss of nearly a trillion dollars in a single afternoon. To prevent another such calamity, financial regulators updated the circuit breakers that halt trading when prices plummet too quickly. But how do you pull the plug on a flash war?

Since the late 19th century, major military powers—whether Great Britain and Germany or the United States and the USSR—have worked together to establish regulations on all manner of modern killing machines, from exploding bullets to poison gas to nuclear weapons. Sometimes, as with anti-satellite weapons and neutron bombs, formal agreements weren't necessary; the parties simply engaged in tacit restraint. The goal, in every case, has been to mitigate the harms of war.

For now, no such consensus exists with fully autonomous weapons. Nearly 30 countries support a complete ban, but none of them is a major military power or robotics developer. At the United Nations, where autonomous weapons are a subject of annual debate, China, Russia, and the United States have all stymied efforts to enact a ban. (The US and Russia have objected outright, while China in 2018 proposed a ban that would be effectively meaningless.) One of the challenging dynamics at the UN is the tug-of-war between NGOs such as the Campaign to Stop Killer Robots, whose goal is disarmament, and militaries, which won't agree to disarm unless they can verify that their adversaries will too.

Autonomous weapons present some unique challenges to regulation. They can't be observed and quantified in quite the same way as, say, a 1.5-megaton nuclear warhead. Just what constitutes autonomy, and how much of it should be allowed? How do you distinguish an adversary's remotely piloted drone from one equipped with Terminator software? Unless security analysts can find satisfactory answers to these questions and China, Russia, and the US can decide on mutually agreeable limits, the march of automation will continue. And whichever way the major powers lead, the rest of the world will inevitably follow.




PAUL SCHARRE (@paul_scharre) is a senior fellow at the Center for a New American Security and the author of Army of None: Autonomous Weapons and the Future of War.

This article appears in the June issue.

