Autonomous Weapons Are Here, but the World Isn’t Ready for Them

A UN report says a drone, operating without human control, attacked people in Libya. International efforts to restrict such weapons have so far failed. 

This may be remembered as the year when the world learned that lethal autonomous weapons had moved from a futuristic worry to a battlefield reality. It’s also the year when policymakers failed to agree on what to do about it.

On Friday, 120 countries participating in the United Nations’ Convention on Certain Conventional Weapons could not agree on whether to limit the development or use of lethal autonomous weapons. Instead, they pledged to continue and “intensify” discussions.

“It's very disappointing, and a real missed opportunity,” says Neil Davison, senior scientific and policy adviser at the International Committee of the Red Cross, a humanitarian organization based in Geneva.

The failure to reach agreement came roughly nine months after the UN reported that a lethal autonomous weapon had been used for the first time in armed conflict, in the Libyan civil war.

In recent years, more weapon systems have incorporated elements of autonomy. Some missiles can, for example, fly within a designated area without specific instructions, but they still generally rely on a person to launch an attack. And most governments say that, for now at least, they plan to keep a human “in the loop” when using such technology.

But advances in artificial intelligence algorithms, sensors, and electronics have made it easier to build more sophisticated autonomous systems, raising the prospect of machines that can decide on their own when to use lethal force.

A growing list of countries, including Brazil, South Africa, New Zealand, and Switzerland, argue that lethal autonomous weapons should be restricted by treaty, as chemical and biological weapons and land mines have been. Germany and France support restrictions on certain kinds of autonomous weapons, including potentially those that target humans. China supports an extremely narrow set of restrictions.

Other nations, including the US, Russia, India, the UK, and Australia, object to a ban on lethal autonomous weapons, arguing that they need to develop the technology to avoid being placed at a strategic disadvantage.

Killer robots have long captured the public imagination, inspiring both beloved sci-fi characters and dystopian visions of the future. A recent renaissance in AI, and the creation of new types of computer programs capable of out-thinking humans in certain realms, has prompted some of tech’s biggest names to warn about the existential threat posed by smarter machines.

The issue became more pressing this year with the UN report, which said a Turkish-made drone known as the Kargu-2 was used in Libya’s civil war in 2020. Forces aligned with the Government of National Accord reportedly launched drones against troops supporting Libyan National Army leader General Khalifa Haftar; the drones targeted and attacked people on their own, without a human directing the strikes.

“Logistics convoys and retreating Haftar-affiliated forces were … hunted down and remotely engaged by the unmanned combat aerial vehicles,” the report states. The systems “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

The news reflects the speed at which autonomy technology is improving. “The technology is developing much faster than the military-political discussion,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, an organization dedicated to addressing existential risks facing humanity. “And we're heading, by default, to the worst possible outcome.”

Tegmark is among a growing number of technologists concerned about the proliferation of AI weapons. The Future of Life Institute has produced two short films to raise awareness of the risks posed by so-called “slaughterbots.” The most recent of these, released in November, focuses on the potential for autonomous drones to carry out targeted assassinations.

“There's a rising tide against the proliferation of slaughterbots,” Tegmark says. “We are not saying ban all military AI but just ‘if human, then kill.’ So, ban weapons that target humans.”

One challenge with prohibiting, or policing, the use of autonomous weapons is the difficulty of knowing when they’ve been used. The company behind the Kargu-2 drone, STM, has not confirmed that the drone can target and fire on people without human control. The company’s website now refers to a human controller making decisions about the use of lethal force. “Precision strike mission is fully performed by the operator, in line with the Man-in-the-Loop principle,” it reads. But a cached version of the site from June contains no such caveat. STM did not respond to a request for comment.

“We are entering a gray area where we’re not going to really know how autonomous a drone was when it was used in an attack,” says Paul Scharre, vice president and director of studies at the Center for a New American Security and the author of Army of None: Autonomous Weapons and the Future of War. “That raises some really difficult questions about accountability.”

Another example of this ambiguity appeared in September with reports of Israel using an AI-assisted weapon to assassinate a prominent Iranian nuclear scientist. According to an investigation by The New York Times, a remotely operated machine gun used a form of facial recognition and autonomy, but it’s unclear whether the weapon was capable of operating without human approval.

The uncertainty is “exacerbated by the fact that many companies use the word autonomy when they’re hyping up the capabilities of their technology,” Scharre says. Other recent drone attacks suggest that the underlying technologies are advancing quickly.

In the US, the Defense Advanced Research Projects Agency has been conducting experiments involving large numbers of drones and ground vehicles that collaborate in ways that are challenging for human operators to monitor and control. The US Air Force is also investigating ways that AI could assist or replace fighter pilots, staging a series of dogfights that pit human pilots against AI-controlled opponents.

Even if there were a treaty restricting autonomous weapons, Scharre says “there is asymmetry between democracies and authoritarian governments in terms of compliance.” Adversaries such as Russia and China might agree to limit the development of autonomous weapons but continue working on them without the same accountability.

Some argue that this means AI weapons need to be developed, if only as defensive measures against the speed and complexity with which autonomous systems can operate.

A Pentagon official told a conference at the US Military Academy in April that it may be necessary to consider removing humans from the chain of command in situations where they cannot respond rapidly enough.

The potential for adversaries to gain an edge is clearly a major concern for military planners. In 2034: A Novel of the Next World War, which was excerpted in WIRED, the writer Elliot Ackerman and retired US Admiral James Stavridis imagine “a massive cyberattack against the United States—that our opponents will refine cyber stealth and artificial intelligence in a kind of a witch’s brew and then use it against us.”

Despite previous controversies over military use of AI, US tech companies continue to help the Pentagon hone its AI skills. The National Security Commission on Artificial Intelligence, a group charged with reviewing the strategic potential of AI whose members included representatives from Google, Microsoft, Amazon, and Oracle, recommended investing heavily in the technology.

Davison, who has been involved with the UN discussions, says technology is outpacing the policy debate. “Governments really need to take concrete steps to adopt new rules,” he adds.

He still holds out hope that countries will agree on some restrictions, even if it happens outside of the UN. He says countries’ actions suggest that they disapprove of autonomous weapons. “What's quite interesting is that the allegations of the use of autonomous weapons to target people directly tend to be refuted by those involved, whether militaries or governments or manufacturers,” he says.

