‘If Human, Kill’: Video Warns Of Need For Legal Controls On Killer Robots

A new video released by the nonprofit Future of Life Institute (FLI) highlights the risks posed by autonomous weapons, or ‘killer robots’ – and the steps we can take to prevent them from being used. It even has Elon Musk scared.

Its original Slaughterbots video, released in 2017, was a short Black Mirror-style narrative showing how small quadcopters equipped with artificial intelligence and explosive warheads could become weapons of mass destruction. Initially developed for the military, the Slaughterbots end up being used by terrorists and criminals. As Professor Stuart Russell points out at the end of the video, all the technologies depicted already existed, but had not been put together.

Now the technologies have been put together, and lethal autonomous drones able to locate and attack targets without human supervision may already have been used in Libya.

The new video, Slaughterbots - if human: kill(), brings slaughterbots bang up to date with stories from the headlines, jump-cutting between fictional incidents based on new technology. There’s an autonomous weapon in a parked car shooting voters at a polling station, similar to the weapon reportedly used to assassinate an Iranian nuclear scientist last year. Then comes a bank heist carried out by quadruped robots armed with assault rifles, like the recent (non-autonomous) robot dog fitted with a sniper rifle; aircraft hit on the ground by a drone, echoing real-life events in Saudi Arabia; and a nightclub attacked by explosive-laden quadcopters like the UAE-developed versions shown at a recent arms fair.

In the video, fictional developers and military leaders argue that this is purely military technology, offering the prospect of regime change without body bags, with no risk of it falling into the wrong hands. Needless to say, the question of which states can be trusted with autonomous weapons, and how they can be prevented from reaching militant groups, goes unanswered – leading to the massacres described above.

FLI argues that the crucial step in preventing this kind of nightmare is a policy prescription put forward by the International Committee of the Red Cross (ICRC): an international, legally binding prohibition on autonomous weapons that use artificial intelligence to identify, select, and kill people without human intervention.

In particular, they single out the U.N.’s Convention on Certain Conventional Weapons – Sixth Review Conference, taking place later this month, as the key moment for states to act. New Zealand has just announced it will push for a ban; the U.S., Russia and China have been less willing to take a strong position.

Part of the argument is that nations worry that if they do not develop such weapons, others will. Professor Max Tegmark, co-founder of FLI and an AI researcher at MIT, says this is not a valid argument. He compares slaughterbots to chemical and biological weapons – indiscriminate weapons that are also cheap and deadly, and that have been (largely successfully) outlawed.

“Bioweapons are also really easy to make, but a powerful combination of stigma and controls have successfully prevented their widespread use,” Tegmark told me. “It's not in the national security interest of the U.S., the U.K. or China for W.M.D.'s to be so cheap that all our adversaries can afford them.”

Many non-state groups already make extensive use of armed drones, including Mexican drug cartels, ISIS, and Houthi rebels in Yemen. Tegmark believes that international laws could also prevent such actors from getting hold of the more advanced technology to make swarming, autonomous killing machines.

“A slaughterbot ban would incentivize legitimate drone manufacturers etc. to vet their large customers, just as many companies do today with export-controlled technology,” says Tegmark.

Arthur Holland Michel, an associate researcher for the United Nations Institute for Disarmament Research, told me that he welcomes anything which brings mainstream attention to the issues around autonomous weapons.

Michel is concerned, though, that both this video and the original Slaughterbots portrayed autonomous weapons as accurate and precise, when in reality they are still very crude and unreliable.

People might actually welcome AI-guided drones with the sort of precision shown in the video – machines that easily distinguish men and women from children, and armed from unarmed individuals – seemingly superior to current human-directed drone strikes when it comes to avoiding civilian casualties. But Michel says that whether real robots will ever be as good as the fictional machines is still very much up for debate. He suggests autonomous weapons may well be too unreliable ever to be deployed even under existing laws of war.

Legal controls on killer robots can only help, although much of the technology is already in the public domain or in the hands of tech giants.

So perhaps the last word should go to Elon Musk, whose one-word tweet in response to the new video was: “Yikes.”
