This article was published on April 16, 2018

Drones will soon decide who to kill

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI). This is a big step forward. Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Once complete, these drones will represent the ultimate militarization of AI and trigger vast legal and ethical implications for wider society. There is a chance that warfare will move from fighting to extermination, losing any semblance of humanity in the process. At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

Existing lethal military drones like the MQ-9 Reaper are carefully controlled and piloted via satellite. If a pilot drops a bomb or fires a missile, a human sensor operator actively guides it onto the chosen target using a laser.

Ultimately, the crew has the final ethical, legal, and operational responsibility for killing designated human targets. As one Reaper operator states: “I am very much of the mindset that I would allow an insurgent, however important a target, to get away rather than take a risky shot that might kill civilians.”

An MQ-9 Reaper pilot. US Air Force

Even with these drone killings, human emotions, judgments, and ethics have always remained at the center of war. The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

This points to one possible military and ethical argument, made by Ronald Arkin, in support of autonomous killing drones: if the drones themselves drop the bombs, psychological harm to crew members might be avoided. The weakness in this argument is that you don’t have to be responsible for killing to be traumatized by it. Intelligence specialists and other military personnel regularly analyze graphic footage from drone strikes, and research shows that it is possible to suffer psychological harm from frequently viewing images of extreme violence.

An MQ-9 Reaper. US Air Force

When I interviewed over 100 Reaper crew members for an upcoming book, every person I spoke to who conducted lethal drone strikes believed that, ultimately, it should be a human who pulls the final trigger. Take out the human and you also take out the humanity of the decision to kill.

Grave consequences

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident. Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use. Companies like Google, its employees or its systems, could become liable to attack from an enemy state. For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

Google’s New York headquarters. Scott Roy Atwood, CC BY-SA

Ethically, the issues are darker still.

The whole point of self-learning algorithms – programs that independently learn from whatever data they can collect – is that they become better at whatever task they are given. If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed. In militarized machine learning, that means political, military, and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.

Recent experiences of autonomous AI in society should serve as a warning. Uber and Tesla’s fatal experiments with self-driving cars suggest it is pretty much guaranteed that there will be unintended autonomous drone deaths as computer bugs are ironed out.

If machines are left to decide who dies, especially on a grand scale, then what we are witnessing is extermination. Any government or military that unleashed such forces would violate whatever values it claimed to be defending. In comparison, a drone pilot wrestling with a “kill or no kill” decision becomes the last vestige of humanity in the often inhuman business of war.

This article was amended to clarify that Uber and Tesla have both undertaken fatal experiments with self-driving cars, rather than Uber experimenting with a Tesla car as originally stated.

Peter Lee, Director, Security and Risk & Reader in Politics and Ethics, University of Portsmouth

This article was originally published on The Conversation. Read the original article.
