
The panic over lethal autonomous weapons systems is unfounded, and could be dangerous

Killer robots sound pretty scary. But we might be overlooking a few key aspects of their benefits, or at the very least, we’re looking at them the wrong way.

To get more specific, I’m talking about lethal autonomous weapons systems (or LAWs): autonomous military robots designed to find and engage specific targets based on their programming. In practice, a LAW could remotely and independently search for and kill a person it identifies as a threat. Note that these are distinct from remotely operated weapons (ROWs), which require a human operator in control at all times.

If LAWs sound threatening to you, you’re not alone. In one survey, 67 percent of respondents believed that LAWs should be banned internationally, 85 percent believed they shouldn’t be used for offensive purposes, and 60 percent said they would rather be attacked by a ROW than by a LAW.

Prominent experts seem to agree. The Future of Life Institute, whose members and advisors have included Elon Musk, Stephen Hawking, and Nick Bostrom, has taken a strong and unwavering stance against the development or use of LAWs.

And to be fair, there are some valid concerns about the development of LAWs. These weapons could be hacked and manipulated, or could fall into the wrong hands. They could be incorrectly or irresponsibly programmed, leading to the deaths of innocent people. Worst of all, they could be used by dictators and warlords as unstoppable tools of oppression.

But these concerns aren’t enough to outweigh the potential benefits of LAWs, and buying into them wholesale could limit what we’re able to achieve as a species.

Robots don’t kill without human intervention

One of the most popular arguments against LAWs is the fear of an irrational, independent killing machine, one that would slaughter human beings without mercy. This is an easy scenario to conceptualize thanks to films like The Terminator, but in practice, AI is neither inherently malevolent nor uncontrollable.

The critical flaw in this idea is the assumption that because a LAW would operate in the field with no human oversight, no human is able to influence its actions. In reality, responsibly developed LAWs would be designed and rigorously tested by teams of programmers, cognitive scientists, engineers, and military personnel.

Think of it this way: after a relatively short period of development, we’ve been able to create self-driving car technology that is, by many measures, already safer than a comparable human driver. Self-driving cars are never distracted, they’re never tired, and they never intentionally violate the laws meant to protect us. That’s because they’ve been designed with foresight by teams of experts in many fields, and they’re not subject to the weaknesses of the human mind.

Apply that logic to LAWs, and we would have autonomous machines capable of striking with far greater precision than even a highly trained human soldier. Adrenaline-fueled reaction shots would all but disappear, as would bad judgment calls due to fatigue, and collateral damage could potentially be minimized.

The “innocent people” factor

Some opponents of LAWs feel that they pose an imminent and indefensible threat to the lives of innocent people around the world. But if used effectively, LAWs would actually reduce the total fatality count, on both sides.

As previously mentioned, collateral damage could practically disappear thanks to greater precision in military operations. We’d also be able to greatly decrease the number of soldiers on the ground, which means fewer soldiers dying in combat. The presence of LAWs may even serve as a deterrent for warlords and dictators, making them think twice before committing war crimes (though this isn’t the strongest argument here).

Autonomous killing machines already exist

Let’s also acknowledge the fact that, like it or not, autonomous weapons already exist. Take the close-in weapon system (CIWS), which can autonomously fire thousands of rounds of ammunition per minute to defend against anti-ship missiles. When operating autonomously, these systems independently handle the entire engagement: search, detection, threat assessment, targeting, and target destruction. Yet people seem far less afraid of them than they are of hypothetical LAWs.

Why is this the case? One argument is that CIWSs are meant to target missiles, not human beings. But if we trust these systems to track and destroy only what they were intended to track and destroy, why can’t we apply that same logic to LAWs? After all, a CIWS could be hacked just as a LAW could, and it could just as easily fall into the “wrong hands.” If we’re going to crack down on AI-driven military tools, we have to consider all of them.

The AI arms race

One of the most important arguments for halting the development of LAWs is a sensible one: it warns of a possible AI arms race, not unlike the nuclear arms races of decades past. The idea is that once one country unlocks the capability to develop LAWs, other countries will be motivated to develop superior ones, sacrificing quality standards and neglecting testing in order to field them faster. Cut enough corners, and you’ll end up with a world full of nations and individuals who have access to shoddily controlled yet unbelievably powerful autonomous weapons. And yes, that would be a nightmare.

The problem with this argument is that the AI arms race has already begun, and an international ban on the development of LAWs wouldn’t change that; bad actors will develop them how and when they want, no matter how much you try to control them. If and when a country develops LAWs, it should do so responsibly; hastening the process by cutting corners, or avoiding the development of LAWs altogether, only puts that country at a disadvantage.

As a society, we need to take the lethality and power of LAWs seriously, but that doesn’t mean we should try to stifle their development entirely or jump to worst-case scenario arguments fueled by decades of dystopian fiction claiming AI is going to destroy us all. We need to look at LAWs rationally, considering both their benefits and drawbacks, and find a way to develop them responsibly.
