AI is changing everything. The healthcare industry is in the middle of a revolution, social media is getting smarter, and the era of drone-wielding super villains is right around the corner.
Earlier this week, seven of the world’s most prominent organizations in AI research and policy published a report predicting the dangers posed by AI.
The document is called “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” You can read the full version here. It’s 100 pages long and utterly terrifying. The only thing that could make it scarier is if Samuel L. Jackson were holding you at gunpoint and screaming it at you.
It was put together in a collaboration between OpenAI, The Future of Humanity Institute, University of Oxford, University of Cambridge, Center for the Study of Existential Risk, Center for a New American Security, and Electronic Frontier Foundation.
For perspective, those groups and universities represent notable figures like Elon Musk and Nick Bostrom, and contain members from each of the big US tech companies. This study wasn’t conducted by a vague market research group, but by working AI experts.
One section in particular is troubling: “Physical Security.” It’s broken down into five distinct threats, each of which could be combined with any or all of the others to become even more frightening. Let’s grab some popcorn and dive in.
First up: “Terrorist repurposing of commercial AI systems.”
When a report on the threat AI poses to our physical security starts with the word “terrorist” we’re off to a bad start. Or maybe it’s a good start because the people who warn us about stuff like this are doing their jobs. When AI is used in this way, according to the study:
Commercial systems are used in harmful and unintended ways, such as using drones or autonomous vehicles to deliver explosives and cause crashes.
The fear here is that terrorists would gain control of autonomous vehicles and crash them into buildings, basically 9/11 with computers instead of hijackers. According to the researchers this falls under the danger of “expansion of existing threats.”
The second section is called: “Endowing low-skill individuals with previously high-skill attack capabilities.”
AI-enabled automation of high-skill capabilities — such as self-aiming, long-range sniper rifles — reduces the expertise required to execute certain kinds of attack.
This threat is an extension of the cyber threat that AI poses. Just a decade ago hackers were considered computer experts; now all it takes to be a hacker is downloading the right software on the darknet.
It’s likely AI will transform physical crime in much the same way. Car thieves, for example, could let a computer figure out how to override a vehicle’s ECM while AI-powered image recognition walks them through cutting wires or defeating alarm systems.
Next up, bigger is always better: “Increased scale of attacks.”
Human-machine teaming using autonomous systems increases the amount of damage that individuals or small groups can do: e.g. one person launching an attack with many weaponized autonomous drones.
With this kind of attack we’re seriously getting into Marvel Comics bad guy territory. What if Elon Musk is really building underground tunnels all over the world to house a robot army instead of hyperloops? It would explain why he knows how World War III will start.
And for those of you who’ve watched Netflix’s Black Mirror, there’s section four: “Swarming attacks.”
Distributed networks of autonomous robotic systems, cooperating at machine speed, provide ubiquitous surveillance to monitor large areas and groups and execute rapid, coordinated attacks.
In this case the researchers are talking about criminals using AI to attack multiple systems at once. While law enforcement trains for these kinds of coordinated attacks, the threat here is that a dozen different systems, ranging from traffic lights to bank security, could be compromised instantly.
In Hollywood movies this is accomplished by the “brain” of the operation hiring a series of specialized expert criminals. But in the future it could be accomplished by a few idiots with iPhones and algorithms.
If terrorists using drones to attack us or the rise of AI-powered super villains doesn’t get your blood pumping, perhaps the idea of getting mugged remotely will. The final section is called: “Attacks further removed in time and space.”
Physical attacks are further removed from the actor initiating the attack as a result of autonomous operation, including in environments where remote communication with the system is not possible.
Imagine getting into an argument with someone on social media who later tracks you down using AI and sends a drone to smash your car windshield while you’re travelling down the highway at 100 km/h. Or even worse, getting robbed at gunpoint by an autonomous machine that’s set to self-destruct if it’s caught.
Welcome to a future where even petty criminals can phone it in and work from home.