Tristan Greene, Editor, Neural by TNW
Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him
It. Never. Fails. Every time an AI article finds its way to social media, there are hundreds of people invoking the terrifying specter of “SKYNET.”
SKYNET is a fictional artificial general intelligence that’s responsible for the creation of the killer robots from the Terminator film franchise. It was a scary vision of AI’s future until deep learning came along and big tech decided to take off its metaphorical belt and really give us something to cry about.
At least the people fighting the robots in the Terminator film franchise get to face a villain they can see and shoot at. In real life, you can’t punch an algorithm.
And that makes it difficult to explain why, based on what’s happening now, the real future might be even scarier than the one from those killer robot movies.
Luckily, we have experts such as Kai-Fu Lee and Chen Qiufan, whose new book, AI 2041: Ten Visions for Our Future, takes a stab at predicting what the machines will do over the next two decades. And, based on this interview, there’s some scary shit headed our way.
According to Lee and Qiufan, the biggest threats humans face when it comes to AI involve its influence, lack of accountability or explainability, its inherent and explicit bias, its use as a bludgeon against privacy, and, yes, killer robots – but not the kind you’re thinking of.
If we’re going to prioritize a list of existential threats to the human race, we should probably start with the worst of them all: social media.
Facebook’s very existence is a danger to humanity. It represents a business entity with more power than the governing body of the nation in which it’s incorporated.
The US government has taken no meaningful steps to regulate Facebook’s use of AI. And, for that reason, billions of humans across the planet are exposed to demonstrably harmful recommendation algorithms every day.
Facebook’s AI has more influence over humankind than any other force in history. The social network has more monthly active users than Christianity has adherents.
It would be shortsighted to think decades of exposure to social networks, despite thousands of studies warning us about the real harms, won’t have a major impact on our species.
Whether in 10, 20, or 50 years, the evidence seems to indicate we’ll live to regret turning our attention spans over to a mathematical entity that’s dumber than a snail.
The next threat on our tour-de-AI-horrors is the fascinating world of anti-privacy technology and the nightmare dystopia we’re headed for as a species.
Amazon’s Ring is the perfect reminder that, for whatever reason, humankind is deeply invested in shooting itself in the foot at every possible opportunity.
If there’s one thing almost every free nation on the planet agrees on, it’s that human beings deserve a modicum of privacy.
Ring doorbell cameras destroy that privacy and effectively give both the government and a trillion-dollar corporation a neighbor’s eye-view of everything that’s happening in every neighborhood around the country.
The only thing stopping Amazon or the US government from exploiting the data in the buckets where all that Ring video footage is stored is their word.
If it ever becomes lucrative to use or sell our data, or if a political shift gives the US government powers to invade our privacy that it didn’t previously have, our data is no longer safe.
But it’s not just Amazon. Our cars will soon be equipped with cloud-connected cameras purported to watch drivers for safety reasons. Active microphones in all of our smart devices are already listening.
And we’re on the very cusp of mainstreaming brain-computer interfaces. The path to wearables that send data directly from your brain to big tech’s servers is paved with good intentions and horrible AI.
The next generation of surveillance tech, wearables, and AI companions might eradicate the idea of personal privacy altogether.
The difference between being the first result of a Google search or ending up at the bottom of the page can cost businesses millions of dollars. Search engines and social media feed aggregators can kill a business or sink a news story.
And nobody voted to give Google or any other company’s search algorithms that kind of power; it just happened.
Now, Google’s bias is our bias. Amazon’s bias determines which products we buy. Microsoft’s and Apple’s biases determine what news we read.
Our doctors, politicians, judges, and teachers use Google, Apple, and Microsoft search engines to conduct personal and professional business. And the inherent biases of each product dictate what they do and do not see.
Social media feeds often determine not just which news articles we read, but which news publishers we’re exposed to. Almost every facet of modern life is somehow promulgated via algorithmic bias.
In another 20 years, information could become so stratified that “alternative facts” no longer refers to claims that diverge from reality, but to those that don’t reflect the collective truth our algorithms have decided on for us.
Blaming the algorithms
AI doesn’t have to actually do anything to harm humans. All it has to do is exist and continue to be confusing to the mainstream. As long as developers can get away with passing off black box AI as a way to automate human decision-making, bigotry and discrimination will have a home in which to thrive.
There are certain situations where we don’t need AI to explain itself. But when an AI is tasked with making a subjective decision, especially one that affects humans, it’s important we be able to know why it makes the choices it does.
It’s a big problem when, for example, YouTube’s algorithm surfaces adult content to children’s accounts because the developers responsible for creating and maintaining those algorithms have no clue why it happens.
But what if there isn’t a better way to use black box AI? We’ve painted ourselves into a corner – almost every public-facing big tech enterprise is powered by black box AI, and almost all of it is harmful. But getting rid of it may prove even harder than extricating humanity from its dependence on fossil fuels – and for the same reasons.
In the next 20 years, we can expect the lack of explainability intrinsic to black box AI to lie at the center of any number of potential catastrophes involving artificial intelligence and loss of human life.
The final and perhaps least dangerous (but most obvious) threat to our species as a whole is that of killer drones. Note, that’s not the same thing as killer robots.
There’s a reason why even the US military, with its vast budget, doesn’t have killer robots. And it’s because they’re pointless when you can just automate a tank or mount a rifle on a drone.
The real killer robot threat is that of terrorists gaining access to simple algorithms, simple drones, simple guns, and advanced drone-swarm control technology.
Perhaps the best perspective comes from Lee who, in a recent interview with Andy Serwer, said:
It changes the future of warfare because, between country and country, this can create havoc and damage, but perhaps, anonymously and people don’t know who did the attack.
So it’s also quite different from nuclear arms race, where [the] nuclear arms race at least has deterrence built-in. That you don’t attack someone for the fear of retaliation and annihilation.
But autonomous weapons might be doable as a surprise attack. And people might not even know who did it. So I think that is, from my perspective, the ultimate greatest danger that I can be a part of. And we need to be cautious and figure out how to ban or regulate it.