Every few decades, a technological development leads us to believe that artificial general intelligence (aka strong AI), the brand of AI that can think and decide like humans, is just around the corner. The excitement that follows is accompanied by fears of a dystopian near future and an arms race between companies and states to be the first to create general AI.
However, every time we thought we were closing in on strong AI, we have been disappointed. Badly. Every time, we spent a lot of time, resources, money and the energy of our most brilliant scientists on accomplishing something that seems to be a pipe dream. And every time, what ensued was a period of disappointment and disinterest in the field, which lasted decades.
We are currently in the full heat of one such cycle, thanks to machine learning and deep learning, the technologies that have been at the heart of AI developments in recent years. Spurred (and blinded) by the capabilities of deep learning, and how it has so far defied the norms set by traditional software, many organizations and visionaries (with deep pockets) are thinking once again that strong AI is on the horizon and want to catch it before others do (sigh of exasperation).
A recent example of how this hype manifests itself is OpenAI's declaration that it will create machines with real intelligence. In case you didn’t know, OpenAI is a nonprofit launched in late 2015 with more than $1 billion in funding from the likes of Elon Musk and Y Combinator’s Sam Altman. The organization has managed to attract rare AI talent, paying six- and seven-figure salaries for projects like the one just mentioned.
But while all this talent focuses on finding a way to create strong AI that can compete with the human brain, we’re missing out on plenty of opportunities and failing to address the threats that current weak AI technology presents.
Narrow AI is where the opportunities are
In contrast to strong AI, which can learn to do any task a human does, weak AI (or narrow AI) is limited to one or a few specific tasks. This is the kind of AI that we currently have. In fact, deep learning, which is named after (and often compared to) the human brain, is very limited in its capabilities and nowhere near performing the kinds of tasks that the mind of a human child can perform.
And that’s not a bad thing. In fact, narrow AI can focus on specific tasks and do them much better than humans can. For instance, feed a deep learning algorithm enough pictures of skin cancer, and it will become better than experienced doctors at spotting the disease.
This doesn’t mean that deep learning will replace doctors. You need intuition, abstract thinking, and a lot more skills to be able to decide what’s best for a patient. But deep learning algorithms will surely help doctors perform their jobs better and faster, and tend to more patients in a shorter amount of time. They will also cut down the time it takes to educate and train professionals in the healthcare industry.
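At its core, a diagnostic system like this is just a supervised classifier: it learns a decision boundary from labeled examples and applies it to new cases. The sketch below shows the idea on fully synthetic data, with a single made-up feature standing in for an image; nothing here resembles a real clinical model.

```python
import random

random.seed(0)

# Toy stand-in for "images": each sample is one mean-brightness feature.
# Malignant lesions are simulated as darker on average (illustrative only).
benign    = [random.gauss(0.7, 0.1) for _ in range(200)]
malignant = [random.gauss(0.4, 0.1) for _ in range(200)]

# "Training": learn a decision threshold halfway between the class means.
threshold = (sum(benign) / len(benign) + sum(malignant) / len(malignant)) / 2

def predict(brightness):
    """Classify a sample as malignant if it falls below the threshold."""
    return "malignant" if brightness < threshold else "benign"

# Evaluate on fresh synthetic samples the model has never seen.
test = [(random.gauss(0.7, 0.1), "benign") for _ in range(100)] + \
       [(random.gauss(0.4, 0.1), "malignant") for _ in range(100)]
accuracy = sum(predict(x) == label for x, label in test) / len(test)
print(f"accuracy: {accuracy:.2f}")  # well above chance on this toy data
```

Real diagnostic models replace the one-feature threshold with a deep convolutional network over millions of pixels, but the workflow is the same: labeled examples in, decision rule out.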
Another example is education, where current AI algorithms can help teachers find and address pain points in students’ learning or spot problems in their own curriculum in new ways. These AI algorithms are still far from acquiring the complicated social skills that teachers need to help students find their way toward knowledge, but they can surely help teachers become better at their craft.
Put another way, weak AI can automate the boring, repetitive parts of most jobs and let the humans take care of the parts that require human care and attention. A very interesting example is customer service, where, thanks to natural language processing and generation, AI-powered chatbots can take care of the simple and trivial queries that most customers have, letting human employees tend to the more complicated cases. This results in less customer frustration, shorter wait times, and better use of employees’ time.
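The triage pattern described above is simple to sketch: answer the trivial queries automatically and escalate everything else to a person. Production chatbots use trained language models for intent detection; the keyword version below (with made-up FAQ entries) is illustrative only.

```python
# Canned answers for trivial queries; everything else goes to a human.
# The entries and responses here are hypothetical examples.
FAQ = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "reset password": "Use the 'Forgot password' link on the login page.",
    "shipping cost": "Standard shipping is free for orders over $50.",
}

def handle_query(message: str) -> str:
    text = message.lower()
    for keywords, answer in FAQ.items():
        # Route to the canned answer only if every keyword appears.
        if all(word in text for word in keywords.split()):
            return f"BOT: {answer}"
    # Anything the bot can't match is escalated to a human agent.
    return "HUMAN: forwarding you to a support agent."

print(handle_query("What are your opening hours?"))
print(handle_query("My invoice from March is wrong and I'm furious."))
```

The design choice is the point: the bot handles the high-volume, low-stakes traffic, and the fallback branch is where human care and attention come in.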
In the current discourse, artificial intelligence has become known as a mysterious technology that will eventually outperform and eliminate humans in all domains, mostly because of the fears surrounding the unknown capabilities of strong AI. But as the above examples (and many more) show, that does not apply to narrow AI.
A better definition of weak AI is “augmented intelligence,” which makes it clear that the technology is here to complement the human mind, not replace it. If our experts could only focus on creating tools that would enable us to make the best use of narrow AI and make it accessible to more organizations and people, we would certainly be in a better position.
Narrow AI is where the threats lie
Many visionaries are warning against a distant future where AI becomes too smart and too strong and drives humans into slavery or extinction. But even the most optimistic (or pessimistic, depending on your perspective) estimates put general AI at least decades away.
What’s imminent, however, is the clear and present danger of narrow AI. There are already many ways weak AI can be used for evil purposes, and we don’t need to wait for decades before it happens.
Thanks to advances in deep learning, creating doctored photos and videos has become easy and accessible for almost anyone with a decent computer and an internet connection. This availability of very efficient creative tools can lead to a new era of cybercrime, where forgery and fraud no longer need a lot of money or access to highly talented people.
Another example is AI-powered facial recognition technology, which is becoming increasingly efficient and available on various devices thanks to advances in edge computing. In the wrong hands, facial recognition technology can become a privacy nightmare. Already, several governments, such as China’s, are leveraging the technology to establish an invasive surveillance state and keep strict tabs on their citizens, especially dissidents.
Other ways that narrow AI is already causing damage include algorithmic bias (AI algorithms that blindly imitate and amplify human prejudice) and social media filter bubbles (AI-powered news feeds that only show us what we want to see rather than what we should see). The latter has already shown its destructive power during recent elections, when bad actors gamed the AI algorithms behind Facebook to manipulate its users with micro-targeted political ads.
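Algorithmic bias needs no malice to appear: a model that simply learns historical frequencies reproduces whatever prejudice is baked into its training data. The toy example below uses entirely synthetic "hiring" records to show the mechanism; the groups, counts, and outcomes are all made up.

```python
from collections import Counter

# Synthetic historical hiring records: past decisions were biased,
# with group B hired far less often than group A.
history = [("A", "hired")] * 80 + [("A", "rejected")] * 20 \
        + [("B", "hired")] * 30 + [("B", "rejected")] * 70

def train(records):
    """'Learn' the majority outcome per group -- the naive frequency
    model that many real decision pipelines effectively reduce to."""
    outcomes = {}
    for group in {g for g, _ in records}:
        counts = Counter(o for g, o in records if g == group)
        outcomes[group] = counts.most_common(1)[0][0]
    return outcomes

model = train(history)
print(model)  # the learned rule now rejects group B by default
```

Nothing in the code mentions prejudice; the bias lives entirely in the data, which is exactly why it is so easy to automate and amplify.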
Very soon, AI will be able to hurt people in more direct ways. In 2016, the Defense Advanced Research Projects Agency (DARPA) ran a proof-of-concept capture-the-flag competition called the Cyber Grand Challenge, in which AI algorithms, not human hackers, competed to find and exploit vulnerabilities in each other’s networks. The competition showed a glimpse of what the fast-paced cyberwars of the future might look like. And you don’t need strong AI for that.
And the threat of autonomous weapons is drawing close. These aren’t Terminator-style robots that can speak, think and kill like cold-blooded human killers. They are contemporary drones, firearms, and tanks that use narrow AI technologies such as computer vision and voice recognition to identify, follow and hit targets.
If only our scarce and coveted experts spent more time on finding solutions to these real and immediate problems instead of chasing strong AI dreams (sigh of exasperation).
Do we even need strong AI?
The human brain can do wonderful things, and it is perhaps the most complicated and capable creation that we’ll ever see. But it is also filled with flaws. It is slow at processing information and, compared to computers, very slow at learning. We forget facts and make wrong assumptions based on vague knowledge. Why would we want to go through the pain of reproducing all those flaws in our AI algorithms?
Peter Norvig, Director of Research at Google, says we should focus on tools that can help us rather than on duplicating what we already know how to do. “We want humans and machines to partner and do something that they cannot do on their own,” he says.
He’s right. And that’s exactly what weak AI is for.