When a panel of renowned AI experts was asked whether it would be possible for machines to develop superintelligence, the answer was unanimous: yes. It seems there's no longer a debate about whether computers will become more intelligent than humans, only about when.
The panel, held earlier this year in California, comprised a "who's who" of science and philosophy in the AI space:
- Bart Selman (Cornell)
- David Chalmers (NYU)
- Elon Musk (Tesla, SpaceX)
- Jaan Tallinn (CSER/FLI)
- Nick Bostrom (FHI)
- Ray Kurzweil (Google)
- Stuart Russell (Berkeley)
- Sam Harris
- Demis Hassabis (DeepMind)
While it's theoretically possible, given what we understand about the laws of physics, for a computer to surpass human intelligence to the point at which the term "superintelligence" becomes applicable, surely the odds of that happening have to be slim to none. Right?
It's actually likely, according to every member of that panel. When asked to answer the question "is it likely AI will reach superintelligence?" with a yes, no, or "it's complicated," each member responded "yes."
Elon Musk even pretended to disagree with the other panelists, much to the delight of the audience and a few of the brains on stage. Perhaps that's because an awkwardness sets in when extremely intelligent people are entirely on the same page; it's hard to do science without debate.
If we're reconciled to the idea that AI will reach "superintelligence," it's time we understood what that means and when it's going to happen.
Professor Nick Bostrom is the guy who literally wrote the book on superintelligence. In his text, aptly named “Superintelligence,” he defines the concept as “a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds.”
It’ll be smarter than we are, which means we won’t be able to understand it then — and we certainly can’t now.
The obvious truth about AI is that no one can predict what’s going to happen in the long term. It’s arguable that machine-learning advances are occurring so quickly it may be naive to think we know what’s going to happen in the next six months.
We’re in uncharted territory.
But what’s any of this got to do with Stephen Hawking?
Professor Hawking, in a recent interview in Wired magazine, says AI is a technological revolution:
> Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialization. And surely we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization.
He isn’t the only one who believes AI will change our lives forever. Professor Nick Bostrom says AI represents the third “fundamental change in the human condition.”
The first, according to Bostrom, was the agricultural revolution, which was followed by the industrial revolution. If AI exceeds human intelligence, we’ll reach the next technological revolution.
It can be very difficult to reckon with the "fact" that AI is going to have an impact as big as the industrial revolution – especially since most of the headlines read like science-fiction horror.
Professor Hawking, in the same interview, also said:
> I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.
Professor Hawking probably wasn’t suggesting that we cease researching AI and become Luddites.
It's simply easier to wrap our heads around the idea of killer robots taking over the world. We've seen that movie a dozen times already. Furthermore, a vision of the future is always easier to see through a lens of destruction; otherwise it's distorted by reality.
The world of tomorrow that cinema and science fiction have painted is typically one that looks like today, but with cooler gadgets and adolescent-quality slang. Trying to explain what a world that's been revolutionized by artificial intelligence will really look like is a difficult endeavor.
Professor Bostrom, earlier this month, told members of the UK Parliament's artificial intelligence committee that there's simply no way to do that:
> As with any new general-purpose technology, it might very well be that the most exciting applications are not obvious at the outset, and are only discovered as people can start to play with the technology.
Headlines — like the one above this article — that tell us Stephen Hawking and Elon Musk think AI could destroy the human race are, quite often, the fast-food version of the actual conversations.
The real calories lie in the excitement and optimism that Musk and Hawking both dish out concerning machine-learning technology. Professor Hawking believes AI could cure disease. Musk is heavily involved in OpenAI, an organization whose mission states:
> Artificial general intelligence (AGI) will be the most significant technology ever created by humans.
Hyperbole and fear will make naysayers of us all if we only let them. The more practical approach – the one even Elon "AI Will Start WWIII" Musk seems to be taking – is to move forward with caution, optimism, and the best interests of all humankind in mind.