
When a panel of renowned AI experts was asked whether it would be possible for machines to develop superintelligence, the answer was unanimous: yes. It seems there's no longer a debate over whether computers will become more intelligent than humans, only when.
The panel, held earlier this year in California, comprised a "who's who" of science and philosophy in the AI space:
- Bart Selman (Cornell)
- David Chalmers (NYU)
- Elon Musk (Tesla, SpaceX)
- Jaan Tallinn (CSER/FLI)
- Nick Bostrom (FHI)
- Ray Kurzweil (Google)
- Stuart Russell (Berkeley)
- Sam Harris
- Demis Hassabis (DeepMind)
While it's theoretically possible, given what we understand about the laws of physics, for a computer to surpass human intelligence to the point at which the term "superintelligence" becomes applicable, surely the odds of that happening have to be slim to none. Right?
It's actually likely, according to every member of that panel. When asked to answer the question "Is it likely AI will reach superintelligence?" with a yes, no, or "it's complicated," each member responded "yes."
Elon Musk even pretended to disagree with the other panelists, much to the delight of the audience and a few of the brains on stage. Perhaps that's because an awkwardness arises when extremely intelligent people are entirely on the same page; it's hard to do science without debate.
Superintelligence?
If we're reconciled to the idea that AI will reach "superintelligence," it's time we understood what that means and when it's going to happen.
Professor Nick Bostrom literally wrote the book on superintelligence. In that text, aptly titled "Superintelligence," he defines the concept as "a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds."
It'll be smarter than we are, which means we won't be able to understand it then, and we certainly can't now.
The obvious truth about AI is that no one can predict what's going to happen in the long term. It's arguable that machine-learning advances are occurring so quickly it may be naive to think we know what's going to happen in the next six months.
We're in uncharted territory.
But what's any of this got to do with Stephen Hawking?
In a recent interview with Wired magazine, Professor Hawking called AI a technological revolution:
Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one, industrialization. And surely we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization.
He isn't the only one who believes AI will change our lives forever. Professor Nick Bostrom says AI represents the third "fundamental change in the human condition."
The revolution
The first, according to Bostrom, was the agricultural revolution, which was followed by the industrial revolution. If AI exceeds human intelligence, weâll reach the next technological revolution.
It can be very difficult to reckon with the "fact" that AI is going to have an impact as big as the industrial revolution, especially since most of the headlines read like science-fiction horror.
Professor Hawking, in the same interview, also said:
I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.
Professor Hawking probably wasn't suggesting that we cease researching AI and become Luddites.
It's simply easier to wrap our heads around the idea of killer robots taking over the world; we've seen that movie a dozen times already. Furthermore, a vision of the future is always easier to see through a lens of destruction; otherwise it's distorted by reality.
The world of tomorrow that cinema and science fiction have painted is typically one that looks like today, but with cooler gadgets and adolescent-quality slang. Trying to explain what a world truly revolutionized by artificial intelligence will look like is a difficult endeavor.
Earlier this month, Professor Bostrom told members of UK Parliament's artificial intelligence committee that there's simply no way to do that:
As with any new general purpose technology it might very well be that the most exciting applications are not obvious at the outset, and are only discovered as people can start to play with the technology.
Headlines, like the one above this article, that tell us Stephen Hawking and Elon Musk think AI could destroy the human race are quite often the fast-food version of the actual conversations.
The real calories lie in the excitement and earnest optimism that Musk and Hawking both dish out concerning machine-learning technology. Professor Hawking believes AI could cure disease. Musk is heavily involved in OpenAI, an organization whose mission states:
Artificial general intelligence (AGI) will be the most significant technology ever created by humans.
Hyperbole and fear will make naysayers of us all, if we only let them. The more practical approach, the one even Elon "AI Will Start WWIII" Musk seems to be taking, is to move forward with caution, optimism, and the best interests of all humankind in mind.