This article was published on February 1, 2016

Just because robots can’t enslave us today doesn’t mean A.I. will be safe forever


Every once in a while, an op-ed gets published that is so arrogantly out-of-touch with reality that it might have been written by somebody with the intelligence of a mouse.

We’ve got nothing – absolutely nothing – to fear from intelligent machines, Luciano Floridi tries to reassure us in his recent Financial Times column. Floridi, a professor of philosophy and ethics of information at Oxford University, denies that computers are an “existential threat” to humanity.

Computers, Floridi argues categorically, have “no understanding, no consciousness, no intuitions.” Computers don’t possess autonomous mental life, he insists. They lack “the intelligence of a mouse.”

And so “in the final analysis,” Floridi concludes about our age of smart machines, “humans, and not smart machines, are the problem.”


Okay. So Floridi isn’t, exactly, a mouse. But he is an academic philosopher so flagrantly disconnected from what’s happening in the real world that it may be he – rather than smart machines – who lacks intelligence.

So what about those iconic scientists whose viewpoints Floridi so casually dismisses – Elon Musk and Stephen Hawking? Both have made significant contributions to science, and both have spoken publicly on the potential dangers of AI, warning that computers can, indeed, learn to think for themselves. In 2014, Hawking went so far as to warn that the development of full artificial intelligence – with its ability to enslave mankind – is a threat to the future of humanity.

But, of course, Floridi is far too glib to believe that machines can develop their own consciousness. “The same is true of the appearance of the Four Horsemen of the Apocalypse,” he writes, dismissing Hawking’s fear that full artificial intelligence might mark the end of the human race.

Such hubris is staggering. Both Musk and Hawking are smart enough to know what they don’t know. They fear that technology might enable the smart machine to think for itself. This doesn’t translate to Luddism, fear-mongering or even being anti-innovation; in fact, in December 2015, Musk co-founded OpenAI, a nonprofit AI research company with the stated goal of “[advancing] digital intelligence in the way that is most likely to benefit humanity as a whole.”

No, neither Hawking nor Musk is a feverish hand-wringer. Nor are Bill Gates, Eric Schmidt or the other attendees of the recent World Economic Forum at Davos who went to sessions with such ominous titles as ‘Life in 2030: Humankind and the Machine’, ‘Infusing Emotional Intelligence Into AI’ and ‘What If: Robots Go to War?’

“I care about my children”

Some of the leading figures in the world of science and technology have made the very reasonable statement that certain aspects of AI may bring about unintended consequences at some point in the future. Jaan Tallinn, the Estonian co-founder of Skype and Kazaa, cares so much about this threat that he has even co-founded the Centre for the Study of Existential Risk at Cambridge University to study it.

“Why do you care?” I asked Tallinn when we met last month in Estonia.

“I care about my children,” he confided.

So why, exactly, does Floridi find all these fears of all these experts – from Hawking and Musk to Gates, Schmidt and Tallinn – so immaterial?

The Oxford philosopher appears to reject all these fears of AI because of the failure of computers to pass the famous Turing test. He uses as proof an illogical answer given by one computer during the 2014 Loebner Prize in Artificial Intelligence competition (not the 2015 competition which he mistakenly cites). From this, Floridi extrapolates that “anxieties about super-intelligent machines are, therefore, scientifically unjustified” – an awfully unscientific presumption based on one small piece of data.

Eric Schmidt believes that the Turing test will be passed by 2018. But, just as with Hawking, Musk and the other concerned technologists, Floridi dismisses Schmidt’s prediction. “We shall see,” he says of the 2018 date. “So far there has been no progress.”

No progress? Seriously?

The very computer whose answer Floridi uses to dismiss the notion of AI outright scored 107 out of a possible 120 points in the Loebner competition, on a scale that evaluated answers for relevance, correctness and clarity of expression. The Washington Post reports that robots can now adequately imitate members of Congress. Some programmers even claim to have already passed the Turing test.

I don’t know what progress looks like to Floridi, but there’s no doubt that machines have become a lot smarter over the last 25 years and will continue to radically gain in intelligence with the doubling of processing power every 18 months. For all of his assertions about what the problems aren’t, Floridi has very little to say about what the problems are.
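
To put that 18-month doubling in perspective, here’s a back-of-the-envelope sketch in Python – purely illustrative, and assuming, for the sake of the arithmetic, that the doubling cadence holds steady:

    # Purely illustrative: compound growth implied by a doubling of
    # processing power every 18 months, over the 25-year window
    # mentioned above. The steady cadence is an assumption.
    years = 25
    doublings = years * 12 / 18      # roughly 16.7 doublings
    multiplier = 2 ** doublings      # roughly 100,000x
    print(f"{doublings:.1f} doublings over {years} years "
          f"is a factor of about {multiplier:,.0f}")

By that arithmetic, a machine today would pack something like a hundred thousand times the processing power of its counterpart from 25 years ago.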

“We should stop worrying about science fiction and start focusing on the actual challenges that AI poses,” he says. But he doesn’t tell us what those may be, and instead turns the issue back on us, saying that it’s dumb humans, and not smart machines, who are the real problem facing the future.

“No AI version of Godzilla is about to enslave us,” Floridi writes. Yes, he’s right that society isn’t in imminent danger of being taken over by evil, sentient machines. But Hawking, Gates, Musk and Tallinn aren’t writing a script for a Hollywood science fiction movie. Their fear, to borrow from Donald Rumsfeld, is a series of unknown unknowns that could enable a self-conscious machine.

The fact that something hasn’t happened yet doesn’t rule out the chance that it will. Given the unrelenting logic of Moore’s Law (a law which, for some reason, the Oxford philosopher questions), it’s just common sense that machines will keep getting dramatically more capable, and anyone who says they don’t see that, including Floridi himself, is either willfully ignorant or woefully naïve. I’m not sure which is more troubling.

In our age of increasingly smart machines, it’s Mickey Mouse philosophers like Luciano Floridi, I’m afraid, who may be the problem.
