This article was published on February 25, 2020

Defining humanlike intelligence and entrusting it with our lives, explained by an AI researcher

TNW Answers is a live Q&A platform where we invite interesting people in tech who are much smarter than us to answer questions from TNW readers and editors for an hour. 

Yesterday, Melanie Mitchell, the author of ‘Artificial Intelligence: A Guide for Thinking Humans’ and the Davis Professor of Complexity at the Santa Fe Institute, hosted a TNW Answers session where she spoke about how much we should really trust AI, her worries surrounding the technology, and defining humanlike intelligence in machines. 

[Read: Chess grandmaster Garry Kasparov predicts AI will disrupt 96% of all jobs]

Most fears around AI stem from Hollywood movies that make us believe autonomous robots will one day kill all humans and claim Earth as their own, or strip human life of meaning by taking our jobs.

“Most of the movies I’ve seen portray AI as smarter than humans, and usually in a malevolent way. This is very far from the truth,” Mitchell said. “Maybe the most plausible portrayal of AI is the computer in the old Star Trek series — it’s able to answer lots of questions in natural language. We’re still quite far from the abilities of this fictional computer, but I could see us getting to increasingly useful question-answering systems over the next decades, given progress in natural language processing.”

While our biggest worry about AI shouldn’t be the possibility of it killing us in our sleep, the technology does come with some real concerns. “I’m quite worried by the generation of fake media, such as deepfakes. I’m also worried about humans trusting machines too much; for example, people might trust self-driving cars to drive in conditions where they cannot safely operate. Also, misuse of technologies like facial recognition. These are only *some* of the issues that worry me.”

The limitations of defining humanlike intelligence 

Today, machine intelligence is commonly described as ‘thinking,’ and while the technology’s potential is exciting, the looseness of that label is another concern for Mitchell.

“‘Thinking’ is a very fuzzy term that’s hard to define rigorously, and it gets used pretty loosely. It’s clear that any ‘thinking’ being done by today’s machine intelligence is very different from the kind of ‘thinking’ that we humans do,” Mitchell explained. “But I don’t think there’s anything in principle that will prevent machines from being able to think; the problem is that we don’t understand our own thinking very well at all, so it’s hard for us to figure out how to make machines think. Turing’s classic 1950 paper on ‘Can Machines Think?’ is a great read on this topic.”

This same principle applies to future predictions of achieving humanlike intelligence in machines. “It’s very hard to define [humanlike intelligence] except by using behavioral measures, such as the ‘Turing Test.’ In fact, this was exactly Turing’s point — we don’t have a good definition of ‘humanlike intelligence’ in humans, so it’s going to be hard to define it rigorously for machines,” Mitchell said. “Assuming there is some reasonable way of defining it, I do think it’s something that could be achieved in principle, but it’s always been ‘harder than we thought,’ because much of what we rely on for our intelligence is invisible to us — our common sense, our reliance on our bodies, our reliance on cultural and social artifacts, and so on.”

Can we entrust AI with decisions that affect our lives?

In Mitchell’s latest book, ‘Artificial Intelligence: A Guide for Thinking Humans,’ one topic covered at length is how much we should trust AI with decisions that directly affect our lives. “We already trust the AI systems that help fly airplanes, for example, and for the most part these are indeed quite trustworthy; however, the 737 MAX problems were a notable exception,” Mitchell said. “But there’s always a human in the loop, and I think that will be essential for any safety-critical application of AI for the foreseeable future. I could see trusting self-driving cars if their operation was limited to areas of cities or highways that had very complete mapping and other infrastructure designed for safety. I think for less constrained driving (and other domains) it will be harder to trust these machines to know what to do in all circumstances.”

Looking ahead, Mitchell predicts many more people will move into the evolutionary computation field over the next decade. “I also think machine learning will be increasingly combined with other methods, such as probabilistic models and perhaps even symbolic AI methods.”
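
To make that last prediction a little more concrete, here is a minimal sketch of one way a learned statistical model and a symbolic rule can be combined: the model proposes an answer, and the rule acts as a hard constraint that can veto it. Everything in it (the toy weights, the ‘car’ rule, the function names) is a hypothetical illustration, not something from Mitchell’s session or book:

```python
import math

def model_score(observation: dict) -> float:
    """Stand-in for a trained model: returns a probability-like score.

    A real system would use a learned network; here a toy linear score
    is squashed into (0, 1) with a sigmoid.
    """
    weights = {"wheels": 0.8, "wings": -1.2}  # hypothetical learned weights
    z = sum(weights.get(k, 0.0) * v for k, v in observation.items())
    return 1 / (1 + math.exp(-z))

def symbolic_check(observation: dict) -> bool:
    """Hard symbolic constraint: a 'car' needs 3+ wheels and no wings."""
    return observation.get("wheels", 0) >= 3 and observation.get("wings", 0) == 0

def classify_car(observation: dict) -> bool:
    """The statistical model proposes; the symbolic rule can veto."""
    return model_score(observation) > 0.5 and symbolic_check(observation)

print(classify_car({"wheels": 4}))              # True: model and rule agree
print(classify_car({"wheels": 4, "wings": 2}))  # False: the rule vetoes
```

The point is purely structural: statistical learning supplies graded judgments from data, while symbolic methods contribute constraints and background knowledge that are hard for a learner to absorb from examples alone.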

You can read the rest of Mitchell’s TNW Answers session here.
