
This article was published on April 23, 2017

Artificial Intelligence has to deal with its transparency problems



Artificial Intelligence breakthroughs entail new challenges and problems. As AI algorithms grow more advanced, it becomes harder to make sense of their inner workings. Part of this is because the companies that develop them do not allow scrutiny of their proprietary algorithms. But much of it comes down to a simpler fact: the systems themselves are becoming too complex to interpret.

And this can turn into a problem as we move forward and Artificial Intelligence becomes more prominent in our lives.

By a wide margin, AI algorithms outperform their human counterparts at the tasks they master. Self-driving cars, for instance, which rely heavily on machine learning algorithms, could eventually prevent up to 90 percent of road accidents, the vast majority of which are caused by human error. AI diagnosis platforms spot early signs of dangerous illnesses far better than humans do, helping save lives. And predictive maintenance can detect signs of wear in machinery and infrastructure in ways that are impossible for humans, preventing disasters and reducing costs.

But AI is not flawless, and it does make mistakes, albeit at a lower rate than humans. Last year, the AI-powered opponent in the game Elite Dangerous went berserk and started crafting super-weapons to hunt players. In another case, Microsoft’s AI chatbot Tay started spewing racist comments within a day of its launch. And remember the time Google’s image recognition applied offensive labels to photos of people?


None of these mistakes were critical, and the damage could be shrugged off without much thought. However, neural networks, machine learning algorithms, and other subsets of AI are finding their way into more critical domains, such as healthcare, transportation, and law, where mistakes can have severe and sometimes fatal consequences.

We humans make mistakes all the time, including fatal ones. But the difference is that we can explain the reasons behind our actions and bear responsibility for them. Even the software we used before the age of AI was built on explicit code and rule-based logic: mistakes could be examined and reasoned out, and culpability could be clearly assigned.

The same can’t be said of Artificial Intelligence. In particular, neural networks, the key component in many AI applications, are something of a “black box.” Often, not even the engineers who built them can explain why they made a particular decision. Last year, Google’s Go-playing AI AlphaGo stunned the world by coming up with moves that professional players hadn’t conceived of.
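To make the contrast with rule-based software concrete, here is a minimal, hypothetical sketch in Python (the loan-approval task, the numbers, and the function names are all invented for illustration; it assumes scikit-learn and NumPy are installed). The rule-based version can be read and audited line by line, while a small network trained to make the very same decisions “explains” itself only as matrices of learned weights.

```python
# Illustrative sketch: auditable rules vs. an opaque learned model.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Rule-based logic: every branch can be read, tested, and blamed if wrong.
def rule_based_loan_decision(income, debt):
    return "approve" if income > 2 * debt else "deny"

# Learned logic: train a tiny network on labels produced by that same rule.
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(500, 2))   # columns: income, debt
y = (X[:, 0] > 2 * X[:, 1]).astype(int)  # 1 = approve, 0 = deny

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict([[60, 20]]))  # almost certainly [1] (approve), but why?
# The only "explanation" the model offers is its learned weight matrices,
# which contain no human-readable rules:
for layer, w in enumerate(net.coefs_):
    print(f"layer {layer} weights, shape {w.shape}:\n{w}")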

As Nils Lenke, Senior Director of Corporate Research at Nuance, says about neural networks, “It’s not always clear what happens inside — you let the network organize itself, but that really means it does organize itself: it doesn’t necessarily tell you how it did it.”

This can cause problems when such algorithms have full control over decisions. Who will be responsible if a self-driving car causes a fatal accident? You can’t hold the passengers accountable for something they didn’t control, and the manufacturer will have a hard time explaining an event that involves so many complexities and variables. And don’t expect the car itself to start explaining its actions.

The same can be said of an AI application that has autonomous control over a patient’s treatment process. Or a risk assessment algorithm that decides whether convicts stay in prison or are free to go.

So can we trust Artificial Intelligence to make decisions on its own? For non-critical tasks, such as advertising, games, and Netflix suggestions, where mistakes are tolerable, we can. But for situations where the social, legal, economic, and political repercussions can be disastrous, we can’t yet. The same goes for scenarios where human lives are at stake. We’re still not ready to forfeit control to the robots.

As Lenke says, “[Y]ou need to look at the tasks at hand. For some, it’s not really critical if you don’t fully understand what happens, or even if the network is wrong. A system that suggests music, for example: all that can go wrong is, you listen to [a] boring piece of music. But with applications like enterprise customer service, where transactions are involved, or computer-assisted clinical documentation improvement, what we typically do there is, we don’t put the AI in isolation, but we have it co-work with a human being.”

For the moment, Artificial Intelligence shows its full potential in complementing human efforts. We’re already seeing inroads in fields such as medicine and cybersecurity: AI takes care of data-heavy research and analysis and presents human experts with invaluable insights and suggestions. The experts then make the decisions and assume responsibility for the possible consequences.
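Here is a sketch of what that human-in-the-loop pattern might look like in code (a hypothetical illustration; the confidence threshold, class names, and labels are all invented): the model acts on cases it is confident about and escalates the rest to a human expert, who retains the final say and the accountability.

```python
# Illustrative human-in-the-loop routing: the AI proposes, a person decides.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # the model's own probability estimate, 0.0 to 1.0

def route_decision(pred: Prediction, threshold: float = 0.95) -> str:
    # Above the threshold, the suggestion is applied automatically; below it,
    # a human expert reviews the case and takes responsibility for the call.
    if pred.confidence >= threshold:
        return f"auto-apply: {pred.label}"
    return f"escalate to human reviewer (model suggests: {pred.label})"

print(route_decision(Prediction("benign lesion", 0.99)))     # auto-applied
print(route_decision(Prediction("malignant lesion", 0.72)))  # escalated
```

The threshold itself becomes a policy decision: the more critical the domain, the more cases get routed to a person.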

In the meantime, firms and organizations must do more to make Artificial Intelligence transparent and understandable. One example is OpenAI, a nonprofit research company founded by Tesla’s Elon Musk and Y Combinator’s Sam Altman. As the name suggests, OpenAI’s goal is to open up AI research and development to everyone, independent of financial interests.

Another organization, the Partnership on AI, aims to raise awareness of and address AI challenges such as bias. Founded by tech giants including Microsoft, IBM, and Google, the Partnership will also work on AI ethics and best practices.

Eventually, for better or worse, we’ll achieve Artificial General Intelligence: AI that is on par with the human brain. Maybe then our cars and robots will be able to go to court and stand trial for their actions. But by then, we’ll be dealing with entirely different problems.

That’s for the future. In the present, human-dominated world, whoever makes critical decisions has to be either flawless or accountable. For the moment, AI is neither.
