

We need to build AI systems we can trust

Today, artificial intelligence (AI) systems are routinely being used to support human decision-making in a multitude of applications. AI can help doctors to make sense of millions of patient records; farmers to determine exactly how much water each individual plant needs; and insurance companies to assess claims faster. AI holds the promise of digesting large quantities of data to deliver invaluable insights and knowledge.

Yet broad adoption of AI systems will not come from the benefits alone. Many of the expanding applications of AI may be of great consequence to people, communities, or organizations, and it is crucial that we be able to trust their output. What will it take to earn that trust?

Making sure that we develop and deploy AI systems responsibly will require collaboration among many stakeholders, including policymakers and legislators, but instrumenting AI for trust must start with science. We as technology providers have both the ability and the responsibility to develop and apply technological tools to engineer trustworthy AI systems.

I believe researchers like myself need to shoulder this responsibility and help steer AI down the right path. That’s why I’ve outlined below how I think we should approach it.

Designing for trust

To trust an AI system, we must have confidence in its decisions. We need to know that a decision is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure.

Reliability, fairness, interpretability, robustness, and safety are the underpinnings of trusted AI. Yet today, as we develop new AI systems and technologies, we mostly evaluate them using metrics such as train/test accuracy, cross-validation, and cost/benefit ratio.

We monitor usage and real-time performance, but we do not design, evaluate, and monitor for trust. To do so, we must start by defining the dimensions of trusted AI as scientific objectives, and then craft tools and methodologies to integrate them into the AI solution development process.
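As a toy illustration of what evaluating and reporting along these dimensions, rather than accuracy alone, could look like in practice, here is a minimal sketch in Python; the field names, metrics, and numbers are assumptions made for the example, not a standard.

```python
# A minimal sketch of reporting model quality along several trust
# dimensions instead of accuracy alone. The metric names and values
# here are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class TrustReport:
    accuracy: float                  # standard test-set accuracy
    statistical_parity_diff: float   # fairness: gap in favorable-outcome rates between groups
    robust_accuracy: float           # accuracy under a chosen adversarial attack
    explained: bool                  # whether explanations are produced for predictions

    def summary(self) -> str:
        return (f"accuracy={self.accuracy:.3f}, "
                f"statistical_parity_diff={self.statistical_parity_diff:+.3f}, "
                f"robust_accuracy={self.robust_accuracy:.3f}, "
                f"explanations={'yes' if self.explained else 'no'}")

# Example: a model that looks good on accuracy alone but weak on robustness.
report = TrustReport(accuracy=0.94, statistical_parity_diff=-0.08,
                     robust_accuracy=0.61, explained=True)
print(report.summary())
```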

We must learn to look beyond accuracy alone and to measure and report the performance of the system along each of these dimensions. Let’s take a closer look at four major parts of the engineering “toolkit” we have at our disposal to instrument AI for trust.

1. Fairness

The issue of bias in AI systems has received enormous attention recently, in both the technical community and the general public. If we want to encourage the adoption of AI, we must ensure that it does not take on our biases and inconsistencies, and then scale them more broadly.

The research community has made progress in understanding how bias affects AI decision-making and is creating methodologies to detect and mitigate bias across the lifecycle of an AI application: training models; checking data, algorithms, and services for bias; and handling bias when it is detected. While there is much more to be done, we can begin to incorporate bias checking and mitigation principles when we design, test, evaluate, and deploy AI solutions.
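As a concrete illustration of what a basic bias check might look like, the sketch below computes two widely used group-fairness measures, statistical parity difference and disparate impact, on a toy set of predictions. The data, group labels, and the 80% threshold used as an example gate are illustrative assumptions, not recommendations.

```python
# Minimal sketch of a bias check on model predictions, assuming a binary
# "favorable" outcome (1) and a binary protected attribute (group 0 vs 1).
# Values and names are illustrative, not tied to any specific dataset.
import numpy as np

def group_fairness(pred: np.ndarray, protected: np.ndarray):
    """Return (statistical parity difference, disparate impact)."""
    rate_priv = pred[protected == 1].mean()    # favorable rate, privileged group
    rate_unpriv = pred[protected == 0].mean()  # favorable rate, unprivileged group
    spd = rate_unpriv - rate_priv              # 0.0 means parity
    di = rate_unpriv / rate_priv               # 1.0 means parity
    return spd, di

# Toy predictions for 10 people, 5 in each group.
pred = np.array([1, 1, 1, 0, 1,   1, 0, 0, 0, 1])
protected = np.array([1, 1, 1, 1, 1,   0, 0, 0, 0, 0])

spd, di = group_fairness(pred, protected)
print(f"statistical parity difference: {spd:+.2f}")
print(f"disparate impact: {di:.2f}")
if di < 0.8:  # the common "80% rule", used here only as an example gate
    print("warning: possible disparate impact - consider mitigation (e.g. reweighing)")
```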

2. Robustness

When it comes to large datasets, neural nets are the tool of choice for AI developers and data scientists. While deep learning models can exhibit superhuman classification and recognition abilities, they can easily be fooled into making embarrassing and incorrect decisions by adding a small amount of noise to their input, often imperceptible to a human.
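The best-known illustration of this fragility is the fast gradient sign method (FGSM): nudge each input feature slightly in the direction that increases the model's loss. Below is a minimal, self-contained sketch of the idea against a toy logistic-regression classifier; the weights, input, and perturbation size are made up, and with only three features the step is relatively much larger than the imperceptible pixel-level changes that suffice against image models.

```python
# Minimal sketch of the fast gradient sign method (FGSM) against a toy
# logistic-regression classifier. Weights and the example input are made up;
# the point is only to show how a small, targeted perturbation flips a decision.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" linear model: p(y=1 | x) = sigmoid(w.x + b)
w = np.array([3.0, -2.0, 1.0])
b = 0.0

x = np.array([0.4, -0.2, 0.3])    # a correctly classified input (true label y = 1)
y = 1.0

p = sigmoid(w @ x + b)
# Gradient of the cross-entropy loss with respect to the *input* x.
grad_x = (p - y) * w

eps = 0.35                         # perturbation budget (max change per feature)
x_adv = x + eps * np.sign(grad_x)  # FGSM step: push each feature against the true label

print(f"clean prediction:       {sigmoid(w @ x + b):.3f}")      # ~0.87, class 1
print(f"adversarial prediction: {sigmoid(w @ x_adv + b):.3f}")  # ~0.45, flips to class 0
```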

Exposing and fixing vulnerabilities in software systems is something the technical community has been dealing with for a while, and the effort carries over into the AI space.

Recently, there has been an explosion of research in this area: new attacks and defenses are continually being identified, and new adversarial training methods to strengthen models against attack, along with new metrics to evaluate robustness, are being developed. We are approaching a point where we can start integrating them into generic AI DevOps processes to protect and secure realistic, production-grade neural nets and the applications built around them.
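As one illustrative way such metrics could enter a DevOps process, the sketch below measures robust accuracy, i.e. accuracy on FGSM-perturbed inputs, next to clean accuracy and reports a hypothetical build gate as failed if it drops too far. The model, evaluation data, perturbation budget, and threshold are all assumptions made for the sake of the example.

```python
# Illustrative sketch of a robustness check that could run alongside ordinary
# accuracy tests in a CI/DevOps pipeline. Model, data, and threshold are
# made up for illustration; a real pipeline would plug in the production model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([3.0, -2.0, 1.0]), 0.0        # toy linear classifier
X = rng.normal(size=(200, 3))                 # toy evaluation set
y = (X @ w + b > 0).astype(float)             # labels the model gets right when clean

def accuracy(X, y):
    return ((sigmoid(X @ w + b) > 0.5) == y).mean()

def fgsm(X, y, eps):
    grad = (sigmoid(X @ w + b) - y)[:, None] * w   # d(loss)/d(input) per example
    return X + eps * np.sign(grad)

clean_acc = accuracy(X, y)
robust_acc = accuracy(fgsm(X, y, eps=0.3), y)

print(f"clean accuracy:  {clean_acc:.2f}")
print(f"robust accuracy: {robust_acc:.2f}")
if robust_acc < 0.5:   # hypothetical gate threshold
    print("robustness gate FAILED: model too fragile under FGSM")
else:
    print("robustness gate passed")
```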

3. Explaining algorithmic decisions

Another issue that has been on the forefront of the discussion recently is the fear that machine learning systems are “black boxes,” and that many state-of-the-art algorithms produce decisions that are difficult to explain.

A significant body of new research has proposed techniques to provide interpretable explanations of black-box models without compromising their accuracy. These include local and global techniques for interpreting models and their predictions, training methods that yield interpretable models, visualization of information flow in neural nets, and even teaching machines to generate explanations.
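To give a flavor of one such family, a local surrogate explanation in the spirit of LIME can be sketched in a few lines: sample perturbations around the instance being explained, query the black box, and fit a simple proximity-weighted linear model whose coefficients serve as the explanation. The black-box function, kernel width, and sample count below are illustrative assumptions.

```python
# Sketch of a local surrogate explanation (in the spirit of LIME):
# approximate a black-box model around one instance with a proximity-weighted
# linear model and read the explanation off its coefficients.
# The black-box function and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def black_box(X):
    """Stand-in for an opaque model: a nonlinear score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - X[:, 2])))

def explain_locally(x, n_samples=500, sigma=0.3, kernel_width=0.5):
    # 1. Sample perturbations around the instance to be explained.
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = black_box(Z)
    # 2. Weight samples by proximity to x (closer samples matter more).
    d2 = ((Z - x) ** 2).sum(axis=1)
    weights = np.exp(-d2 / kernel_width ** 2)
    # 3. Fit a weighted linear surrogate: solve (A^T W A) beta = A^T W y.
    A = np.hstack([np.ones((n_samples, 1)), Z])   # intercept + features
    W = np.diag(weights)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[1:]                               # per-feature local effect

x = np.array([0.2, -0.5, 1.0])
print("local feature attributions:", np.round(explain_locally(x), 3))
```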

We must incorporate these techniques into AI model development and DevOps workflows to provide diverse explanations to developers, enterprise engineers, users, and domain experts.

4. Safety

Human trust in technology is based on our understanding of how it works and our assessment of its safety and reliability. We drive cars trusting the brakes will work when the pedal is pressed. We undergo eye laser surgery trusting the system to make the right decisions.

In both cases, trust comes from confidence that the system will not make a mistake, thanks to system training, exhaustive testing, experience, safety measures and standards, best practices and consumer education. Many of these principles of safety design apply to the design of AI systems; some will have to be adapted, and new ones will have to be defined.

For example, we could design AI to require human intervention if it encounters completely new situations in complex environments. And, just as we use safety labels for pharmaceuticals and foods, or safety datasheets in computer hardware, we may begin to see similar approaches for communicating the capabilities and limitations of AI services or solutions.
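A minimal sketch of that "ask a human" pattern is shown below: when the model's confidence falls under a threshold, for instance because an input looks unlike anything seen during training, the case is routed to a person instead of being decided automatically. The threshold and the example outputs are illustrative assumptions, not recommended settings.

```python
# Minimal sketch of deferring to a human when the model is not confident
# enough to decide on its own. The threshold and the model outputs here
# are illustrative assumptions.
import numpy as np

CONFIDENCE_THRESHOLD = 0.85   # below this, a person reviews the case

def decide(probabilities: np.ndarray) -> str:
    """Return the automated decision, or flag the case for human review."""
    confidence = probabilities.max()
    if confidence < CONFIDENCE_THRESHOLD:
        return "defer_to_human"
    return f"class_{probabilities.argmax()} (confidence {confidence:.2f})"

# A familiar case the model handles on its own, and an uncertain case it should not.
print(decide(np.array([0.02, 0.95, 0.03])))   # -> class_1 (confidence 0.95)
print(decide(np.array([0.40, 0.35, 0.25])))   # -> defer_to_human
```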

Evolving AI in an agile and open way

Every time a new technology is introduced, it creates new challenges, safety issues, and potential hazards. As the technology develops and matures, these issues are better understood and gradually addressed.

For example, when pharmaceuticals were first introduced, there were no safety tests, quality standards, childproof caps, or tamper-resistant packages. AI is a new technology and will undergo a similar evolution.

Recent years have brought extraordinary advances in terms of technical AI capabilities. The race to develop better, more powerful AI is underway. Yet our efforts cannot be solely directed towards making impressive AI demonstrations. We should invest in capabilities that will make AI not just smart, but also responsible.

As we move forward, I believe researchers, engineers, and designers of AI technologies should be working with users, stakeholders, and experts from a range of disciplines to understand their needs, to continually assess the impact and implications of algorithmic decision-making, to share findings, results and ideas, and address issues proactively, in an open and agile way. Together, we can create AI solutions that inspire confidence.
