This article was published on November 14, 2022

Why AI governance is important for building more trustworthy, explainable AI

Avoid biased algorithms, negative press, and lawsuits



Content provided by IBM and TNW

The dangers of robots evolving beyond our control are well-documented in sci-fi movies and TV — Her, Black Mirror, Surrogates, I, Robot, need we go on?

While this may seem like a far-off fantasy, FICO’s 2021 State of Responsible AI report found that 65% of companies actually can’t explain how specific AI model decisions or predictions are made.

While AI is undeniably helping to propel our businesses and society forward at lightning speed, we’ve also seen the negative impacts a lack of oversight can bring.

Study after study has shown that AI-driven decision-making can potentially lead to biased outcomes, from racial profiling in predictive policing algorithms to sexist hiring decisions.

As governments and businesses adopt AI tools at a rapid rate, AI ethics will touch many aspects of society. Yet, according to the FICO report, 78% of companies said they were “poorly equipped to ensure the ethical implications of using new AI systems,” and only 38% had data bias detection and mitigation steps.

As is usual with disruptive technologies, the speed of AI development has quickly outpaced the speed of regulation. But, in the race to adopt AI, what many companies are starting to realize is that regulators are now catching up. A number of lawsuits have already been leveled against companies for either developing or simply using biased AI algorithms.

Companies are feeling the heat of AI regulation

This year the EU unveiled the AI Liability Directive, a bill that will make it easier to sue companies for harm caused by their AI systems, part of a wider push to prevent companies from developing and deploying harmful AI. The directive adds an extra layer to the proposed AI Act, which will require extra checks for “high-risk” uses of AI, such as policing, recruitment, or healthcare. Unveiled earlier this month, the bill is likely to become law within the next few years.

While some worry the AI Liability Directive will curb innovation, its purpose is to hold AI companies accountable and require them to explain how their AI systems are built and trained. Tech companies that fail to comply risk Europe-wide class actions.

While the US has been slower to adopt protective policies, the White House also released its Blueprint for an AI Bill of Rights earlier this month, which outlines how consumers should be protected from harmful AI:

  1. Artificial intelligence should be safe and effective
  2. Algorithms should not discriminate
  3. Data privacy must be protected
  4. Consumers should be aware when AI is being used
  5. Consumers should be able to opt out of using it and speak to a human instead

But there’s a catch. “It’s important to realize that the AI Bill of Rights is not binding legislation,” writes Sigal Samuel, a senior reporter at Vox. “It’s a set of recommendations that government agencies and technology companies may voluntarily comply with — or not. That’s because it’s created by the Office of Science and Technology Policy, a White House body that advises the president but can’t advance actual laws.”

With or without strict AI regulations, a number of US-based companies and institutions have already faced lawsuits for unethical AI practices.

And it’s not just legal fees companies need to be concerned about. Public trust in AI is waning. A study by Pew Research Center asked 602 tech innovators, developers, business and policy leaders, “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” 68% didn’t think so.

Whether or not a business loses a legal battle over allegations of biased AI, the damage such an incident can do to its reputation can be just as costly.

While this puts a dreary light on the future of AI, all is not lost. IBM’s Global AI Adoption Index found that 85% of IT professionals agree that consumers are more likely to choose a company that’s transparent about how its AI models are built, managed, and used.

Businesses that take the steps to adopt ethical AI practices could reap the rewards. So why are so many slow to take the plunge?

The problem may be that, while many companies want to adopt ethical AI practices, they don’t know where to start. We spoke with Priya Krishnan, who leads the Data and AI product management team at IBM, to find out how building a strong AI governance model can help.

AI governance

According to IBM, “AI governance is the process of defining policies and establishing accountability to guide the creation and deployment of AI systems in an organization.”

“Before governance, people were moving straight from experiments to production in AI,” says Krishnan. “But then they realized, ‘well, wait a minute, this is not the decision I expect the system to make. Why is this happening?’ They couldn’t explain why the AI was making certain decisions.”

AI governance is really about making sure that companies are aware of what their algorithms are doing — and have the documentation to back it up. This means tracking and recording how an algorithm is trained, the parameters used in the training, and any metrics used during the testing phases.
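In practice, even a lightweight record goes a long way. Here’s a minimal sketch in Python of what logging a single training run might look like; the field names, model, and workflow are illustrative assumptions for this article, not any specific product’s schema:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TrainingRunRecord:
    """One auditable record per training run: who trained what, on which
    data, with which parameters, and how it scored during testing."""
    model_name: str
    trained_by: str
    dataset_fingerprint: str  # hash of the exact training data snapshot
    hyperparameters: dict
    test_metrics: dict        # e.g. accuracy, plus any fairness checks
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def fingerprint(data: bytes) -> str:
    """Fingerprint the training data so an auditor can verify it later."""
    return hashlib.sha256(data).hexdigest()

# Example: log a (hypothetical) hiring-model training run.
record = TrainingRunRecord(
    model_name="candidate-screening-v3",
    trained_by="data-science-team",
    dataset_fingerprint=fingerprint(b"...training data snapshot..."),
    hyperparameters={"learning_rate": 0.01, "max_depth": 6},
    test_metrics={"accuracy": 0.91, "selection_rate_gap": 0.04},
)

# Append-only JSON log that can be pulled in the case of an audit.
with open("model_audit_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```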

Having this in place makes it easy for companies to understand what’s going on beneath the surface of their AI systems, and lets them quickly pull documentation in the case of an audit. Krishnan pointed out that this transparency also helps to break down knowledge silos within a company.

“If a data scientist leaves the company and you don’t have the past information plugged into these processes, it’s very hard to manage. Those looking into the system won’t know what happened. So this process of documentation just provides basic sanity around what’s going on, and makes it easier to explain it to other departments within the organization (like risk managers).”

While regulations are still being developed, adopting AI governance now is an important step to what Krishnan refers to as “future-proofing”:

“[Regulations are] coming fast and strong. Now people are producing manual documents for auditing purposes after the fact,” she says. Instead, starting to document now can help companies prepare for any upcoming regulations.

The innovation vs governance debate

Companies may face increasing competition to innovate fast and be first to market. So won’t taking the time for AI governance slow down this process and stifle innovation?

Krishnan argues that AI governance no more stops innovation than brakes stop someone from driving: “There is traction control in a car, there are brakes in a car. All of these are designed to make you go faster, safely. That’s how I would think about AI governance. It’s really to get the most value from your AI, while making sure there are guardrails to help you as you innovate.”

And this lines up with the biggest reason of all to adopt AI governance: it just makes business sense. No one wants faulty products and services. Setting clear and transparent documentation standards, checkpoints, and internal review processes to mitigate bias can ultimately help businesses create better products and improve speed to market.
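One concrete form such a checkpoint can take is the “four-fifths rule” borrowed from US employment guidelines: no group’s positive-outcome rate should fall below 80% of the most-favored group’s rate. The sketch below is a hypothetical illustration of that kind of pre-deployment check, with made-up data and a threshold you’d tune to your own context:

```python
from collections import defaultdict

def disparate_impact_check(predictions, groups, threshold=0.8):
    """Flag a model whose positive-outcome rate for any group falls below
    `threshold` times the most-favored group's rate (four-fifths rule)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    ratios = {g: rate / best for g, rate in rates.items()}
    failing = {g: r for g, r in ratios.items() if r < threshold}
    return ratios, failing

# Example with made-up screening decisions (1 = advance, 0 = reject).
preds  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratios, failing = disparate_impact_check(preds, groups)
if failing:
    print(f"Bias checkpoint failed for groups: {failing}")  # block deployment
```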

Still not sure where to start?

Just this month the tech giant launched IBM AI Governance, a one-stop solution for companies struggling to get a better understanding of what’s going on below the surface of these systems. The tool uses automated software that works with companies’ data science platforms to develop a consistent and transparent model management process, tracking development time, metadata, post-deployment monitoring, and customized workflows. This takes pressure off data science teams, allowing them to focus on other tasks, and gives business leaders a constant view of their models, along with the documentation they’d need in the case of an audit.
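The article doesn’t detail how IBM AI Governance implements monitoring, but as a general illustration, post-deployment monitoring often starts with something as simple as comparing a model’s live prediction distribution against what was seen at validation time and alerting when the drift exceeds a threshold. A minimal, hypothetical sketch, with illustrative names and numbers:

```python
def prediction_drift(baseline_rates, live_rates, alert_at=0.1):
    """Compare the share of each predicted class at validation time vs. in
    production. Both inputs map class label -> fraction of predictions
    (summing to 1); total variation distance above `alert_at` raises a flag."""
    labels = set(baseline_rates) | set(live_rates)
    tvd = 0.5 * sum(
        abs(baseline_rates.get(l, 0.0) - live_rates.get(l, 0.0)) for l in labels
    )
    return tvd, tvd > alert_at

# Hypothetical loan model: the approval rate has drifted since validation.
baseline = {"approve": 0.55, "reject": 0.45}
live     = {"approve": 0.38, "reject": 0.62}
tvd, alert = prediction_drift(baseline, live)
if alert:
    print(f"Drift alert: total variation distance {tvd:.2f}")  # investigate or retrain
```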

This is a particularly good option for companies that are using AI across the organization and don’t know what to focus on first.

“Before you buy a car, you want to try it out. At IBM, we invested in a team of engineers that help our clients take AI governance for a test drive to help them get started. In just weeks, the IBM Client Engineering team can help teams innovate with the latest AI Governance technology and approaches using their business models and data. It’s an investment in our clients to quickly co-create using IBM technology so they can get started quickly,” Krishnan says.
