
This article was published on June 9, 2023

Inside Google DeepMind’s approach to AI safety

The lab's COO shares her strategy


Image by: Anamul Rezwan (edited)

This article features an interview with Lila Ibrahim, COO of Google DeepMind. Ibrahim will be speaking at TNW Conference, which takes place on June 15 & 16 in Amsterdam. If you want to experience the event (and say hi to our editorial team!), we’ve got something special for our loyal readers. Use the promo code READ-TNW-25 and get a 25% discount on your business pass for TNW Conference. See you in Amsterdam!

AI safety has become a mainstream concern. The rapid development of tools like ChatGPT and deepfakes has sparked fears about job losses, disinformation — and even annihilation. Last month, a warning that artificial intelligence posed a “risk of extinction” attracted newspaper headlines around the world.

The warning came in a statement signed by more than 350 industry heavyweights. Among them was Lila Ibrahim, the Chief Operating Officer of Google DeepMind. As a leader of the pioneering AI lab, Ibrahim has a front-row view of the threats — and opportunities.

DeepMind has delivered some of the field’s most striking breakthroughs, from conquering complex games to revealing the structure of the protein universe.


The company’s ultimate mission is to create artificial general intelligence, a nebulous concept that broadly refers to machines with human-level cognitive abilities. It’s a visionary ambition that needs to remain grounded in reality — which is where Ibrahim comes in. 

In 2018, Ibrahim was appointed as DeepMind’s first-ever COO. She oversees business operations and growth, with a strong focus on building AI responsibly.

“New and emerging risks — such as bias, safety and inequality — should be taken extremely seriously,” Ibrahim told TNW via email. “Similarly, we want to make sure we’re doing what we can to maximize the beneficial outcomes.”

Prior to joining DeepMind, Ibrahim was COO of Coursera, where she helped open up access to education. Credit: Google DeepMind

Much of Ibrahim’s time is dedicated to ensuring that the company’s work has a positive outcome for society. She highlighted four pillars of this strategy.

1. The scientific method

To uncover the building blocks of advanced AI, DeepMind adheres to the scientific method.

“This means constructing and testing hypotheses, stress-testing our approach and results through the scrutiny of peer review,” says Ibrahim. “We believe the scientific approach is the right one for AI because the roadmap for building advanced intelligence is still unclear.”

2. Multidisciplinary teams

DeepMind uses various systems and processes to guide how its research makes its way into the real world. One example is an internal review committee.

The multidisciplinary team includes machine learning researchers, ethicists, safety experts, engineers, security specialists, and policy professionals. At regular meetings, they discuss ways to expand the tech’s benefits, changes to research areas, and projects that need further external consultation.

“Having an interdisciplinary team with a unique set of perspectives is a crucial component of building a safe, ethical, and inclusive AI-enabled future that benefits us all,” says Ibrahim.

3. Shared principles

To guide the company’s AI development, DeepMind has produced a series of clear, shared principles. The company’s Operating Principles, for instance, define the lab’s commitment to mitigating risk, while specifying what it refuses to pursue — such as autonomous weapons. 

“They also codify our aim to prioritize widespread benefit,” says Ibrahim.

4. Consulting external experts

One of Ibrahim’s chief concerns involves representation. AI has frequently reinforced biases, particularly against marginalised groups, who tend to be underrepresented in both the training data and the teams building the systems.

To mitigate these risks, DeepMind works with external experts on topics such as bias, persuasion, biosecurity, and responsible deployment of models. The company also engages with a broad range of communities to understand tech’s impact on them.

“This feedback enables us to refine and retrain our models to be appropriate for a broader range of audiences,” says Ibrahim.

The engagement has already delivered powerful results.

The business case for AI safety

In 2021, DeepMind cracked one of biology’s biggest challenges: the protein-folding problem.

Using an AI program called AlphaFold, the company predicted the 3D structures of almost every protein known to science — about 200 million in total. Scientists believe the work could dramatically accelerate drug development.

“AlphaFold is the singular and momentous advance in life science that demonstrates the power of AI,” said Eric Topol, director of the Scripps Research Translational Institute. “Determining the 3D structure of a protein used to take many months or years, it now takes seconds.”

AlphaFold predicts a protein’s 3D structure from its amino acid sequence. Credit: DeepMind

AlphaFold’s success was guided by a diverse array of external experts. In the initial phases of the work, DeepMind investigated a range of big questions. How could AlphaFold accelerate biological research and applications? What might be the unintended consequences? And how could the progress be shared responsibly?

In search of answers, DeepMind sought input from over 30 leaders across fields ranging from biosecurity to human rights. Their feedback guided DeepMind’s strategy for AlphaFold. 

In one example, DeepMind had initially considered omitting predictions for which AlphaFold had low confidence or high predictive uncertainty. But the external experts recommended retaining these predictions in the release.

DeepMind followed their advice. As a result, users of AlphaFold now know that if the system has low confidence in a predicted structure, that’s a good indication of an intrinsically disordered protein.
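In practice, that confidence signal is easy to act on. The sketch below is one rough way a researcher might do so — it is not DeepMind’s own tooling. It reads an AlphaFold-format PDB file, where per-residue pLDDT confidence scores are stored in the B-factor column, and flags long low-confidence stretches that may correspond to disordered regions. The file name, the threshold of 50, and the minimum run length are illustrative assumptions.

```python
# Sketch: flag low-confidence stretches in an AlphaFold-format PDB file.
# Relies on the convention that AlphaFold stores per-residue pLDDT in the
# B-factor column; the path, threshold (50) and minimum run length (10)
# are illustrative choices, not anything prescribed by DeepMind.

def plddt_per_residue(pdb_path):
    """Read pLDDT from the B-factor field of CA atoms in a PDB file."""
    scores = {}
    with open(pdb_path) as handle:
        for line in handle:
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                residue_number = int(line[22:26])
                scores[residue_number] = float(line[60:66])  # B-factor column
    return scores


def low_confidence_runs(scores, threshold=50.0, min_length=10):
    """Yield (start, end) residue ranges where pLDDT stays below threshold."""
    run = []
    for residue in sorted(scores):
        if scores[residue] < threshold:
            run.append(residue)
        else:
            if len(run) >= min_length:
                yield run[0], run[-1]
            run = []
    if len(run) >= min_length:
        yield run[0], run[-1]


if __name__ == "__main__":
    plddt = plddt_per_residue("AF-P04637-F1-model.pdb")  # hypothetical local file
    for start, end in low_confidence_runs(plddt):
        print(f"Residues {start}-{end}: low pLDDT, possibly disordered")
```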

Scientists across the world are reaping the rewards. In February, DeepMind announced that the AlphaFold database has now been used by over 1 million researchers. Their work is addressing major global challenges, from developing malaria vaccines to fighting plastic pollution.

“Now you can look up a 3D structure of a protein almost as easily as doing a keyword Google search — it is science at digital speed,” says Ibrahim. 
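For readers who want to try that lookup programmatically, here is a minimal sketch that queries the public AlphaFold Protein Structure Database API for a single UniProt accession. The endpoint path and response field names reflect the database’s public documentation rather than anything described in this article, so treat them as assumptions and check the current docs before relying on them.

```python
# Sketch: fetch AlphaFold prediction metadata for one UniProt accession.
# The endpoint and the "pdbUrl"/"uniprotDescription" field names are
# assumptions based on the publicly documented AlphaFold DB API; verify
# against the current documentation before using in anger.
import requests


def fetch_alphafold_prediction(uniprot_accession):
    """Return metadata for the predicted structure of one protein."""
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()[0]  # the API returns a list of model entries


if __name__ == "__main__":
    entry = fetch_alphafold_prediction("P69905")  # human haemoglobin subunit alpha
    print(entry.get("uniprotDescription"))
    print("Structure file:", entry.get("pdbUrl"))
```

The returned metadata includes links to the predicted structure files, which can then be opened in a standard molecular viewer or fed into an analysis like the confidence check sketched above.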

Responsible AI also requires a diverse talent pool. To expand the pipeline, DeepMind works with academia, community groups, and charities to support underrepresented communities.

The motivations aren’t solely altruistic. Closing the skills gap will produce more talent for DeepMind and the wider tech sector.

As AlphaFold demonstrated, responsible AI can also accelerate scientific advances. And amid growing public concerns and regulatory pressures, the business case is only getting stronger.

To hear more from Lila Ibrahim, use the promo code READ-TNW-25 and get a 25% discount on your business pass for TNW Conference.
