
This article was published on November 1, 2023

Bedazzled by big tech, the UK’s AI summit is overlooking big issues

Elon Musk could be there! Startups, not so much


Image by: Daniel Oberhaus / Number 10 (edited)

World leaders and tech titans are currently descending on southern England for an AI safety summit, but the flashy event isn’t impressing everyone.

Over the next two days, around 100 bigwigs will attend the event at the historic Bletchley Park, a country estate around 90km north of London. During World War Two, the site was home to the codebreakers who cracked Nazi Germany’s notorious Enigma encryption device. Some 80 years later, the British government wants to show that the UK is still a tech superpower — but the plans have caused alarm.

Critics have various concerns. They worry that the summit organisers are spellbound by “frontier AI,” famous names, and far-flung fears, while overlooking more pressing and inclusive issues.

A show-stealing late addition to the schedule has heightened their suspicions. On Monday, Prime Minister Rishi Sunak revealed that he will be “in conversation” with Elon Musk on X.


Musk adds further lustre to a star-studded guest list. 

Among the invitees are several political heavyweights, including US Vice President Kamala Harris, European Commission President Ursula von der Leyen, UN Secretary-General Antonio Guterres, and Chinese Vice Minister Wu Zhaohui.

Also in attendance are various tech titans, such as Microsoft President Brad Smith, OpenAI CEO Sam Altman, and Meta AI chief Yann LeCun. But the event is not for everyone.


Much of the tech sector feels that only industry giants and political leaders will be seated at Sunak’s conference table.

Dr Hector Zenil, the founder of healthcare startup Oxford Immune Algorithmics, is worried that the event will be dominated by generative AI and big tech. He has called on Sunak to involve a greater balance of commercial and academic representation.

“If the AI Safety Summit is to be judged a success — or at least on the right path to creating consensus on AI safety, regulation, and ethics — then the UK government must strive to create an even playing field for all parties to discuss the future use cases for the technology,” Zenil said.

“The Summit cannot be dominated by those corporations with a specific agenda and narrative around their commercial interests, otherwise this week’s activities will be seen as an expensive and misleading marketing exercise.”

Zenil’s views are common across the sector. Among the industry insiders who share his unease is Victor Botev, the CTO and co-founder of Iris.ai, an Oslo-based startup.

A former AI researcher at Chalmers University and now a business leader, Botev wants broader representation from both academia and industry at the meeting.

“It is vital for any consultation on AI regulation to include perspectives beyond just the tech giants,” he said. “Smaller AI firms and open-source developers often pioneer new innovations, yet their voices on regulation go unheard. The summit missed a great opportunity by only including 100 guests, who are primarily made up of world leaders and big tech companies.”

Venture capitalists have raised similar concerns. 

“Going forward, we also must have more voices for startups themselves. The AI safety summit’s focus on big tech, and shutting out of many in the AI startup community, is disappointing,” said Ekaterina Almasque, General Partner at European VC OpenOcean.

“It is vital that industry voices are included when shaping regulations that will directly impact technological development.”



Frontier AI apocalypses

The glitzy guestlist has been accompanied by a fittingly dramatic agenda. This combination, critics say, is a distraction from more pressing concerns.

They note that the programme will exclusively focus on “frontier” AI systems — a hazy term for advanced, general-purpose AI models. In a recent government report, the term “frontier AI” was applied almost entirely to large language models (LLMs) — particularly OpenAI’s ChatGPT.

Zenil suspects the focus has been influenced by CEOs who are invested in this field. He wants the government to take a broader view.

“It is absolutely critical that the UK has a coherent strategy for AI that encompasses all aspects of the technology and different models. Above all, this is important because no one approach will become the ‘silver bullet’ for AI adoption,” he said.

“If the AI Summit at Bletchley Park and the AI advisory committee are dominated by individuals with a particular research or commercial focus for AI, then it will make it harder to develop regulatory frameworks which reflect all the potential use cases.”

Dr Hector Zenil, founder of Oxford Immune Algorithmics, has also worked as a senior researcher for the government-funded Alan Turing Institute. Credit: Oxford Immune Algorithmics

Another cause of consternation is the summit’s focus on “extreme” hypothetical threats and doomsday scenarios. Sunak has personally highlighted these cataclysmic possibilities.

“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” he said last week.

Such apocalyptic prospects, critics argue, are dramatically overblown. Some blame the media for inflating the dangers, while others argue that tech bosses exaggerate the risks to conceal the real and present problems that they’re creating.

They are more concerned about the tangible threats of climate change, biases against marginalised groups, and cyber-attacks. They note, for instance, that a recent study found that Google’s AI could soon consume as much electricity as Ireland.

Almasque, from VC firm OpenOcean, fears the summit’s priorities are skewed.

“It looks likely to focus mostly on bigger, long-term risks from AI, and far less on what needs to be done, today, to build a thriving AI ecosystem,” she said. “It’s like a startup worrying about its IPO price before it’s raised seed funding.”

These concerns are shared by Natalie Cramp, CEO of data company Profusion, which has previously advised the UK government. She is wary of the fixation on an imaginary future.

“My fear is that the AI safety summit will focus on headline-grabbing existential threats at the expense of the more mundane problems that have the capacity to do a lot of damage right now,” Cramp said.

Natalie Cramp, CEO of data company Profusion.

The build-up to the summit has amplified the dissent. Ahead of the event, Sunak revealed a core component of his plan will be a new “world-first” AI safety institute. 

Dr Asress Gikay, a senior lecturer in AI at Brunel University London, was unimpressed by the announcement. Gikay is dismissive of the institute’s aim to prompt international agreements. He suspects that Sunak has ulterior motivations.

 “The Prime Minister seems more focused on making political statements by unrealistic and unachievable agendas rather than addressing more pressing and attainable issues, such as domestic AI investment and the development of a robust policy and regulatory framework for responsible AI at the national level,” he said. 

Taking chances

Amid the scepticism, there is also optimism about the AI summit’s potential. The big-name attendees and international media attention suggest the UK can be a key player in global developments. 

The country’s thriving AI sector adds credibility to the event, while its pro-innovation approach to regulation provides a point of differentiation from European Union governance. Britain’s unique international position could also provide a bridge between the US, EU, and China. 

Emad Mostaque, CEO of Stability AI — which develops the Stable Diffusion text-to-image model — is among the high-profile supporters of the summit.

“The UK has a once-in-a-generation opportunity to become an AI superpower and ensure that AI benefits all, not just big tech,” he said.

Botev, the co-founder of Iris.ai, is more cautiously hopeful. He is upbeat about the summit’s potential, but worried that the government may make a rash decision for a front-page news story.

“With the global AI community watching, the UK should resist this urge,” he said. “The summit is a chance for the UK to chart a global direction on AI governance, ensuring progress without compromising safety. With care and wisdom, the UK can develop forward-thinking regulations that promote innovation while establishing trust.”
