
This article was published on May 4, 2023

UK competition watchdog probes AI market amid safety concerns

Watch out Big Tech


The UK’s competition watchdog has launched a review of the artificial intelligence market, in an effort to weigh up the potential opportunities and risks of a technology Bill Gates has described as being as “revolutionary as mobile phones and the Internet.”

The Competition and Markets Authority (CMA) said it would investigate the systems underpinning tools such as ChatGPT in order to evaluate the competition rules and consumer protections that may be required. This, the CMA stated, is to ensure that AI tools are developed and deployed in a safe, secure, and accountable manner.

“It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information,” said CMA chief executive, Sarah Cardell.  

The CMA has set a deadline for views and evidence to be submitted by June 2, with plans to report its findings in September.  

The announcement comes as regulators across the world tighten their grip on the development of generative AI — a technology which can generate text, images, and audio virtually indistinguishable from human output. Hype around this type of AI has been swiftly followed by fears over its impact on jobs, industry, education, privacy — virtually all aspects of daily life.   

In late March, more than 2,000 industry experts and executives in North America — including researchers at DeepMind, computer scientist Yoshua Bengio, and Elon Musk — signed an open letter calling for a six-month pause in the training of systems more powerful than GPT-4, the latest model from ChatGPT maker OpenAI. The signatories cautioned that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Meanwhile, Dr Geoffrey Hinton, widely referred to as AI’s “godfather,” quit his job at Google this week to talk about the dangers of the technology he helped develop. Hinton fears that generative AI tools could inundate the internet with fake photos, videos, and text to the extent that an average person won’t be able to “tell what’s true anymore.”

And yesterday, former UK government chief scientific adviser Sir Patrick Vallance told MPs on the Science, Innovation and Technology Committee that AI could have as big an impact on jobs as the industrial revolution.

Anita Schjøll Abildgaard, CEO and co-founder of Norwegian startup Iris.ai, told TNW she is optimistic that the probe will address some of these fears and “uphold consumer protections and safely progress the development of AI.” Abildgaard also hopes the review will help address the “competitive imbalance” and “lack of disclosure” present in Big Tech’s proprietary data and training models.

However, while the CMA and many others are clearly concerned about the impacts of AI tools developed by firms such as OpenAI, Microsoft, and Google, Cardell is adamant that the review would not be targeting any specific companies. Rather, she said that the “fact-finding mission” would engage with “a whole host of different interested stakeholders, [including] businesses, academics, and others, to gather a rich and broad set of information”. 

Cardell is also clear that the CMA doesn’t wish to stifle the growth of the rapidly emerging AI industry, but promote it, albeit with a few safeguards. “It’s a technology that has the potential to transform the way businesses compete as well as drive substantial economic growth,” she said. 

A UK government white paper published in March follows a similar trend, signalling ministers’ preference to avoid setting any bespoke rules (or oversight bodies) to govern the uses of AI at this stage. This differs from the EU which is currently in the later stages of finalising its landmark AI Act — the world’s first AI law by a major regulatory body.

While the EU has been first out of the gate, a new report by the Centre for Data Innovation argues that politicians should avoid getting swept up in the “hysteria” and shouldn’t “rush to regulate AI before anyone else does because that likely will bode ill, and lead to missed opportunities, for society.”

Whatever the case may be, the rapid emergence of generative AI has clearly left governments scrambling to figure out if and how to regulate it.
