This article was published on February 5, 2024

UK fails to reach consensus on AI copyright code in major blow to artists

Creators are tired of machines stealing their work


The UK government, AI companies, and creative organisations have failed to reach consensus on a proposed code that would set clear guidelines for the training of AI models on copyrighted material.

For almost a year, the Intellectual Property Office (IPO) has been consulting with companies including Microsoft, Google DeepMind, and Stability AI, as well as art and news organisations such as the BBC, the British Library, and the Financial Times.

The purpose of the talks was to produce a rulebook on text and data mining, the practice of training AI models on human-made materials such as books, images, and films, which are often under copyright.

However, the IPO-mediated consortium has been unable to agree on a voluntary code of practice, reports the Financial Times. The IPO has therefore handed responsibility back to officials at the Department for Science, Innovation and Technology, which is unlikely to set out definitive policies any time soon, the publication said, citing people familiar with the matter.

The breakdown in talks deals a blow to creative professionals, many of whom are afraid that their work will be copied and reproduced without credit or payment. 

Many AI tools, like OpenAI’s ChatGPT or Stability AI’s text-to-image generator Stable Diffusion, are trained on data scraped from the web. Drawing on that data, the systems churn out endless creations in response to prompts, and the outputs are frequently clear derivatives of their source material.

In 2023 alone, hundreds of pages of litigation and countless articles accused tech firms of stealing artists’ work to train their AI models. One of the highest-profile cases was in the US, where the New York Times recently sued OpenAI and Microsoft for copyright infringement.

The use of AI has grown rapidly across the entertainment industry in recent years, from automated audiobooks and voice assistants to deepfake videos and text-to-speech tools. But the law has failed to keep pace. 

In the UK, Equity, a trade union representing 50,000 performers and creative practitioners, launched its Stop AI Stealing the Show campaign to lobby the government to update the law and better protect artists’ livelihoods. 

Equity told UKTN today that it is ready for “industrial action” reminiscent of the 2023 Hollywood strikes if key agreements are not reached regarding AI and intellectual property. Liam Budd, an official at the trade union, criticised the government’s “wait and see approach” to AI regulation. 

But it’s not just artists calling for fairer use of AI. Generative AI pioneer Ed Newton-Rex quit Stability AI in November over the startup’s use of copyrighted content. In January, he launched a non-profit called Fairly Trained, which certifies AI companies that source their training data ethically.
