How the UK will police AI’s use of creative work

Parliament set a statutory deadline and prescribed five things the government must answer. It did not say who would check the answers.
Britain has a deadline on AI and copyright. It has not, however, established an enforcer. A House of Lords committee wants a licensing-first regime; ministers want time; artists want payment; AI firms want scale.

The hard problem is enforcement – especially when the data, and the developers, are abroad.

Sections 135 and 136 of the Data Use and Access Act 2025 (the DUAA) required the Department for Science, Innovation and Technology (DSIT) to publish an economic impact assessment of AI’s use of copyrighted material and a report examining five areas: technical measures to control access to creative works; the effect of text and data mining (TDM) on copyright; transparency and disclosure obligations for AI developers; licensing arrangements; and enforcement, including enforcement by a regulator. The statutory deadline was 18 March 2026.

The language was unusually prescriptive. Parliament did not simply ask the government to look into the matter; it required proposals, including in relation to AI systems developed outside the United Kingdom – a nod to the fact that the biggest models are largely trained abroad.

These provisions were inserted during the Act’s passage through the Lords after peers grew frustrated with an earlier consultation. That 2024 exercise offered options ranging from doing nothing to mandatory licensing, and the government signalled a preference for a commercial TDM exception with an opt-out and transparency requirements. The creative sector responded with overwhelming opposition: more than 11,500 submissions, 88% of which favoured mandatory licensing, and a campaign that included Elton John calling ministers “absolute losers”.

The government retreats

By January 2026, the political ground had shifted. Secretaries of State Liz Kendall and Lisa Nandy told the Lords Communications and Digital Committee that the government had been wrong to express a preference at all, and described their approach as a “reset”.

The Financial Times recently reported that ministers would delay contentious copyright rule changes and push any comprehensive AI bill to 2027. The statutory reports were expected to gather further evidence rather than commit to legislation. A person with knowledge of the plans told the FT that “copyright is going to be kicked down the road”.

The retreat left the UK with a statutory framework demanding answers on enforcement, transparency, and licensing – and a government reluctant to provide them.

The Lords draw a line

On the same day the delay was reported, the House of Lords Communications and Digital Committee published a report on AI, copyright, and creative industries. If the timing was coincidental, the message was not.

The committee’s chair, Baroness Keeley, framed the choice in stark economic terms. UK creative industries contributed £124 billion to the economy in 2023 and employed 2.4 million people. The AI sector contributed £12 billion and employed 86,000. “Watering down the protections in our existing copyright regime to lure the biggest US tech companies is a race to the bottom that does not serve UK interests,” Keeley said. “We should not sacrifice our creative industries for AI jam tomorrow.”

The report urged the government to rule out the proposed TDM exception with an opt-out mechanism, develop a licensing-first regime underpinned by statutory transparency obligations, and introduce protections against unauthorised digital replicas and “in the style of” AI outputs.

The committee pointed across the Channel. The European Union’s opt-out mechanism under Article 4 of the 2019 Copyright Directive – the closest match to the UK government’s abandoned approach – had failed to foster a robust licensing market, it concluded.

A cautionary tale from Europe

The EU experience is instructive. Article 4 of the Directive on Copyright in the Digital Single Market permits commercial text and data mining unless rightsholders expressly reserve their rights in an appropriate manner, such as machine-readable means.

In theory, this gives creators control. In practice, it has produced a lose-lose scenario: opting out merely restores the right to say no, without triggering any payment; and creators who do not opt out still have no built-in route to remuneration – revenue is left to private deals largely available only to the biggest repertoire owners.

Enforcement proved equally fraught. A Dutch court ruled in October 2024 that TDM opt-outs must be implemented through machine-readable means, not merely through natural-language terms of service. Across member states, inconsistent interpretations compounded the confusion.
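
To make the court’s distinction concrete, here is a minimal sketch of what a machine-readable check might look like in practice – one a mining crawler could run before fetching a page. It uses robots.txt, one widely cited signal, via Python’s standard library; the directive does not prescribe a single format, and the crawler name in the example is an assumption for illustration.

```python
# Illustrative sketch only: one way a text-and-data-mining crawler might check
# for a machine-readable rights reservation before fetching a page.
# robots.txt is one widely used signal, but not the only one a court might
# recognise; the user-agent name below is an assumption for the example.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser


def tdm_allowed(page_url: str, crawler_user_agent: str = "ExampleTDMBot") -> bool:
    """Return True if robots.txt does not reserve this page against the crawler."""
    parts = urlsplit(page_url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()          # fetch and parse the site's robots.txt
    except OSError:
        return False           # conservative default if the signal cannot be read
    return parser.can_fetch(crawler_user_agent, page_url)


if __name__ == "__main__":
    print(tdm_allowed("https://example.com/articles/some-story"))
```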

The European Parliament itself appeared to recognise the problem. A July 2025 study called for stronger enforcement and a more robust licensing framework.

The Getty judgment and the territorial gap

UK courts added their own complication. In November 2025, the High Court handed down its judgment in Getty Images v Stability AI – one of the first generative-AI copyright disputes to reach trial.

The court found that Stable Diffusion’s model weights do not store or reproduce copyrighted works. They are, the court said, “a product of the pattern and features learned over time during the training process”. Getty Images’ secondary infringement claim failed, and its attempt to bring a representative class action was rejected on the grounds that each work required individualised assessment.

Most significantly for enforcement, the court never addressed whether training conducted outside the UK implicates UK copyright law. Getty withdrew the relevant allegations before trial following an unfavourable pre-trial ruling. The territorial question central to any enforcement framework remains unanswered.

This matters because most large-model training occurs in the United States, where fair use doctrine provides a broader defence. Even if the UK strengthens its copyright protections and transparency obligations, enforcing them against overseas developers requires either extraterritorial jurisdiction or import-level regulatory controls. Neither exists.

The enforcement question nobody has answered

The DUAA’s requirement that the government’s report consider “enforcement by a regulator” points to the core problem. No UK body has a clear mandate for overseeing how AI systems use copyrighted material.

The Intellectual Property Office (IPO), which sits under DSIT, is primarily a policy body. It advises on intellectual property law and manages parts of the copyright framework; it is not an operational regulator with investigative powers or penalties.

The Information Commissioner’s Office (ICO) was the most obvious candidate. Peers proposed giving the ICO enforcement responsibility over AI training transparency obligations during the DUAA’s passage. Culture Secretary Chris Bryant rejected the idea, telling MPs that copyright law makes it “very clear” that infringement is actionable by copyright owners and that nothing was changing.

Bryant’s position is legally correct – copyright is a private right, enforced through civil litigation – but it sidesteps the real question. A statutory transparency regime needs supervision, audit, and sanctions at scale. As of March 2026, the government had not said who would provide them.

The UK’s broader approach to AI regulation – described by the government as “pro-innovation” – relies on existing regulators applying existing powers within their sectors. The Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA), Ofcom, and the ICO each address AI within their remits. But copyright in AI training falls between the cracks: it is not a financial services matter, not a content moderation question, and only tangentially a data protection concern.

The technology of policing

Even if a regulator were designated tomorrow, it would face a formidable technical challenge. Model weights do not contain a readable list of training inputs. Determining whether a particular photograph, novel, or song was in a training set requires either developer disclosure (which depends on transparency obligations the government has not yet legislated) or forensic reverse engineering (which remains unreliable at scale). This is the same explainability problem that Ofqual identified when it concluded AI could not yet mark high-stakes exams.
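
For a sense of what “forensic reverse engineering” involves, the sketch below shows the shape of a loss-based membership-inference heuristic: ask whether a model finds a candidate work suspiciously familiar compared with works known to be outside its training set. The scoring function here is a simulated stand-in for a real model’s per-token loss, so this illustrates the logic – and why such probes remain unreliable – rather than a working detector.

```python
# Illustrative sketch of a loss-based membership-inference heuristic: compare how
# "surprised" a model is by a candidate work against a baseline of works known to
# be outside the training set. score_text() is a stand-in for a real model's
# average per-token loss; here it is simulated, so the output carries no evidential weight.
import random
import statistics


def score_text(text: str) -> float:
    """Placeholder for a model's average per-token loss on `text` (lower = more familiar)."""
    random.seed(hash(text) % (2**32))
    return random.uniform(2.0, 6.0)


def likely_in_training_set(candidate: str, known_unseen: list[str], z_threshold: float = -2.0) -> bool:
    """Flag the candidate if its loss is unusually low relative to the unseen baseline."""
    baseline = [score_text(t) for t in known_unseen]
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    z = (score_text(candidate) - mu) / sigma
    return z < z_threshold   # heuristic only: prone to false positives and negatives


if __name__ == "__main__":
    unseen_works = [f"reference work {i}" for i in range(50)]
    print(likely_in_training_set("a disputed photograph caption", unseen_works))
```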

The Lords committee heard evidence on technical standards that could make enforcement practical. The Coalition for Content Provenance and Authenticity (C2PA) – backed by Adobe, Microsoft, the BBC, and Intel – has developed a specification that attaches tamper-evident metadata to media assets, documenting their origin and edit history.
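
In simplified terms, the tamper-evidence idea works as in the sketch below: hash the asset, sign the hash together with provenance claims, and carry the result with the file so that any later edit breaks verification. This is an illustration of the principle only, not the C2PA manifest format, and the key handling is deliberately minimal.

```python
# Simplified illustration of tamper-evident provenance metadata: a content hash,
# signed and carried alongside the asset, lets any verifier detect later edits.
# This shows the general idea behind C2PA-style manifests, not the C2PA format
# itself, and uses a throwaway shared secret purely for demonstration.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"   # assumption: example-only secret


def make_manifest(asset_bytes: bytes, creator: str) -> dict:
    """Create a signed record of the asset's hash and provenance claims."""
    digest = hashlib.sha256(asset_bytes).hexdigest()
    payload = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that neither the metadata nor the asset has changed since signing."""
    expected_sig = hmac.new(SIGNING_KEY, manifest["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, manifest["signature"]):
        return False   # the metadata itself has been tampered with
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(asset_bytes).hexdigest()   # asset unchanged


if __name__ == "__main__":
    image = b"\x89PNG...original pixels..."
    manifest = make_manifest(image, creator="Example Broadcaster")
    print(verify_manifest(image, manifest))              # True: intact
    print(verify_manifest(image + b"edited", manifest))  # False: edit detected
```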

The technology industry pushed back. Google urged flexibility. Meta warned against freezing today’s approaches into law. Microsoft favoured a technology-agnostic route.

The caution is partly justified, but voluntary provenance infrastructure also keeps enforcement dependent on developer cooperation. As long as provenance tools remain optional, any licensing or transparency regime depends on the very companies it is meant to police – which rather defeats the purpose of regulation.

A licensing market builds from below

While the government deliberated, the licensing industry was not waiting. The Copyright Licensing Agency (CLA), working with Publishers’ Licensing Services (PLS) and the Authors’ Licensing and Collecting Society (ALCS), developed a Generative AI Training Licence, described as the first of its kind in the UK.

The licence is designed as a collective solution for creators who cannot negotiate directly with AI developers. Rather than requiring individual deals, it operates through collective management: the CLA licences on behalf of its members and distributes revenues accordingly.
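
The distribution mechanics are, in principle, straightforward arithmetic. The sketch below shows a pro-rata split of a licence fee by recorded usage share; the member names, usage counts and fee are invented for illustration and do not reflect the CLA’s actual distribution rules.

```python
# Hypothetical sketch of distributing a collective licence fee pro rata by
# recorded usage share. All names and figures are invented for illustration.
def distribute(licence_fee_gbp: float, usage_by_member: dict[str, int]) -> dict[str, float]:
    """Split the fee in proportion to each member's share of recorded usage."""
    total_usage = sum(usage_by_member.values())
    return {
        member: round(licence_fee_gbp * count / total_usage, 2)
        for member, count in usage_by_member.items()
    }


if __name__ == "__main__":
    usage = {"Author A": 1200, "Author B": 300, "Publisher C": 4500}
    print(distribute(100_000.0, usage))
    # {'Author A': 20000.0, 'Author B': 5000.0, 'Publisher C': 75000.0}
```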

ALCS reports that 81% of surveyed members support the option of a collective licence.

But the CLA licence depends on a transparency infrastructure that does not yet exist. Without statutory disclosure requirements obliging developers to report what they trained on, collective licensing becomes voluntary – and voluntary compliance from companies built to ingest the largest possible corpus of human creativity is, to put it charitably, uncertain.

How the UK compares

| Country | Approach | Status |
| --- | --- | --- |
| UK | Statutory deadline for report; no enforcement body designated; no legislation | Deadline reached 18 March 2026; further consultation expected |
| Australia | Ruled out TDM exception; licensing-led approach | Active since October 2025 |
| EU | Opt-out TDM (Article 4); AI Act transparency obligations | Framework in force; enforcement uneven |
| US | TRAIN Act (disclosure bill); fair use litigation | Pending legislation; courts producing precedent |
| Japan | Broad TDM exception (Article 30-4); commercial use permitted | In force since 2019; guidelines refined January 2024 |
| Singapore | Broad computational data analysis exception (Section 244 Copyright Act) | In force since 2021; GenAI governance guide published March 2026 |

The UK, by contrast, had a statutory deadline, a Lords report, a government delay, no designated enforcement body, and an unresolved question on territorial jurisdiction. It had, in other words, all of the questions and none of the answers.

What regulators in other jurisdictions should take from this

The political dynamics – a creative sector worth £124 billion and an AI sector worth £12 billion – suggest the UK government will continue to defer rather than choose.

But the institutional design lessons extend well beyond Westminster. Three stand out.

Enforcement mandates cannot be an afterthought. The UK’s experience illustrates what happens when a legislature prescribes transparency, licensing, and technical standards but fails to designate who supervises compliance. Regulators in Australia, the EU, and Canada watching the UK’s enforcement vacuum should treat it as a cautionary case: if your framework requires disclosure, someone must be empowered to verify it. This echoes the capability gap widening across Canadian regulators – where AI oversight mandates have arrived without the institutional infrastructure to deliver them.

Territorial reach is the defining constraint. The Getty judgment exposed a problem that applies to any jurisdiction whose AI models are trained overseas. Unless copyright enforcement mechanisms extend to models placed on a domestic market regardless of where training occurred – as the EU AI Act attempts – transparency obligations become unenforceable against the largest developers. Regulators designing AI governance frameworks should build import-level regulatory controls from the outset, rather than attempting to retrofit them.

Voluntary provenance is not provenance. C2PA and similar standards offer a credible technical pathway to verifiable training-data disclosure. But until adoption is mandatory, enforcement depends on the cooperation of the entities being regulated. The lesson for regulators globally: technical standards need statutory backing and audit powers, or they function as aspirational best practice rather than enforceable obligation. Every jurisdiction currently debating AI transparency should answer the enforcement question before the transparency question – not after.

The Lords report created pressure toward a licensing-first model. But pressure is not legislation, and the committee cannot compel ministers to act. Its real value may be specificity: it identified the institutional gap (no enforcement body) and pointed to the technical standards that could make transparency and licensing enforceable at scale.

The enforcement question will not resolve itself. If the government mandates transparency, someone must verify compliance. If it endorses licensing, it needs mechanisms for collective administration and dispute resolution. And if it wants any of this to apply to models trained overseas, it must confront the territorial limits of UK law that Getty left exposed.

The alternative is to leave enforcement to private litigation, which favours large rightsholders with deep pockets and leaves individual creators without practical recourse. The UK has spent two years asking what the rules should be. It has not yet seriously asked who will enforce them. That question is now unavoidable.

The technical question – how transparency obligations can be made verifiable in practice – is examined in depth in our analysis of AI governance and the hallucination problem, which sets out why explainability and provenance infrastructure are preconditions for any enforcement regime to function.

Frequently asked questions

Who is responsible for enforcing copyright rules on AI training in the UK?

As of March 2026, no UK body had been designated to enforce copyright rules as they apply to AI training. The Intellectual Property Office is a policy body, not an operational regulator. The Information Commissioner’s Office was proposed during the DUAA’s passage through the Lords but was rejected by Culture Secretary Chris Bryant, who argued that copyright infringement is actionable by individual rightsholders through civil litigation. The UK’s “pro-innovation” regulatory model relies on existing sector regulators – the FCA, CMA, Ofcom, and ICO – but AI copyright falls between their remits. The House of Lords Communications and Digital Committee identified this enforcement gap as a central concern in its March 2026 report.

What did the House of Lords committee recommend on AI and copyright?

The Lords Communications and Digital Committee published a report recommending that the government rule out a commercial text and data mining exception with an opt-out, adopt a licensing-first regime, introduce statutory transparency obligations for AI developers, create protections against digital replicas and “in the style of” AI outputs, and prioritise sovereign AI models. Committee chair Baroness Keeley warned against sacrificing the UK’s “gold-standard” copyright framework and £124 billion creative economy for “AI jam tomorrow”.

What does the Getty v Stability AI case mean for AI training?

The High Court’s November 2025 judgment in Getty Images v Stability AI found that AI model weights do not store or reproduce copyrighted works and therefore do not constitute “infringing copies” under UK law. Getty’s secondary infringement claim failed. Critically, the court did not decide whether AI training conducted outside the UK can infringe UK copyright – Getty withdrew that claim before trial after an unfavourable pre-trial ruling. The case left the territorial question unresolved: since most large-model training takes place in the United States, UK copyright enforcement may have limited reach unless new legislative mechanisms are introduced.

TMR Editorial Staff

Our editors bring clarity and rigour to fast-moving regulatory developments through trusted sources and informed analysis.
