Online platforms hosting pornography in the UK now face a countdown to July 2025, when they must implement “highly effective” age assurance systems or risk serious penalties under the Online Safety Act 2023. With guidance now published outlining what constitutes acceptable age verification, the UK’s communications regulator, Ofcom, has moved the regime into its next phase: perhaps its most contentious yet.
For regulators, this marks a defining moment in platform oversight: the shift from setting child protection principles to enforcing them through clear, mandatory standards and measurable outcomes. The UK is no longer just regulating online harms in theory; it is now enforcing rules that hinge on how platforms assess age – an area fraught with privacy, efficacy, and design challenges.
Deadline confirmed, obligations detailed
From July 2025, services that publish their own pornographic content (Part 5 services) and those that host user-generated pornography (Part 3 services) must ensure that children cannot access such material. Ofcom, which recently published a policy statement on the protection of children online, has made clear that only “highly effective age assurance” methods will suffice. According to its published guidance, this means adopting tools capable of reliably determining whether a user is over 18, without relying solely on self-declaration or superficial barriers.
Part 5 services, which include publisher-run adult sites and certain generative AI tools, are expected to begin implementing robust age checks immediately. Part 3 services, such as mainstream social media or video-sharing platforms with adult content, must be fully compliant by the July deadline.
Ofcom’s guidance stops short of mandating specific technologies, but it does set performance expectations.
Acceptable approaches may include age verification through identity documents, biometric estimation tools, or data-driven assurance systems that assess age with a high degree of confidence. Platforms must also demonstrate that these methods are proportionate, privacy-conscious, and resistant to circumvention.
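To make the distinction concrete, the sketch below shows, purely as an illustration, how a platform might gate adult content on a confidence-threshold basis while discounting self-declaration. The signal names, confidence values, and threshold are hypothetical and are not drawn from Ofcom’s guidance.

```python
from dataclasses import dataclass

# Hypothetical age-assurance signals a platform might combine.
# Names, confidence values, and the threshold are illustrative only.
@dataclass
class AgeSignal:
    method: str          # e.g. "photo_id_match", "facial_age_estimation"
    is_over_18: bool     # the method's verdict
    confidence: float    # the method's self-reported confidence, 0.0-1.0

def allow_adult_content(signals: list[AgeSignal], threshold: float = 0.99) -> bool:
    """Grant access only if at least one signal confirms the user is over 18
    with confidence at or above the threshold. Self-declaration is ignored,
    reflecting the principle that it cannot count as a reliable check."""
    trusted = [s for s in signals if s.method != "self_declaration"]
    return any(s.is_over_18 and s.confidence >= threshold for s in trusted)

# Example: a self-declaration alone is not enough; an ID document match is.
signals = [
    AgeSignal("self_declaration", True, 1.0),   # excluded by design
    AgeSignal("photo_id_match", True, 0.995),
]
print(allow_adult_content(signals))  # True
```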
What counts as “highly effective”
The regulator’s position reflects an evolution in online safety expectations. Vague assertions of good intent or opt-in age gates are no longer sufficient. Ofcom expects platforms to integrate assurance mechanisms that demonstrably prevent children from accessing pornographic content, regardless of the device, app, or browsing context.
This technical specificity marks a shift from earlier online safety debates, which often stalled on generalities. For platform operators, the message is clear: they will need to show their systems work, not just that they exist.
Wider safety duties in parallel
These requirements arrive alongside a broader enforcement timeline. Since March 2025, over 40 new safety measures have been in effect under the Act. These include obligations to name senior compliance officers, improve content moderation training, implement rapid takedown protocols, and offer safer default settings for children.
All regulated platforms must also conduct illegal content risk assessments within three months of Ofcom’s final guidance. Non-compliance carries significant penalties: fines of up to £18 million or 10% of global turnover, whichever is greater, and the possibility of having the service blocked in the UK for serious or repeated breaches.
Implementation gaps remain
Despite this progress, some concerns persist. Regulators and advocates have raised alarms over livestreaming, which remains under-addressed in current codes despite its prominence in Ofcom’s own risk assessments.
“We are promised proposals on this – and a number of other measures – in a further consultation due in April,” noted the Online Safety Act Network in February 2025. But the group also warned that “those measures won’t appear in a new version of the illegal harms code nor be enforceable until well into 2026”.
That delay introduces uncertainty for platforms offering real-time content and may require further interim guidance or soft enforcement approaches.
The evolving regulatory playbook for digital safety
Ofcom is expected to release further consultations this year, including new provisions for detecting illegal content through automated tools. Stakeholder submissions on how to restrict content promoting terrorism and child sexual abuse closed in March, with updates anticipated later in the year.
At the same time, the Department for Science, Innovation and Technology has issued a statement of strategic priorities for Ofcom, helping guide the regulator’s decisions as it applies the Act’s framework in practice.
International regulators are watching closely. The UK’s age assurance provisions represent one of the most detailed and enforceable frameworks in this area. How platforms respond – and how effectively Ofcom enforces the rules – will likely shape similar initiatives abroad.