Australia is weeks away from enforcing the world’s first national ban on social media use for children under 16.
A $6.5 million government trial of age-assurance tools showed both promise and pitfalls. Facial estimation can be fast and accurate for adults but struggles near the 16-year mark and shows bias across demographic groups. The law rules out heavy-handed measures such as mandatory use of government Digital ID, and requires data collected for verification to be destroyed after use. Privacy is preserved, but precision is harder to achieve.
Non-compliance carries fines of up to AU$49.5m per breach. With only three months left, platforms face a guessing game. Regulators, meanwhile, must decide how to enforce a bold new experiment that could shape child-safety rules far beyond Australia’s borders.
Legislation on the clock
The law passed Parliament in late 2024 and amends the Online Safety Act 2021.
It sets a hard minimum age of 16 for accounts on designated platforms, with no parental-consent loopholes.
The rules take effect on 10 December. The short runway is already testing regulators, platforms, and the technologies being considered to enforce it.
The Act is deliberately technology-neutral. Companies must take “reasonable steps” to prevent under-16s from using their services, but the eSafety Commissioner has not yet said what counts as reasonable.
That silence has left platforms drafting compliance plans without knowing how their efforts will be judged.
Some are likely to over-engineer solutions and collect too much data. Others may wait until the last minute, hoping clearer rules will land before the deadline.
Either way, regulators risk being flooded with complaints, appeals, and pressure to clarify.
The promise and limits of age assurance
To prepare, the government commissioned a $6.5m trial of age-assurance technologies.
The UK-based Age Check Certification Scheme reviewed more than 60 tools from 48 vendors.
Facial estimation emerged as the most promising option: results in under 40 seconds, mean error of around 1.3 years, and high accuracy for adults.
But the trial also flagged serious gaps. Accuracy fell sharply near the 16-year threshold. False positives reached 8.5 per cent for 16-year-olds wrongly flagged as underage.
Bias was clear too. Systems worked less well for non-Caucasian users, female-presenting individuals, and older teenagers.
The report’s conclusion was blunt: no single solution is good enough. A layered approach will be needed, mixing estimation with alternative checks.
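A layered approach is easier to picture than to describe. The sketch below is purely illustrative – the function name, buffer width, and decision labels are our own, not the trial's – but it shows the basic logic: trust facial estimation only when it lands well clear of the threshold, and escalate borderline cases to an alternative check.

```python
# Hypothetical sketch of a layered age check: facial estimation first,
# with a buffer zone around the 16-year threshold that escalates to an
# alternative method. Thresholds and names are illustrative only.

MIN_AGE = 16
BUFFER_YEARS = 2  # estimates within 2 years of 16 are treated as uncertain

def layered_age_check(estimated_age: float) -> str:
    """Return 'allow', 'deny', or 'escalate' for a facial age estimate."""
    if estimated_age >= MIN_AGE + BUFFER_YEARS:
        return "allow"      # confidently above the threshold
    if estimated_age < MIN_AGE - BUFFER_YEARS:
        return "deny"       # confidently below the threshold
    return "escalate"       # too close to call: use an alternative check

print(layered_age_check(25.0))  # allow
print(layered_age_check(16.5))  # escalate
print(layered_age_check(11.0))  # deny
```

The buffer width would in practice be tuned to the estimator's error distribution – with a mean error around 1.3 years, a two-year buffer is a plausible but arbitrary starting point.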
Privacy as a constraint
The legislation gives privacy equal billing with enforcement.
Platforms cannot require users to hand over government ID. They cannot compel use of Australia’s Digital ID system.
Data collected for verification must be used only for compliance and destroyed afterwards.
These rules answer civil-liberty concerns while making the technical job harder.
Platforms must find ways to prove a user’s age without holding on to the evidence – a problem regulators have not had to tackle at this scale before.
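One way to think about the "destroy after use" requirement is that a platform may retain the outcome of a check but not the evidence behind it. The sketch below is a conceptual illustration, not any platform's actual implementation; all names are invented.

```python
# Illustrative "verify then destroy" pattern: derive only a boolean
# compliance flag from the verification data, then discard the data
# itself. All field and function names here are hypothetical.

def verify_and_destroy(verification_data: dict) -> bool:
    """Keep only the yes/no outcome; the underlying evidence is dropped."""
    is_over_16 = verification_data.get("estimated_age", 0) >= 16
    verification_data.clear()  # destroy the collected data after use
    return is_over_16

record = {"estimated_age": 19.2, "face_scan": b"..."}
result = verify_and_destroy(record)
print(result, record)  # True {} -- the flag survives, the evidence does not
```

In a real system the deletion would of course need to cover logs, caches, and vendor copies, which is precisely what makes the engineering problem hard.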
Non-compliance will be expensive. Fines can reach 150,000 penalty units, or about AU$49.5m per breach.
That figure aligns with penalties in other consumer protection laws. But the prospect of repeated fines against global platforms remains untested.
The real question is whether regulators will use these powers early and aggressively, or take a slower, staged approach.
Supporters see the law as overdue protection for children. But academics, civil-society groups, and even human-rights commissioners have lined up to criticise the design.
They argue the three-month timeline for implementation is unrealistic, the technology is not ready, and the risks of bias and error are too high.
International experts have added their own scepticism, warning that a failed rollout could set back efforts to regulate online platforms elsewhere.
Australia as test case
The government is pitching the reform on the world stage. At the UN General Assembly it will argue for similar rules abroad, casting Australia as a pioneer in child online safety.
But pioneers can stumble. If the system collapses under its own weight, Australia may become a cautionary tale instead.
The eSafety Commissioner now faces the most difficult task: defining “reasonable steps” in guidance that can withstand both political scrutiny and legal challenge.
Too prescriptive, and the regulator will be accused of picking winners among vendors. Too vague, and platforms will accuse the state of setting them up to fail.
The Commissioner will also need to coordinate with the Information Commissioner on privacy, to avoid conflicting instructions.
Early enforcement decisions in 2026 will set the tone. A light-touch start may encourage compliance, but leniency could also embolden platforms to test the limits.
The international dimension
Other jurisdictions are watching Australia’s example, and some are not far behind.
The UK’s Online Safety Act, which we covered back in May, leaves age verification largely to industry codes, but Ofcom has wide powers to enforce compliance.
The EU’s Digital Services Act requires platforms to assess systemic risks to children, though it stops short of a hard age ban.
In the US, the legal landscape around social media age restrictions is fragmented and fraught, with debate split across state legislatures and the federal Congress. States such as Tennessee and Mississippi have enacted laws requiring age verification or parental consent – Tennessee’s law entered into force in 2025, while Mississippi’s enforcement has been allowed to proceed after appeals courts lifted blocks.
Conversely, Arkansas has seen its age-verification law permanently struck down by a federal judge on First Amendment grounds.
Utah’s 2023 social media regulation act was also blocked by court order and remains tied up in litigation.
How Australia enforces its ban will influence these debates – either as proof of concept or proof of overreach.
What’s at stake
The law is framed as protecting children. But it is also a test of whether governments can force the world’s largest tech firms to re-engineer their systems in the name of public interest.
If regulators succeed, they will show that enforcement can keep pace with political ambition. If they fail, the lesson may be that ambition outstripped what technology and regulation could deliver.
Either way, the world will be watching on 10 December.