
Australia’s regulatory year in review: from online safety to AI in the APS

Ambitious reform meets implementation strain in one of Australia's most consequential regulatory years.

2025 marked a decisive inflection point for Australian regulation.

What had been, for much of the last decade, a cycle of inquiries, royal commissions, and discussion papers tipped into implementation – particularly in digital governance, care systems, and cyber security.

Canberra moved from sketching frameworks to enforcing them, while simultaneously trying to modernise the public service’s own digital capacity and codify better regulatory practice.

The result is a system that, by December 2025, looks more interventionist, more digitally ambitious, and more complex to navigate. 

Social media platforms face world-first age restrictions and the prospect of a statutory digital duty of care. Aged-care providers are operating under a new legislative architecture that formally centres the rights of older people. Businesses are adjusting to a mandatory ransomware reporting regime, revamped procurement rules, and looming digital-asset licensing deadlines. And inside the state, the Australian Public Service (APS) is being pushed to adopt AI at scale while adhering to newly articulated principles of good regulation.

Taken together, these moves suggest a regulatory state that is less shy about imposing hard obligations, more willing to use structural levers, and still wrestling with how to balance safety, privacy, growth, and administrative capacity.

Australia’s new law will ban under-16s from using social media platforms from December 2025.

Child safety, platforms, and the new online safety settlement

Nothing symbolised Australia’s appetite for direct intervention more clearly than the social media minimum-age regime, which we covered in September. It is slated to commence on 10 December 2025. 

Framed politically as a child-protection measure and operationally as an amendment to the Online Safety Act 2021 (via the Online Safety Amendment (Social Media Minimum Age) Act 2024), the reform makes Australia an early mover in legislating a hard age floor for mainstream social media accounts.

The law requires platforms to take “reasonable steps” – in practice, age-assurance measures – to prevent users under 16 from opening or maintaining accounts, backed by substantial penalties and enforcement powers for the eSafety Commissioner.

The detail matters: determining who is in scope, what counts as acceptable age assurance, and where liability falls if a child slips through. But the broader signal is that Canberra is no longer content to rely on platform policies or codes of practice in this space.

At the same time, the government opened consultation on legislating a broader digital duty of care under the Online Safety Act. Where the age ban targets a specific harm vector (children’s exposure to social media), the duty-of-care concept is more ambitious: an overarching obligation on large online services to take reasonable steps to prevent foreseeable online harms across their systems, building on recommendations from the statutory review of the Act.

For platforms, the combination of an age ban and a statutory duty of care points to a future in which Australian regulators have much greater leverage over recommendation systems, abuse reporting, and risk-assessment processes. For rights and privacy advocates, it raises familiar questions: how to avoid normalising intrusive age-verification, what thresholds of harm justify system-level mandates, and how to preserve free expression in a more tightly governed online space.

These online-safety moves also sit in a wider global conversation. While the UK’s Online Safety Act and the EU’s Digital Services Act pursue similar goals via risk-based regimes rather than age floors, Australia’s strategy is notable for its willingness to set clear, hard boundaries and rely heavily on a single specialist regulator.

A new legislative architecture for aged care

If online safety is the most visible frontier, the Aged Care Act 2024’s commencement on 1 November 2025 is arguably the most consequential for people’s day-to-day lives. Responding to the Royal Commission into Aged Care Quality and Safety, the new Act rewires the legislative foundations of the system to place older people’s rights and experiences at the centre rather than as an afterthought.

Key features include a formal Statement of Rights for older people receiving care, redesigned funding classifications for support at home, caps on care-management fees, and a lifetime cap on certain resident contributions. Government commitments that existing recipients would not be disadvantaged are politically necessary but administratively demanding. Providers must re-price, renegotiate, and in some cases re-build their service models while dealing with workforce shortages and rising input costs.

For the Department of Health and Aged Care and the Aged Care Quality and Safety Commission, 2025 marked the pivot from design to enforcement. Both regulators face a delicate task: crafting guidance detailed enough to support compliance without collapsing into box‑ticking, while bracing for a likely surge in complaints and appeals as residents and families test the new rights‑based language.

In a sense, aged-care reform encapsulates Australia’s broader regulatory trajectory: a willingness to add explicit rights and structural protections, but with implementation risk concentrated in sectors that are already stretched.

Cyber security: from awareness to compulsion

The entry into force of the Cyber Security Act 2024’s ransomware reporting regime marked another shift – from urging best practice to mandating it. Medium-to-large businesses and critical-infrastructure entities are now required to report ransomware payments and certain cyber-extortion incidents within strict timeframes, under penalty of fines.

The government deliberately framed 2025 as an “education-first” phase. Regulators focused on awareness, guidance, and building reporting channels rather than immediately reaching for sanctions. But the architecture is clearly designed for a second phase in which the Department of Home Affairs and the Australian Cyber Security Centre (ACSC) can use mandatory reporting data to drive both policy and enforcement.

This regime intersects with other parts of the financial-crime and digital-economy landscape: AUSTRAC’s anti-money-laundering reforms, emerging expectations on boards and directors around cyber governance, and the renewed attention to scam prevention and restitution. For boards and CISOs, 2025 was the year in which ransomware and cyber extortion stopped being treated as purely operational risks and became explicit regulatory obligations.

The state as AI power user – and a cautious rule-setter

Alongside imposing new obligations on private actors, the Australian Government spent 2025 positioning itself as a heavy user of AI, a bold move TMR covered in November. 

The APS AI Plan commits to equipping all public servants with foundational AI literacy and access to secure tools, anchored around the GovAI platform – a centrally hosted, government-controlled generative-AI and AI-hosting service.

GovAI is explicitly pitched as sovereign infrastructure: Australian-based hosting, predefined guardrails, and the ability for agencies to experiment with models from multiple vendors while avoiding vendor lock-in and uncontrolled data flows. The narrative emphasises responsible and transparent use, with a National Framework for the Assurance of AI in Government, an APS-wide Policy for the Responsible Use of AI in Government, and related technical standards providing a governance backdrop.

This internal embrace of AI sits in productive tension with the government’s stance on copyright and AI training data. In 2025, Canberra rejected calls for a broad text-and-data-mining exception for AI training, preferring to steer industry towards licensing-based models and to work through forums such as the Copyright and AI Reference Group (CAIRG).

Taken together, these decisions suggest that creator rights and negotiated access remain central to the government’s approach. Where some jurisdictions see expansive TDM exceptions as an innovation catalyst, Australia is suggesting that copyright and compensation will not be casually traded away for AI development speed.

The net effect is a distinctive posture: enthusiastic deployment of AI within government under tight governance, coupled with a relatively conservative approach to commercial AI training. For regulated entities, that means engaging with a state that increasingly understands AI from the inside, but is unlikely to liberalise copyright at the expense of rights-holders.

Markets, money, and the rules of government buying

On the economic-regulation front, 2025 brought progress on several measures – none as individually headline-grabbing as the social-media ban, but collectively important.

The overhaul of the Commonwealth Procurement Rules, effective 17 November 2025, updated thresholds and placed new emphasis on supporting Australian businesses and small and medium enterprises. Procurement officials are now expected to give greater weight to local industry participation and ethical conduct in value-for-money assessments, and to use revised negotiation levers. For suppliers, particularly SMEs, this offers opportunities, but also additional compliance complexity in demonstrating social and local content value.

In financial services, the government and regulators moved to make the system more navigable through a Regulatory Initiatives Grid, coordinating the timing of ASIC, APRA, the Reserve Bank of Australia (RBA) and Treasury reforms to avoid overlapping consultations and implementation crunches. 

At the same time, ASIC proposed a firmer approach to digital-asset regulation, with a licensing deadline on the horizon for exchanges and custodians. The message is: innovation is welcome, but not at the expense of basic investor protections and prudential standards.

Proposed reforms to the foreign investment framework, including a more risk-sensitive approach that distinguishes between low-risk and high-risk sectors and investors, reinforced this theme: Australia wants to remain open to capital, but on terms that allow it to tighten scrutiny where national-interest or security risks are concentrated.

The meta-layer: how regulation itself is being regulated

Threaded through these sectoral moves are efforts to codify what “good regulation” looks like from the centre of government. The Regulatory Policy, Practice and Performance Framework (RPPPF), released in 2025, articulates principles that regulators and policymakers are expected to apply: targeted and risk-based interventions, user-centred design, integration with broader systems, evidence-driven decision-making, digital adaptability, and continuous improvement.

In practice, frameworks like the RPPPF matter in three ways. They provide cover for regulators who want to redesign their own processes; they give regulated entities a language to push back when interventions feel out of step with those principles; and they offer Treasury and Finance a lens for reviewing portfolios and budgets.

Relatedly, the APS digital-workforce and capability agenda – of which the AI Plan is one part – acknowledges that regulation is only as good as the people and systems enforcing it. Without enough digitally literate regulators, cyber-security rules, AI guidance, and complex financial-services reforms will either be under-enforced or enforced unevenly.

Finally, 2025 also kept a spotlight on legal-profession accountability, defamation, and gambling advertising.

Investigative reporting raised uncomfortable questions about whether legal regulators are willing and able to discipline misconduct, and whether Australia’s defamation laws unduly chill public-interest journalism – issues with direct implications for how regulatory failure is surfaced and debated. Media companies’ reliance on gambling advertising sharpened the stakes of any future reforms in that domain. 

While not yet crystallised into major legislative packages, these debates form part of the backdrop against which other regulatory choices are made.

Three takeaways – and the 2026 test

Looking back over 2025, three themes stand out in Australia’s regulatory story.

First, implementation has finally caught up with diagnosis. In online safety, aged care, cyber security, and procurement, years of inquiries and consultations have yielded concrete obligations with dates, penalties, and named regulators. The question for 2026 is less “what should we do?” and more “can the system actually do what it has promised?”

Second, digital governance now runs through almost every major reform. Whether the subject is children on social media, ransomware payments, AI in government, or digital assets in financial markets, the core questions revolve around data, platforms, and cross-border technology firms. Australia is positioning itself as both a demanding regulator of those firms and a sophisticated digital actor in its own right.

Third, the state is trying to regulate itself as much as others. The RPPPF, AI assurance frameworks, APS workforce plans, and public debates about legal accountability all indicate a recognition that legitimacy depends not just on the content of rules, but on how transparently and competently they are made and enforced.

The real test will come throughout 2026. The social media age ban will move from legislative text to lived reality. The digital duty-of-care consultation will either harden into enforceable duties or be reshaped by backlash. Aged-care reforms will face their first full year of operation, cyber-security reporting will leave its education phase, and digital-asset licensing will start to bite. 

By this time next year, it will be clearer whether 2025 marked the beginning of a more coherent, rights-aware, digitally mature regulatory state – or simply the latest wave of obligations in an already crowded sea.


Paul Leavoy

Paul Leavoy is Managing Editor of The Modern Regulator and a seasoned journalist and regulatory analyst with over two decades of experience writing about technology, public policy, and regulation.
