Australia mandates AI training for 200,000 public servants

Canberra has launched the most comprehensive public sector AI transformation in the Commonwealth. But businesses warn that unclear rules on mandatory guardrails are delaying billions in investment – and the plan offers no resolution.
  • What’s happening: Australia has launched a whole-of-government AI plan that mandates training for 200,000 public servants, appoints Chief AI Officers in every agency, and rolls out a secure APS-wide chatbot.
  • Why it matters: The plan is the most ambitious public-sector AI program in the Commonwealth, yet unresolved guardrails and a still-unsettled regulatory framework are delaying private-sector investment and creating uncertainty for regulators.
  • What’s next: Canberra will finalise its approach to mandatory AI guardrails and release a National AI Plan, decisions that will determine whether the APS strategy delivers coherence or exposes deeper gaps.


By year’s end, every one of Australia’s 200,000 public servants will have completed mandatory AI training. By July, every Commonwealth agency – from Treasury to the tax office – will have appointed a Chief AI Officer. And from April, a secure government chatbot will sit on every desktop, ready to draft briefings, summarise reports, and answer questions about policy.

That is the scale of change Finance Minister Katy Gallagher unveiled on 12 November. The AI Plan for the Australian Public Service 2025 is the most comprehensive attempt by any Commonwealth government to embed artificial intelligence into the machinery of state. It mandates capability, demands governance, and promises infrastructure. 

Whether it delivers depends on factors Canberra cannot fully control.

The plan addresses a capability deficit that has left Australia trailing its peers. Only 22% of Australian organisations report full confidence in AI, according to a November report by Cisco Australia and the Governance Institute of Australia. Some 64% provide no AI training for staff. The Reserve Bank’s liaison survey found firms cautious about adoption, citing regulatory uncertainty, skills shortages, and difficulty identifying high-value use cases.

Minister Gallagher framed the plan as essential to Australia’s economic resilience. “Trust is our licence to operate,” she said at the launch. “AI adoption is not about replacing people – it’s about unlocking new capabilities.”

But that reassurance has done little to settle union concerns. 

The Community and Public Sector Union welcomed training commitments while warning that consultation with workers “is not optional – it must start now and be ongoing.” The Australian Council of Trade Unions has called for mandatory AI Implementation Agreements, job guarantees, and a new National AI Authority to regulate rollout across all sectors.

A more coherent architecture

Rather than a long catalogue of tasks, the plan hangs on three ideas: earning trust, lifting capability, and building the infrastructure to support both. 

Canberra wants agencies to apply the same disciplines to AI that they apply to finance, risk, and security. That means refreshed governance policies, clearer lines of accountability, and a central review mechanism for any system that could affect rights or entitlements. Suppliers will need to disclose where AI sits inside their products, and a register of AI assessments will give auditors a single view of how systems are being used across the Commonwealth.

Capability is the plan’s anchor. Every public servant will complete foundational training by December, and senior leaders will take tailored courses to help them steer AI-driven change. Each agency will appoint a Chief AI Officer with responsibility for literacy, risk, and benefits tracking. The Public Service Commission will also require structured consultation with staff and unions as AI begins to reshape work.

Infrastructure may be the hardest part to deliver. GovAI, already live, offers access to multiple AI models on sovereign systems, along with sandboxes for agencies building their own tools. A secure APS-wide chatbot is in pilot and will eventually operate at the PROTECTED level. Finance has also set up an AI Delivery and Enablement team to coordinate efforts and avoid duplication, backed by a shared library of approved applications and reusable code.

A new whole-of-government cloud policy is also slated to arrive soon. Its aim is to tie AI operations to sovereignty standards as well as consistent risk and security frameworks while closing the loop between governance, capability, and technology.

Implications for regulators

For regulators, the plan creates both opportunity and obligation. 

Access to GovAI infrastructure offers the potential to develop custom AI applications for compliance monitoring, risk analysis, case triage, and fraud detection – activities that could reduce administrative burden and free staff for higher-value work. The shared use-case library may accelerate deployment of proven applications across agencies, reducing costs and development time.

But the mandate to appoint Chief AI Officers, implement governance structures, and conduct risk assessments for every AI deployment requires capability that many agencies do not yet possess. Regulatory agencies will need to rapidly uplift AI literacy across all staff, develop specialised skills in AI risk assessment and algorithmic auditing, and integrate AI governance into existing risk management frameworks. High-risk regulatory AI systems – those used in enforcement, investigation, licensing, or automated decision-making affecting individuals – will require AI Review Committee oversight, demanding robust assurance documentation and ethical frameworks.

The plan aligns AI governance with the Commonwealth Risk Management Policy and the Protective Security Policy Framework, but regulatory agencies must work out how to apply these standards to AI systems that may exhibit bias, data exposure risks, model drift, and accountability gaps. Data governance becomes more complex: agencies must ensure training data, model inputs, and outputs comply with privacy laws, are free from inappropriate bias, and are adequately protected from unauthorised access.

Clarity lags the ambition

The government has drawn on international experience while tailoring provisions to Australian conditions. The approach sits between the EU’s comprehensive, prescriptive AI Act and the UK’s more flexible, guidance-driven strategy. Singapore’s principles-based Model AI Governance Framework and GovTech platforms have influenced the design, while the US’s recent deregulatory turn highlights the divergence in national approaches.

But the regulatory environment remains unsettled. 

In September, the government released a Voluntary AI Safety Standard covering 10 guardrails, including accountability, transparency, and human oversight. It also opened consultation on mandatory guardrails for high-risk AI applications. That consultation has closed, but the government has not yet announced which legislative approach it will pursue – a new AI Act similar to the EU’s comprehensive regime, framework legislation across existing laws, or amendments to sector-specific statutes.

This uncertainty is costly. The Cisco and Governance Institute report warned that Australia risks losing a $142 billion AI opportunity if regulatory clarity does not improve. The Reserve Bank noted that firms are delaying investment until compliance obligations become clear. Industry groups argue that Australia must align with international frameworks – particularly those in the EU, UK, and Singapore – to avoid fragmenting compliance requirements and deterring investment.

Minister for Industry and Innovation Tim Ayres is expected to release a National AI Plan by year’s end. That plan, covering the economy more broadly, is built on three principles: capturing opportunities, sharing benefits, and keeping Australians safe. How it reconciles the APS plan’s mandatory approach with the voluntary framework currently available to private sector organisations will determine whether the government’s AI strategy delivers coherence or contradiction.

The APS plan now has dates, infrastructure, and a pathway to adoption. What it lacks is control over the conditions that will determine its success: a skilled workforce, cooperative vendors, and a settled regulatory framework. Canberra can mandate training and stand up platforms. What it cannot guarantee is that the law will keep pace with the technology it is trying to harness.

TMR Editorial Staff

Our editors bring clarity and rigour to fast-moving regulatory developments through trusted sources and informed analysis.