The regulator’s AI problem: when federal policy fails, provinces fill the gap

In January 2026, a mid-sized Ontario regulator receives notice that it must comply with new artificial intelligence standards under Bill 194.

The legislation requires public sector entities to establish AI accountability frameworks, implement risk management protocols, ensure human oversight, and meet technical standards yet to be prescribed by regulation.

The regulator has a problem.

It lacks AI specialists. Its most technology‑literate staff manage legacy databases and coordinate with external IT contractors. No one on the team has formal training in machine learning, algorithmic bias detection, or model explainability. 

The federal Regulators’ Capacity Fund, which might have supported capability building, exhausted its C$14.2 million budget in March 2025 with no replacement announced. The federal AI legislation that was meant to provide regulatory clarity – the Artificial Intelligence and Data Act (AIDA) – died in a parliamentary committee when Parliament prorogued.

So where does the regulator turn? The answer exposes the structural problem facing Canadian regulators. 

Federal AI policy collapsed without producing a coherent framework. Provinces responded with divergent requirements. And regulators now face expectations to oversee AI use in their sectors while adopting it internally, all without the legislative scaffolding, capability support, or institutional expertise required to do either credibly. 

Canada is developing the fragmented “mini‑EU” regulatory landscape that a national strategy was meant to prevent, but without building the regulator capability that makes such a system navigable.

A national strategy that never arrived

Canada’s federal AI legislation promised a risk‑based framework distinguishing high‑impact systems from routine applications, an AI and Data Commissioner with investigative authority, and mandatory impact assessments for systems posing significant risks. It spent years in development and drew international attention as a potential model for proportionate AI governance.

The bill never made it into law. Analysis from the Montreal AI Ethics Institute notes that AIDA “languished and died in a parliamentary committee, unable to secure the confidence and political will needed to proceed through the legislative process.”

The scope remained contested, definitions of “high‑impact system” proved difficult to settle, stakeholder engagement was limited, and the bill fell victim to broader political instability. Industry worried about compliance burdens without corresponding legal clarity. Civil society questioned enforcement mechanisms. By the time Parliament prorogued, AIDA had become politically unsalvageable.

The government’s current stance suggests it will not try again. Evan Solomon, appointed in May 2025 as Canada’s first Minister of Artificial Intelligence, has been explicit: “We will not over‑index on AI regulation. There exists a balance between what I perceive as the EU’s over‑regulation, which stifles innovation, and the more lenient approaches in the U.S. and China.”

The focus has shifted to narrower privacy concerns – children’s online safety, deepfakes, chatbot age restrictions – rather than comprehensive AI oversight. Federal Budget 2025 allocated $925 million to AI infrastructure and research, but nothing to regulatory capacity.

The decision reflects a political judgment about where risk and innovation pressures sit. But it leaves regulators to manage AI oversight within a fragmented landscape, using legislative tools not designed for algorithmic systems and without the institutional support AIDA might have provided.

Provincial fragmentation emerges

Provinces have responded to the federal vacuum by building their own AI frameworks. The result is jurisdictional divergence with limited coordination and growing complexity for entities operating nationally.

Quebec acted first. In August 2025, the Autorité des marchés financiers (AMF) published a draft guideline on AI use by financial institutions. The guideline is comprehensive: institutions must maintain central AI inventories, classify systems by risk, perform regular assessments, obtain customer consent for data uses, ensure explanations are available for AI‑influenced decisions, and apply governance controls throughout each system’s lifecycle.

Industry response was blunt. The Canadian Forum for Financial Markets (CFFiM) recommended “a pause and reset”, arguing that AI risks are already addressed by existing technology‑neutral frameworks – AMF’s own ICT and integrated risk management guidelines, plus OSFI’s Guideline E‑23 on Model Risk Management. 

Creating AI‑specific rules, the industry submission argued, fragments oversight by technology type and imposes prescriptive requirements – mandatory AI managers, annual reviews regardless of materiality – without clear evidence they improve risk management. Institutions subject to both federal and provincial oversight now face layered obligations that do not align neatly.

Ontario took a different path. 

Bill 194, which received Royal Assent in November 2024, mandates AI accountability frameworks and risk management for public bodies, with technical standards to follow by regulation. 

The Ontario Human Rights Commission called for stronger foundations – a statutory requirement that AI systems be valid, reliable, transparent, and rights‑affirming, plus mandatory impact assessments and an AI registry. 

None of these recommendations were adopted. Public sector institutions now face accountability obligations without explicit human‑rights safeguards, impact assessment requirements, or operational guidance on explainability.

The patchwork extends further. Quebec’s Law 25 requires transparency when decisions are made through automated processing. Several jurisdictions have introduced AI disclosure rules for recruitment. The C.D. Howe Institute warns of market fragmentation without federal coordination. 

Provincial divergence was predictable after AIDA’s collapse, but it creates real compliance challenges for multi‑jurisdictional entities and poses supervision challenges for regulators who must interpret overlapping, sometimes contradictory, requirements.

One regulator built the capability – most did not

The Canadian Securities Administrators (CSA) provides the exception. Between June and November 2025, the CSA used machine learning to identify and deactivate 6,918 fake URLs across 3,961 fraudulent investment websites. The system, procured by the Ontario Securities Commission (OSC) on behalf of the CSA and the Canadian Investment Regulatory Organization (CIRO), scans millions of reports daily using external machine‑learning models to detect fraud patterns in near real time.

This capability took seven years to build. From 2018, CSA members hired data scientists and blockchain specialists. An Enforcement Technology and Analytics Working Group was established to share tools and track AI developments. 

The Market Analysis Platform launched in 2020 to detect market abuse across venues. By December 2024, the CSA published Staff Notice 11‑348, consulting on AI governance, explainability, conflicts of interest, and the need for “adequate AI literacy” among both regulators and registrants.

Stan Magidson, CSA Chair and Chief Executive of the Alberta Securities Commission, framed the challenge directly: “The rapid evolution of AI provides opportunities and challenges for Canadian capital markets. Our goal is to support responsible innovation that benefits investors and market participants, while addressing risks associated with the use of these systems.”

The CSA achieved this by building AI capability internally, deploying it operationally, and developing oversight frameworks in parallel – without waiting for federal legislative permission.

Most Canadian regulators have not followed this path. 

The federal Regulators’ Capacity Fund supported 37 projects from 2020 to 2025, including AI and machine‑learning experiments in text analysis and triage. The fund closed in March 2025. An internal lessons‑learned report recommended continuation, but no replacement has been announced.

Canada’s broader AI literacy deficit compounds the problem. A 2025 KPMG survey found Canadians rank 44th out of 47 countries in AI training and literacy. Only 24 per cent reported AI training, versus 39 per cent globally. KPMG Canada CEO Benjie Thomas observed that “low literacy in AI is holding Canadians back from trusting the technology, and that’s a major barrier to adoption.”

The federal government’s Digital Talent Strategy acknowledges gaps but notes that implementation activities remain in early stages. Public job postings show generic IT roles but few AI governance or algorithmic audit positions.

The Office of the Privacy Commissioner has launched investigations into AI‑related practices and published principles for responsible AI use, but there is no public evidence of dedicated AI forensic capacity. 

OSFI’s guidance on technology and model risk is relevant to AI but generic, with no AI‑specific supervisory strategy published.

The Bank for International Settlements warns that “a lack of knowledge of relevant technologies and how financial institutions are using them could lead to blind spots in regulatory and supervisory frameworks.” 

In Canada, capability‑building has been sporadic and project‑based rather than systematic.

The asymmetry problem

The AMF’s draft guideline expects financial institutions to maintain AI inventories, classify systems by risk, test for bias, ensure explainability, and implement lifecycle governance. Ontario’s Bill 194 expects public bodies to establish accountability frameworks and comply with technical standards to be prescribed. And the CSA’s guidance expects registrants to manage model risk, document systems, and ensure staff understand AI tools.

Institutions are being asked to demonstrate sophisticated AI governance while many regulators lack the capability to assess whether that governance is effective. The financial industry’s submission on the AMF draft stressed technology‑neutral frameworks and warned against creating rules supervisors cannot practically apply. 

If a regulator requires institutions to classify AI systems by risk, it must understand those systems well enough to challenge the classifications.

KPMG’s analysis of AI auditing points to the “black box” character of many models and the difficulty of tracing decision paths. These challenges are significant even for private sector auditors with specialist teams. Public‑sector regulators with limited technical staff risk being unable to audit the systems they are mandated to supervise.

International experience provides a contrasting model. Australia’s APS AI Plan mandates AI training for public servants, requires Chief AI Officer appointments, and sets a December 2026 assessment deadline to evaluate progress. The UK’s FCA has deployed AI tools internally and tested supervision methods within existing accountability frameworks, treating AI capability as infrastructure that must be built inside the regulatory system.

Canada’s approach has been ad hoc. 

The CSA’s success demonstrates what deliberate capability‑building can achieve. But the closure of the Regulators’ Capacity Fund, the absence of a replacement, and the lack of a whole‑of‑government AI strategy together leave most regulators responsible for overseeing systems they may not fully understand, using tools they have not been trained to use.

What this means in practice

New federal privacy legislation is expected in 2026, likely incorporating elements of Bill C‑27’s privacy reforms around children’s online safety and automated decision‑making. 

Few expect AIDA to return. Provinces will continue developing their own AI frameworks. Fragmentation will likely deepen before any federal harmonisation is attempted.

Canadian regulators face three practical implications.

First, capability gaps will need to be addressed locally rather than through federal programmes. The CSA model – sustained hiring of specialists, dedicated working groups, integration of AI into core systems, guidance developed alongside internal deployment – is replicable, but requires budget, mandate, and leadership commitment. Regulatory authorities that wait for federal support may be waiting indefinitely.

Second, procurement and supervisory posture become critical risk management tools. Regulators that lack in‑house AI expertise must decide whether to build it, buy it through contractors, or accept supervision limitations. Those choices shape institutional credibility. A regulator that issues AI guidance but cannot explain what “adequate explainability” means in practice will struggle when challenged.

Third, cross‑jurisdictional coordination becomes more valuable as fragmentation increases. Where federal frameworks are absent, informal coordination among provincial and sectoral regulators – sharing expertise, aligning supervisory expectations, pooling procurement – can mitigate some compliance complexity and capability gaps. The CSA’s working groups offer one template; sector‑specific forums offer others.

The authority risk

The deeper risk is erosion of regulatory authority. 

If regulators are seen to set expectations they cannot audit or enforce, industry and public confidence weakens. When the first major AI‑related harm occurs in a regulated sector, questions will focus not only on the firm involved but on whether the regulator had the tools, skills, and mandate to detect and prevent it.

Federal AI policy collapsed without producing a workable framework. Provinces filled the vacuum with divergent rules. Canadian regulators now manage oversight responsibilities without the capability infrastructure those responsibilities demand. 

That position is both understandable – given fiscal constraints and political uncertainty – and risky. Regulatory legitimacy depends on competence. Competence, in an AI‑saturated economy, depends on literacy, expertise, and institutional capacity that most Canadian regulators do not yet possess.

Paul Leavoy

Paul Leavoy is Managing Editor of The Modern Regulator and a seasoned journalist and regulatory analyst with over two decades of experience writing about technology, public policy, and regulation.
