
Canada’s securities regulators are quietly winning the enforcement technology race


Between June and November 2025, the Canadian Securities Administrators (CSA) disarmed 3,961 fraudulent investment websites – nearly 4,000 digital traps designed to fleece retail investors through fake trading platforms and cryptocurrency scams. 

The scale is striking, but the method is more so: machine learning algorithms scanning millions of reports daily, flagging suspicious sites, then coordinating with internet service providers to shut them down before most investors ever see them.

The work is proactive – sites removed before they cause harm, rather than prosecutions launched after the damage is done. And it is happening while Canada’s federal artificial intelligence policy remains in limbo, with its proposed AI legislation shelved in early 2025 and its dedicated fund for building regulator capacity quietly exhausted in March with no replacement announced.

The contrast is sharp. Federal policymakers have spent years debating how to regulate AI. Canada’s securities regulators have simply deployed it. 

For regulators elsewhere facing the same dilemma – whether to use advanced technology before formal frameworks exist – the CSA’s approach offers a practical answer: enforcement effectiveness increasingly depends on speed, infrastructure, and partnerships, not statutory expansion.

The fraud problem that enforcement couldn’t solve

Investment fraud has long existed, but the internet and artificial intelligence have industrialised it. 

Scammers now generate slick websites with live trading charts, chatbot plugins, fake regulatory badges, and AI-written testimonials at a pace traditional enforcement cannot match. By the time a regulator investigates, prosecutes, and wins a court order, the perpetrators have moved on – often to a dozen new domains.

Stan Magidson, chair of the Canadian Securities Administrators and chief executive of the Alberta Securities Commission, put it plainly: “Aided by advancements in technology, the number of scam investment websites has grown significantly in recent years”. The implication is clear – regulatory tools built for a slower, analogue world are insufficient.

The Australian Securities and Investments Commission (ASIC) learned this early. Since deploying its own fraud detection capability roughly two years ago, ASIC has knocked out more than 14,000 investment scam and phishing websites, averaging 130 takedowns per week. Sarah Court, ASIC’s deputy chair, has noted that “traditional toolkit – investigations, court actions, administrative actions – are important, but they can’t combat the scourge of online scams on their own”.

In May 2025, the International Organization of Securities Commissions called on internet platform providers to conduct due diligence on unauthorised offerings and develop internal fraud detection systems. In March, IOSCO launched the International Securities & Commodities Alerts Network (I-SCAN), a global warning system where investors and platform providers can check whether a company has been flagged by regulators worldwide. The message from the international regulatory community is consistent: enforcement speed now depends on technology.

How Canada’s securities regulators built the capability

The CSA’s fraud detection system rests on infrastructure the regulator has been building since 2018, when it began hiring data scientists, analysts, and blockchain specialists to handle the influx of cryptocurrency and algorithmic trading. In September of that year, the CSA selected Kx, a division of First Derivatives, to build the Market Analysis Platform (MAP) – a centralised data repository and analytics system allowing all CSA members to identify and analyse market misconduct across Canadian exchanges. MAP went live in October 2020.

Parallel to that, the CSA established an Enforcement Technology and Analytics Working Group to facilitate information sharing on technology use across enforcement teams, track developments in AI and machine learning, and develop detection tools. The group has conducted training on mobile forensics and open-source intelligence, and developed a reference framework for establishing digital forensic laboratories in cloud-based environments.

By December 2024, the CSA was confident enough in its understanding of AI to publish guidance on how securities laws apply to AI systems used in capital markets. The guidance addresses explainability requirements, human oversight, and the challenge of “black box” algorithms that offer high performance but low transparency.

Notably, the CSA applied an “activity not technology” principle – it regulates conduct, not the technology itself. This approach sidesteps the legislative paralysis that has stalled broader Canadian AI policy and allows the regulator to use AI tools internally even as it shapes expectations for how market participants deploy them.

When the Ontario Securities Commission led the procurement and testing of the fraud detection capability on behalf of the CSA and the Canadian Investment Regulatory Organization, it drew directly on that foundation.

The system itself relies on an external technology service provider, whose identity the CSA has not disclosed. The provider’s algorithms scan millions of reports daily, flagging websites that display hallmarks of fraud, including fake regulatory credentials, cloned branding of legitimate firms, or suspicious domain registration patterns. Once flagged, the CSA works with internet service providers to deactivate the sites or block access to them. Between 5 June and 23 November 2025, the system identified and deactivated 6,918 fake URLs spanning 3,961 distinct websites.
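The CSA has not disclosed its provider’s methods, so any technical detail is conjecture. As a purely hypothetical sketch, though, a first-pass triage of the hallmarks listed above – cloned branding, fake regulatory credentials, suspiciously young domains – might score sites like this (all names, thresholds, and weights here are invented for illustration):

```python
# Hypothetical heuristic scorer for triaging reported investment sites.
# Loosely mirrors the fraud hallmarks described in the article; the
# brand list, regulator terms, weights, and threshold are all invented.
from dataclasses import dataclass

KNOWN_BRANDS = {"acmecapital", "maplewealth"}      # hypothetical legitimate firms
REGULATOR_TERMS = {"csa", "osc", "sec", "fca"}     # names scammers invoke

@dataclass
class SiteReport:
    domain: str
    page_text: str
    domain_age_days: int

def fraud_score(report: SiteReport) -> float:
    """Return a 0-1 score; higher means more hallmarks of fraud."""
    score = 0.0
    name = report.domain.split(".")[0].lower()
    text = report.page_text.lower()
    # 1. Cloned branding: a real firm's name embedded in an unrelated domain.
    if any(b in name and name != b for b in KNOWN_BRANDS):
        score += 0.4
    # 2. Fake regulatory credentials: regulator names used as marketing copy.
    if any(f"{t} approved" in text or f"{t} licensed" in text for t in REGULATOR_TERMS):
        score += 0.3
    # 3. Suspicious registration pattern: very recently created domains.
    if report.domain_age_days < 30:
        score += 0.3
    return min(score, 1.0)

def flag_for_review(reports, threshold=0.6):
    """Triage step only: flagged sites go to human review, not auto-takedown."""
    return [r.domain for r in reports if fraud_score(r) >= threshold]
```

A production system would presumably use trained models rather than fixed weights, but the shape – score at scale, then route high-scoring sites to ISP takedown workflows – matches the pipeline the CSA describes.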

Grant Vingoe, chair of the CSA’s Policy Coordination Committee and chief executive of the Ontario Securities Commission, described the results as proof of concept: “The success of this initial phase shows that this technology can make a real difference, and the results show the impact it is already having”.

The federal policy vacuum

While the CSA has operationalised machine learning for enforcement, Canada’s broader effort to regulate AI has collapsed. Bill C-27, which included the Artificial Intelligence and Data Act, promised a risk-based framework with an AI and Data Commissioner empowered to oversee high-impact systems. The bill was shelved in early 2025 after being criticised as overly cautious and for inadequate stakeholder consultation.

The real failure is one of misaligned priorities. The federal government’s Regulators’ Capacity Fund, which allocated C$10 million between 2019 and 2022 to help regulators build capabilities in economic analysis, competitiveness assessment, and regulatory experimentation, exhausted its budget in March 2025 with no renewal announced. Meanwhile, the 2025 federal budget committed C$925 million to AI infrastructure – supercomputers and data centres – but offered no corresponding investment in regulatory capability or legislative clarity. Canada is building computational power while letting institutional capacity atrophy.

What remains is fragmentation. Quebec has enacted privacy reforms that touch on AI. Ontario has mandated AI disclosure standards for the public sector. Quebec’s financial regulator, the Autorité des marchés financiers, has published draft guidelines on AI use by financial institutions. With no federal framework, Canada is developing what one recent analysis called a “mini-EU landscape” of 13 distinct provincial and territorial regulatory regimes.

Against this backdrop, the CSA’s achievement stands out. It demonstrates that operational AI adoption by regulators does not require legislative permission or centralised capacity funds – but it does require sustained investment in talent, technology, and partnerships.

What this means for regulatory practice

The CSA’s approach offers lessons, but not a universal template. The capability is replicable in its governance structure: the Ontario Securities Commission led a centralised procurement on behalf of multiple CSA members and the Canadian Investment Regulatory Organization, pooling resources and avoiding duplication. The partnership with internet service providers to execute rapid takedowns is also transferable. So too is the principle of building foundational analytics infrastructure – like MAP – before deploying specialised tools.

What is less replicable is context. Canada’s securities regulators benefit from federal-provincial coordination mechanisms, a relatively concentrated capital markets ecosystem, and existing data infrastructure. Smaller or more siloed agencies may struggle to assemble the same institutional preconditions.

The CSA has also not abandoned human oversight. Its December 2024 AI guidance acknowledges the tension between explainability and advanced capability – some of the most powerful machine learning models are also the hardest to interpret. The CSA’s answer is not to ban black-box systems, but to require alternative testing mechanisms, bias detection tools, and clear accountability when AI-driven decisions cause harm. This reflects a pragmatic understanding: regulators will use AI, and so will the entities they regulate. The challenge is ensuring both do so responsibly.

The governance challenge

The CSA’s fraud detection capability is in its “initial phase”, according to Vingoe. The regulator has not disclosed the cost, the duration of its contract with the technology provider, or the rate of false positives. Those details will matter as the system scales.

So too will questions about accountability. How much human review happens before a website is flagged for deactivation? What recourse exists for entities incorrectly identified as fraudulent? The speed of automated takedowns is a feature, but it also compresses the window for error correction. ASIC’s experience offers a preview: with 130 sites removed weekly, even a modest false positive rate could create reputational or due process risks.
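The scale of that risk is easy to quantify. Assuming, purely for illustration, a 2 per cent false positive rate at ASIC’s published pace of 130 takedowns per week:

```python
# Back-of-envelope estimate of false-positive exposure at ASIC's pace.
# The 2% false-positive rate is an assumption for illustration only;
# neither ASIC nor the CSA has published such a figure.
takedowns_per_week = 130   # ASIC's reported average
fp_rate = 0.02             # assumed false-positive rate
wrongly_removed_per_year = takedowns_per_week * 52 * fp_rate
print(round(wrongly_removed_per_year))  # ~135 legitimate sites per year
```

Even at that modest error rate, roughly 135 legitimate sites a year could be taken offline – which is why the appeal and audit mechanisms discussed below matter.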

The CSA will need to develop robust appeal mechanisms, transparent criteria for flagging, and regular audits of algorithmic accuracy. These safeguards do not need to slow the system down, but they do need to exist. Regulators using AI to enforce rules in unregulated or lightly regulated AI environments carry a particular burden: they must demonstrate that their own use of the technology meets the standards of fairness and accountability they would demand of others.

International momentum

Internationally, the CSA is not alone. ASIC has been removing scam websites at scale for two years and expanded its capability in August 2025 to include social media advertisements. The UK’s Financial Conduct Authority has led crackdowns on illegal “finfluencers”, resulting in more than 650 takedown requests and the removal of over 50 unauthorised websites. IOSCO’s I-SCAN system now allows regulators to share warnings globally, collapsing the information gap that scammers exploit when they move between jurisdictions.

What distinguishes the CSA is the speed at which it has caught up. ASIC’s 14,000 takedowns span two years; the CSA deactivated nearly 4,000 websites in six months. That velocity suggests the underlying technology is maturing and that regulators entering the field now can benefit from the groundwork laid by early movers.

What regulators should do next

For agencies considering similar capabilities, the CSA’s experience points to a clear sequence. Invest early in data infrastructure and specialist staff. Centralise procurement where possible to avoid duplication and leverage scale. Prioritise partnerships with internet intermediaries, who control the infrastructure necessary for rapid enforcement. Accept that AI systems will be imperfect and design accountability mechanisms accordingly.

Most importantly, regulators should recognise that the debate over whether to use AI is largely over. The question now is how to use it responsibly, transparently, and effectively. 

Canada’s securities regulators have shown that enforcement capability can be built while policy debate continues – a pragmatic answer to a dilemma that will only grow more pressing as technology accelerates and legislative processes lag behind.


TMR Editorial Staff

Our editors bring clarity and rigour to fast-moving regulatory developments through trusted sources and informed analysis.
