Your smart lock has a safety rating. Why not a security rating?
In 2016, a botnet called Mirai compromised more than 600,000 internet-connected devices – cameras, routers, digital video recorders – by trying a list of more than 60 factory-default username and password combinations. The result was one of the largest distributed denial-of-service attacks the internet had ever seen, knocking major websites offline across the United States and Europe. One Chinese manufacturer, Xiongmai Technologies, recalled a portion of its products. Most manufacturers did not.
The devices were not broken. They worked exactly as designed. They just happened to ship with the digital equivalent of a lock that opens to any key.
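The mechanism Mirai automated is simple enough to sketch in reverse: instead of guessing credentials to break in, a manufacturer or auditor can check a fleet against a list of known factory defaults before shipping or deploying. The credential pairs and hostnames below are illustrative, not Mirai's actual dictionary; this is a minimal sketch of the idea, not a production tool.

```python
# Minimal sketch of a default-credential audit – the defensive inverse of
# what Mirai automated at scale. The entries below are illustrative;
# Mirai's real dictionary held roughly 60 factory-default pairs.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "password"),
    ("user", "user"),
}

def is_factory_default(username: str, password: str) -> bool:
    """Return True if the credential pair matches a known factory default."""
    return (username.lower(), password.lower()) in KNOWN_DEFAULTS

def audit(devices):
    """Yield the hosts still running credentials a botnet would guess first."""
    for device in devices:
        if is_factory_default(device["user"], device["password"]):
            yield device["host"]

# Hypothetical fleet inventory for illustration.
fleet = [
    {"host": "cam-01.local", "user": "admin", "password": "admin"},
    {"host": "dvr-02.local", "user": "ops", "password": "X9!k2#long"},
]
print(list(audit(fleet)))  # → ['cam-01.local']
```

A check this trivial is the point: the vulnerability Mirai exploited required no sophistication to find, and requires none to detect.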
A decade later, governments have started treating that kind of negligence the way they treat a toaster without a safety switch. On 4 March 2026, Australia’s smart device security standards took effect, requiring manufacturers of consumer IoT products to eliminate default passwords, publish a vulnerability disclosure channel, and tell buyers how long security updates will last. The United Kingdom has had equivalent rules in force since April 2024. The European Union’s Cyber Resilience Act will impose far broader obligations from late 2027.
The shift these laws represent is more conceptual than technical. Governments have decided that an insecure connected device is a defective product – and that cybersecurity, for a growing class of consumer goods, is a product safety obligation.
That is the right call. The harder question is whether the agencies now responsible for making it real are equipped to do so.
The wrong department for the job
Consider the institutional choices at hand.
Australia gave enforcement to the Department of Home Affairs – a national security agency. The UK gave it to the Office for Product Safety and Standards (OPSS) – a product safety regulator that already runs market surveillance, conducts testing, and manages recalls. The EU will distribute it across 27 national market surveillance authorities, many of which currently handle everything from toy safety to electrical equipment.
Each choice carries a different assumption about what enforcing cybersecurity-as-product-safety actually looks like in practice. And none of them is obviously correct.
A national security department understands threat landscapes and attack vectors. What it does not typically possess is a programme for pulling non-compliant consumer goods off retail shelves. A product safety regulator knows how to test, inspect, and recall. What it may lack is the technical depth to assess whether a device’s vulnerability disclosure process is genuine or performative. And a market surveillance authority already stretched across thousands of product categories may not have capacity to absorb an entirely new class of digital compliance.
The UK’s experience is instructive. Nearly two years after the Product Security and Telecommunications Infrastructure (PSTI) Act came into force, OPSS’s published enforcement record does not include any actions identified as taken under the consumer connectable product security regulations. That does not prove nothing has happened – publication may lag, and early engagement may be informal. But it does mean that on the public record, a regime backed by penalties of up to £10 million or 4% of global revenue has been quiet.
OPSS describes its posture as “risk-based, pragmatic and proportionate”, taking into account “the maturity of the legislation”. That is a reasonable position in the early phase of a new regime. It is also a posture that, if extended indefinitely, risks making the law decorative.
Australia’s regime is a week old. Whether Home Affairs develops the market surveillance capacity to conduct proactive checks – or relies on a reactive, complaint-driven model – will determine whether the Australian standard becomes something manufacturers take seriously or merely certify against.
The countries still watching
Canada and New Zealand have moved on cybersecurity, but not for consumer devices. Canada’s Bill C-8 targets critical infrastructure operators and telecoms providers with penalties of up to C$15 million per contravention. New Zealand’s new Cyber Security Strategy and accompanying consultation propose mandatory obligations for providers of seven essential services, with director liability of up to NZ$500,000 for the most serious breaches. Both focus on critical systems. Neither creates a mandatory baseline for the devices consumers actually buy.
The implicit assumption is that products manufactured to meet Australian, UK, or EU standards will flow to every market. That assumption is comforting but not reliable. Regulatory arbitrage in product safety is well documented. Manufacturers routinely produce compliant versions of a product for regulated markets and lower-specification versions for those without requirements. A smart camera sold in Sydney under mandatory security standards may not be the same smart camera sold in Auckland or Toronto under none.
The regulator next door
The most consequential audience for this shift may be regulators who do not think of themselves as cyber agencies at all.
Smart meters now sit in energy networks. Connected diagnostic devices populate hospital wards. IoT-enabled payment terminals process financial transactions. A cybersecurity failure in any of those products is simultaneously a device defect, a service reliability event, and potentially a privacy breach. Which regulator owns the response?
In Australia, the answer is fragmented. Home Affairs holds the cybersecurity enforcement power. The Australian Competition and Consumer Commission (ACCC) handles product safety, the Office of the Australian Information Commissioner (OAIC) deals with privacy, and the Australian Communications and Media Authority (ACMA) covers communications equipment. Sector regulators supervise the services that depend on all of these devices. The coordination mechanisms between them – referral pathways, shared intelligence, memoranda of understanding – are not yet visible.
That gap matters because the first serious enforcement test is unlikely to arrive neatly labelled as a “smart device cybersecurity issue”. It will arrive as a parent discovering a stranger watching through a baby monitor. As a hospital network disrupted by a compromised diagnostic sensor. As a building management system hijacked through an insecure router. The technical root cause will be cybersecurity. The visible harm will be something else entirely.
The real test
The question is not whether “secure by design” is a good idea. The Mirai botnet answered that in 2016. Nor is it whether mandatory standards are justified – the UK, the EU, and now Australia have all concluded they are.
The question is whether the enforcement architecture matches the ambition. Whether a national security department can learn to run product recalls. Whether a product safety regulator can develop the technical depth to audit firmware. Whether 27 national market surveillance authorities can absorb digital product compliance on top of everything else they already do.
Two years into the UK’s regime, the public record offers no clear answer. Australia’s regime is a week old. The EU’s full requirements do not apply until December 2027.
A standard on paper proves little; enforcement is the real test.
Paul Leavoy