Australia’s national plan says existing laws are enough to regulate AI. This is false hope

Jessica Russ-Smith, Australian Catholic University; Immaculate Motsi-Omoijiade, Charles Sturt University, and Michelle D. Lazarus, Monash University

December 15, 2025

Earlier this month, Australia’s long-anticipated National AI Plan was released to a mixed reception.

The plan shifts away from the government’s previously promised mandatory AI safeguards. Instead, it’s positioned as a whole-of-government roadmap for building an “AI-enabled economy”.

The plan has raised alarm bells among experts for its lack of specificity, measurable targets, and clarity.

Globally, incidents of AI harm are growing. From major cyber crime breaches using deepfakes to disinformation campaigns fuelled by generative AI, the lack of accountability is staggering. In Australia, AI-generated child sexual abuse material is rapidly spreading, and existing laws are failing to protect victims.

Without dedicated AI regulation, Australia will leave the most vulnerable at risk of harm. But there are frameworks elsewhere in the world that we can learn from.

No dedicated AI laws in Australia

The new plan doesn’t mandate a standalone AI Act, nor does it make concrete recommendations for reforming existing laws. Instead, it establishes an AI Safety Institute and other measures, including voluntary codes of conduct.

According to Assistant Minister for Science, Technology and the Digital Economy Andrew Charlton, “the Institute will be […] working directly with regulators to make sure we’re ready to safely capture the benefits of AI with confidence.” However, the institute has been afforded only guidance and advisory powers.

Australia also has a history of blaming algorithms for legal failures, such as the Robodebt scandal. Current legal protections aren’t enough to manage existing and potential AI harms. As a result, the new AI plan risks amplifying injustices.

Legal whack-a-mole

Holding tech companies legally liable is no easy feat.

Big tech consistently seeks loopholes in existing legal systems. Tech giants Google and OpenAI claim that “fair use” provisions in US copyright law legalise data scraping.

Social media companies Meta and TikTok are exploiting existing laws – such as the broad immunity granted by the US Communications Decency Act – to avoid liability for harmful content.

Many are also using special purpose acquisition companies (essentially shell companies) to circumvent antitrust laws that target anti-competitive conduct.

The new national plan takes a “technology-neutral” approach, arguing that existing laws and regulations are sufficient to combat potential AI harms.

According to this line of thinking, concerns such as privacy breaches, consumer fraud, discrimination, copyright and workplace safety can be addressed using a light touch – regulation only where necessary. And the AI Safety Institute would be “monitoring and advising”.

The existing laws cited as sufficient include the Privacy Act, the Australian Consumer Law, current anti-discrimination, copyright and intellectual property laws, and sector-specific laws and standards, such as those in the medical field.

This might appear to be comprehensive legal oversight. But legal gaps remain, including those related to generative AI, deepfakes, and synthetic data generated for AI training.

There are also more foundational concerns around systemic algorithmic bias, autonomous decision-making and environmental risk. A lack of transparency and accountability looms large, too.

Big tech often uses legal uncertainty, lobbying and technical complexity to delay compliance and sidestep responsibility. The companies adapt while the legal system attempts to catch up – like a game of whack-a-mole.

A call to action for Australia

Just like the moles in the game, big tech often engages in “regulatory arbitrage” to circumvent the law. This means shifting to jurisdictions with less stringent laws. Under the current plan, this is now Australia.

The solution? Global consistency and harmonisation of relevant laws, to cut down on the number of locations big tech can exploit.

Two frameworks in particular offer lessons. Harmonising Australia’s national AI plan with the EU AI Act and Aotearoa New Zealand’s Māori AI Governance Framework would enhance protections for all Australians.

The EU AI Act was the world’s first AI-specific legislation. It provides clear rules on what is allowed and not allowed. AI systems are assigned legal obligations and responsibilities based on the level of potential societal risk they pose.

The act puts in place various enforcement mechanisms. These include specific financial penalties for non-compliance, as well as EU- and national-level governance and surveillance bodies.

Meanwhile, the Māori AI Governance Framework outlines Indigenous data sovereignty principles. It highlights the importance of Māori data sovereignty in the face of inadequate AI regulation.

The framework includes four pillars that set out comprehensive actions to support Māori data sovereignty, the health of the land, and community safety.

The EU AI Act and the Māori Framework articulate clear values and translate them into specific protections: one through enforceable risk-based rules, the other through culturally grounded principles.

Meanwhile, Australia’s AI plan claims to reflect “Australian values” but provides neither regulatory teeth nor cultural specificity to uphold them. As legal experts have called for, Australia needs AI accountability structures that don’t rely on individuals successfully prosecuting well-resourced corporations through outdated laws.

The choice is clear. We can either chase an “AI-enabled economy” at any cost, or build a society where community safety, not money, comes first.

Jessica Russ-Smith, Associate Professor of Social Work and Chair, Indigenous Research Ethics Advisory Panel, Australian Catholic University; Immaculate Motsi-Omoijiade, Senior Research Fellow – Responsible AI Lead, AI and Cyber Futures Institute, Charles Sturt University, and Michelle D. Lazarus, Director, Centre of Human Anatomy Education, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
