The Australian government’s battle with Elon Musk and X over violent content appears admirable, but will it change anything for those vulnerable to its harm?
By Terry Flew, The University of Sydney
May 11, 2024
Over three days in April, Australia’s delicate social media ecosystem was blown apart.
The thin line between online regulation and the circulation of disinformation unravelled. In its place now sits a new, polarised debate between free speech absolutism and the safeguarding of users from violent content.
Two stabbing attacks in Sydney, one of which was live streamed, sharpened the focus on the significant role social media platforms play when public violence occurs.
After the attack that killed six people and wounded many others at the Bondi Junction Westfield shopping centre on April 13, posts on social media wrongly identified the assailant as a young Jewish university student. The allegation was picked up by one of Australia’s leading commercial TV networks before it was corrected, and the false rumour fuelled racist and anti-Semitic online commentary about the man.
The stabbing of Bishop Mar Mari Emmanuel two days later at Christ the Good Shepherd Church, an Assyrian church in south-western Sydney, was live streamed to the world, sparking demands that the raw, unedited footage be removed from social media platforms altogether.
Australia’s eSafety Commissioner ordered social media platform X to remove graphic videos of the church stabbing from its site.
While X complied by blocking the content for Australian users, it rejected the call to apply a global takedown. In doing so, it risks fines of A$782,500 per day for failing to comply with the directive by the eSafety Commissioner, Julie Inman Grant, to take down material that would be refused classification under the Classification Act.
X owner Elon Musk mocked the Commissioner as “the Australian Censorship Commissar” and posted on his own profile an image associating X with a brightly lit castle proclaiming free speech and truth, and other platforms with a castle beset by dark clouds and lightning, signifying censorship and propaganda.
Musk’s self-proclaimed stance as a “free speech warrior”, while inconsistent with his conduct towards his own critics and those of X, is part of his global branding of X as “anti-woke”, which has seen him described as “the second most important person in MAGA”.
An unusual degree of political bipartisanship emerged in the responses of Australian political leaders.
Prime Minister Anthony Albanese called Musk an “arrogant billionaire, who thinks he’s above the law”, while Assistant Treasurer Stephen Jones described X as a “playground of criminals and cranks”.
Opposition Senate leader Simon Birmingham observed that “They (social media companies) absolutely should be able to quickly and effectively remove content that is damaging and devastating to the fabric of society,” while Greens communications spokesperson Sarah Hanson-Young described Musk as a “cowboy … making money and profiting off outrage and hatred.”
The conditions are now in place for a protracted battle between X and the Australian Government over whether the company can be forced to comply with a directive of the eSafety Commissioner. Three issues are likely to play out.
First, there is the question of whether Australian Internet laws can be extended internationally. X has argued that the Australian eSafety Commissioner cannot demand a global takedown of content, as such decisions can only be made through international law. In response, the Commissioner argues that the use of VPNs and other devices to evade geo-blocking means that violent content that is clearly illegal under Australian law can still be accessed by Australians.
Second, the case presents a significant challenge to a model of digital platform regulation that combines heavy fines for breaches of guidelines with the expectation that industry self-regulation and corporate social responsibility will mean those fines rarely have to be enforced in practice.
Such an approach was pioneered in the European Union as a way of “ratcheting up” platform conduct without seeking to directly regulate online content by putting in place penalties that were sufficiently severe to constitute a credible threat to the companies’ financial bottom line.
As eSafety Commissioner since 2017, Julie Inman Grant has referred to this approach in the Australian context as Safety by Design: working with tech companies to incorporate higher regulatory standards into their everyday business practice.
X’s experience in Australia draws attention to the limits of this “soft law”-based approach. X withdrew from the Australian Code of Practice on Disinformation and Misinformation (ACPDM), administered by the Digital Industry Group Inc. (DIGI), after an adverse finding against it in a case brought by the advocacy group Reset.Tech Australia.
As a result, X effectively sits outside the self-regulatory framework to which, as Twitter, it had originally been a signatory. The company is clearly prepared to contest fines in the courts rather than choose the path of compliance assumed under the co-regulatory, safety-by-design model.
This raises the question of whether the Australian Federal Government can, or should, set in place its own laws to govern X’s conduct on issues such as misinformation and content regulation, given that the self-regulatory model has proven unable to enforce X’s compliance with industry guidelines. The Australian Government is reintroducing its Combatting Misinformation and Disinformation Bill to Parliament after a consultation process that elicited more than 2,400 submissions.
The relationship between the proposed misinformation laws and existing powers under the Online Safety Act will be the subject of considerable debate, and recent developments have given new impetus to calls for governments to set rules for the conduct of global social media platforms.
Finally, the case of X and the global reach of Australian Internet laws points to a broader set of issues around national governments and global digital platforms. What has been termed the “regulatory turn” in Internet governance has seen governments increasingly seek to apply national laws to digital platform companies in areas such as competition policy, content regulation, dealings with content providers such as news publishers, and ethical issues related to the uses of artificial intelligence (AI).
The impetus for such measures has often been the sense that the global tech giants simply disregard requests to change and use their market power to steamroll governments.
Until now, this has only disempowered citizens seeking some form of agency against these tech giants. A change of posture from governments could help shift that narrative.
Terry Flew is a Professor of Digital Communication and Culture at the University of Sydney’s Faculty of Arts and Social Sciences. Professor Flew is leading a team of researchers in developing the International Digital Policy Observatory, an online database to track policies and regulations dealing with misinformation, AI regulation, online harms, cybersecurity and digital identity.
The research was undertaken with funding from the Australian Research Council through its Linkage Infrastructure, Equipment and Facilities (LIEF) program.
Originally published under Creative Commons by 360info™.