X/Twitter boss Elon Musk's defiance of the eSafety Commissioner's takedown orders lays bare the hostility large platforms have always held toward meaningful public accountability.
Rather than exercising any semblance of corporate social responsibility, the latest events in this saga demonstrate a level of contempt that has, until now, been hiding in plain sight.
Violent, objectionable footage will continue to spread online, leaving regulators to play a never-ending game of whack-a-mole.
As the eSafety Commissioner and X/Twitter face off in a legal process, the limits of the Online Safety Act are being tested in real time.
There is a showdown here that is bigger than the eSafety Commissioner and Musk.
![Large platforms are exploiting the gaps in Australia's patchy framework for tech accountability. Picture: Shutterstock](/images/transform/v1/crop/frm/kDqE8LvSwvU8fyZkrZC97F/a4c3c9e5-119f-4ce2-896f-7df0733637e5.jpg/r0_0_7475_4934_w1200_h678_fmax.jpg)
It is a confrontation many years in the making between governments and big tech about power and responsibility.
Reputational risk, once held out as incentive enough for platforms to ensure user safety, has become an early casualty in this week's belligerent battle.
Indeed, Musk doubled down on chasing a bad reputation, angling for more notoriety and baiting online engagement in the process.
Appallingly, he has directed his thousands-strong goon squad to attack the Commissioner herself in disgusting and sexist terms.
Large platforms are exploiting the gaps in Australia's patchy framework for tech accountability and goading the government to make the next move.
Meta's shameless withdrawal from the News Media Bargaining Code, and its manoeuvres in the courts to evade liability for negligent and harmful advertising systems, are further examples of this alarming trend.
These platforms are heaving with systemic risks that give rise to harms extending beyond surface-level content.
Content recommender systems (or algorithms) have been routinely found to actively promote content that is harmful to mental and physical health.
A recent Reset.Tech study found that platform mitigations vary widely in effectiveness.
TikTok appears to block algorithmic distribution of eating disorder content, indicating that such mitigation is operationally feasible.
However, Instagram's algorithm was only partially effective at preventing this type of content from spreading into young people's feeds.
Over on X, the algorithm takes users from pro-eating disorder content to pro-suicide content in fewer than 40 posts.
It suits Musk and his peers to keep fighting reactive measures like content takedowns.
Proactive measures targeting their underlying systems would require a substantial redistribution of company resources.
It would mean that, for example, regulators could probe the supercharged advertising underbelly that milks Australians of millions of dollars through online scams.
A systemic approach, such as an overarching "duty of care", would be a strong start.
We must move away from focusing on individual pieces of content and shore up the foundations of the Online Safety Act so regulators have options beyond the time-consuming notice-and-takedown model and the performance of information requests.
In its place could be a positive, legally enforceable obligation compelling platforms to mitigate harms before they occur.
- Alice Dawkins is executive director of Reset.Tech