Ambiguity in new live streaming laws
Following the Christchurch terror attack, the Australian Parliament has passed new laws to crack down on the live streaming of violent material on social media platforms.
The laws come in the wake of a barrage of criticism and outrage after the shootings were live-streamed on Facebook and quickly spread across the internet, with some social media platforms allowing footage to remain online for hours.
The laws add two new criminal offences to the Commonwealth Criminal Code, making social media and hosting companies, as well as internet service providers (ISPs), liable in Australia for the content on their sites and servers.
The legislation was rushed through with no consultation and very little discussion – the bill was introduced on April 3, passed both houses of parliament on April 4 and was given Royal Assent on April 5.
This has been roundly criticised by industry and legal groups.
In a media statement, the Australian Industry Group said: “As we saw with the Encryption Law that was rushed through Parliament in December last year, industry and community concerns were not addressed and despite promises of subsequent amendment, remain unresolved to this day.
“Creation of new offences, regulation of media and extraterritorial laws raise legitimate questions that cannot be answered in a day or two.”
This was echoed by the Law Council of Australia, which said: “Proposed amendments to criminal legislation to deal with the live streaming of violent material on social media could have serious unintended consequences and should not be rushed through the parliament.
“…while steps should be taken to ensure social media is not weaponised to promote hatred and violence, proper consultation must occur to ensure fair and effective legislation.”
The laws specifically make it an offence for any content or hosting service, in or outside Australia, to fail to “expeditiously” remove any “abhorrent violent material” depicting conduct that has occurred or is occurring in Australia.
The penalties outlined in the legislation are severe.
An individual found to have breached the laws faces a fine of up to 800 penalty units, currently equal to $128,952.
Corporations found to be in breach face fines of up to 10% of their annual turnover.
Based on its 2018 turnover, Facebook could face a fine of up to $2.4b for failing to remove content ‘expeditiously’.
In a statement, Law Council President Arthur Moses SC said this is a flawed penalty: “Imposing penalties on companies based on their annual turnover rather than by reference to a maximum set of penalties is problematic and could lead to difficulties with sentencing. Companies will be punished by reference to their size rather than the seriousness of their breach.”
Another problem with the legislation is how companies are found to be in breach of the law.
The eSafety Commissioner now has the power to issue a notice to a company identifying pieces of material in breach of the law, and to demand their removal.
While this allows the government to act decisively to see content removed, the onus put on companies in the legislation is at odds with what the Attorney-General had initially outlined.
In a press release on April 4, Attorney-General Christian Porter said the notice issued would signal the company was held to be legally aware of the content.
“As soon as they receive a notice, they will be deemed to be aware of the material, meaning the clock starts ticking for the platform to remove the material or face extremely serious criminal penalties.”
The legislation reads that if a notice has been issued by the Commissioner:
then, in that prosecution, it must be presumed that the person was reckless as to whether the content service could be used to access the specified material at the time the notice was issued, unless the person adduces or points to evidence that suggests a reasonable possibility that the person was not reckless as to whether the content service could be used to access the specified material at the time the notice was issued.
The Attorney-General has said the notice represents the start of the clock, with an individual or company then needing to remove the content as quickly as possible.
Under the legislation as passed, however, the notice effectively signals that the clock has already run out: the company or individual is presumed to have acted recklessly from the moment the notice was issued.
The bill’s explanatory memorandum then reads: “In most circumstances, if the content service provider were to ensure the expeditious removal of the material after receiving the notice, a prosecution would be unlikely.”
The Attorney-General says the notice means one thing, the law says it means another, and the explanatory memorandum suggests the difference is “unlikely” to matter anyway.
This contradiction suggests the legislation has serious flaws stemming from the rushed nature of its introduction and assent into law.
The legislation also requires ISPs to notify the Australian Federal Police if they know their service can be used to access materials in breach of the law.
To avoid crippling financial penalties, ISPs could now be required to monitor all of their user data, something likely to cause concern among privacy and civil rights groups.
Writing in The Conversation, Andre Oboler, Senior Law Lecturer at La Trobe University, argued another flaw is the lack of a definition of what “expeditious removal” actually means.
While the law puts pressure on companies to improve response times and act pre-emptively, it lacks a way to measure those response times.
“If we can’t measure it, we can’t push for it to be continually improved,” wrote Oboler.
“Rapid removal should be required after a notice from the eSafety Commissioner, perhaps removal within an hour. Fast removal, for example within 24 hours, should be required when reports come from the public.
“The exact time lines that are possible should be the subject of consultation with both industry and civil society. They need to be achievable, not merely aspirational.”