QUAY INSIGHTS

July 2023

The Australian Government’s proposals to combat mis- and disinformation:
Are they enough to address the world of generative AI?

On 25 June 2023, the Minister for Communications, Michelle Rowland, commenced a consultation process on the grant of new regulatory powers to the Australian Communications and Media Authority to police online mis- and disinformation. The question arises whether, in the new world of generative AI, those powers will be enough to limit the potential harm that this technology may cause.

What is online mis- and disinformation?

Before discussing the regulation of online mis- and disinformation, it is important to understand what these terms mean.

Online misinformation is, in essence, any false information spread online – whether purported cures for COVID-19 that are not backed by any medical evidence or doctored photos of celebrities. The intent behind sharing this content need not be nefarious: it includes the dissemination of false content even where the person sharing it erroneously believes it to be true.

Disinformation, on the other hand, involves the sharing of false information with malicious intent. In other words, the content is provided with the intention of causing harm. That harm may be to induce unsuspecting individuals to part with money in scams, to cause fear or confusion or to increase prejudice, amongst other purposes.

What Australian laws regulate mis- and disinformation?

Even though the Government is now considering additional laws, that does not mean that the spread of online mis- and disinformation is currently unregulated.

For example, false, misleading and deceptive conduct is regulated under the Australian Consumer Law (ACL), which applies equally in the real world and online. In this context, the Australian Competition & Consumer Commission (ACCC) has commenced proceedings against Meta alleging that it engaged in false, misleading or deceptive conduct through the publication of scam cryptocurrency ads, a classic type of disinformation.

Another example of existing regulation is the ACCC’s new National Anti-Scam Centre, which is to be phased in from 1 July 2023, although its remit will not be limited to combating online disinformation.

By comparison, there is less regulation of online disinformation disseminated for the purpose of sowing societal disharmony or fear, although in some cases that content may be captured by laws such as Australia’s anti-discrimination legislation.

Australia’s voluntary code

Australian regulators have expressed concern about the risks posed by online mis- and disinformation since at least the ACCC’s Digital Platforms Inquiry, the final report of which was released in 2019. In that report, the ACCC raised concerns that the widespread dissemination of this content has the potential to affect the outcome of elections and to undermine the trust consumers place in online news sources more generally.

To address those risks, the ACCC recommended that larger digital platforms be required to implement an industry code allowing complaints to be made about serious forms of disinformation and enabling the most egregious content to be taken off the relevant platform.

The then Government broadly supported that recommendation and, ultimately, in February 2021, the industry group Digital Industry Group Inc (DIGI) published the “Australian Code of Practice on Disinformation and Misinformation”.1 That voluntary code, which has only eight signatories, is limited in scope and does not cover all of the matters envisaged by the ACCC. For example:

  • the type of “harm” that mis- or disinformation must cause before the code applies is set at a very high threshold. The relevant content must be “verifiably” false (a concept that is not defined) and must pose a credible and serious threat to democratic political and policymaking processes or “public goods”;
  • while complaints may be made about such content, no signatory is required to take any action in response to a complaint about mis- or disinformation; and
  • in the case of advertising on a platform, over which each signatory has direct control, the only obligation is to use “commercially reasonable efforts” to “deter” advertisers from repeatedly placing ads that constitute mis- or disinformation.

In short, the voluntary code provides little protection for Australians.

The Australian Communications and Media Authority (ACMA) released a review of the code in 2021 which, while commending the platform signatories for taking some action to combat the spread of mis- and disinformation, noted that Australians remained very concerned about the spread of this type of online content and pointed to numerous flaws in the voluntary code.

The scope of the proposed new law

The proposed new law, the Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill:2

  • provides the ACMA with powers to obtain information from digital platforms in relation to mis- and disinformation;
  • allows the ACMA to require the development of an industry mis- and disinformation code, which the ACMA would be able to register and enforce; and
  • empowers the ACMA to implement regulation through a standard or similar instrument if an industry code is found to be inadequate.

The ACMA would not have the power to require content to be removed from digital platforms.

As can be seen, the Bill does not in fact give the ACMA significant powers to police this content. Before the ACMA could require an industry code to be implemented, it would need to do further work to demonstrate that the current voluntary code has been ineffective – a process that might itself take a year or more.

As with the current voluntary code, the definitions used in the draft Bill are problematic. For example, for content to be considered misinformation, its provision must be reasonably likely to cause or contribute to “serious harm”. Disinformation must meet that threshold and must also be disseminated with the intent to deceive. Leaving aside the question of how platforms could determine the intention underlying the dissemination of online content, the definition of serious harm raises significant questions.

Serious harm is harm that affects a significant portion of the Australian population, economy or environment, or that undermines the integrity of an Australian democratic process. This is a very high threshold, meaning that not all problematic content will be captured. For example, content that promotes discrimination based on race, sexuality or other characteristics will only be caught if it promotes “hatred”. To take another example, would disinformation spread about bushfires (as is alleged in connection with the terrible 2019 bushfires that affected so many Australians) be sufficient to constitute “harm to the health of Australians”, and therefore be regulated? Or would such disinformation, where it directly affected the safety of only a small number of Australians, not be considered sufficiently serious to be regulated under the measures proposed in the Bill? While freedom of speech is of course a relevant consideration, a lower threshold for the types of content to be regulated would be more appropriate.

Another concern with the Bill is the level of the proposed civil penalties, which it is anticipated the ACMA would seek to impose only in cases of systemic, ongoing non-compliance. For corporations, non-compliance with any code that the ACMA ultimately registers would attract a maximum civil penalty of the greater of A$2.75 million or 2% of global turnover, while breach of a standard or other regulation made by the ACMA would attract a maximum civil penalty of the greater of A$6.88 million or 5% of global turnover. These penalties are less than the civil penalties the Australian Information Commissioner may seek for breach of the Privacy Act 1988 and significantly less than the maximum civil penalties for breach of the Australian Consumer Law. This sends a signal that the Australian Government does not truly consider the spread of mis- and disinformation to be a serious public policy concern.

The proposals need to be updated to address the new world of generative AI

ChatGPT and other generative artificial intelligence (AI) chatbots were not available to Australians when the ACCC undertook its Digital Platforms Inquiry, or when the former Australian Government first endorsed the ACCC’s recommendation for a voluntary code to regulate online mis- and disinformation. ChatGPT was also not generally commercially available at the time of the election of the current Government in 2022, which promised during its election campaign to take action in relation to this harmful content.3

ChatGPT was first launched in November 2022. Even though generative AI is a new technology, the concerns arising from its use are clear: in its different forms, it has the potential to exponentially increase the volume and reach of online mis- and disinformation. It is appropriate that the Australian Government take action to ensure that the proposed new legislation also regulates the use of generative AI to spread mis- and disinformation, which is currently outside the scope of the Bill. For example, unless separately designated, a generative AI chatbot operated as a standalone platform not linked to, say, a search service would not fall within the digital platform services regulated under the Bill. Unless the Bill is appropriately expanded, the ACMA will have little ability to ensure that harmful content disseminated through generative AI is appropriately regulated.

[1] The Code is available here.

[2] The consultation draft of the Bill is available here.

[3] See for example this media release from the Minister for Communications.

Dave Poddar

Partner

Quay Law Partners
Level 32, 180 George Street,
Sydney NSW 2000
T +61 422 800 415
E [email protected]
www.quaylaw.com

Angela Flannery

Partner

Quay Law Partners
Level 32, 180 George Street,
Sydney NSW 2000
T +61 419 489 093
E [email protected]
www.quaylaw.com