May 2023

Antitrust and Artificial Intelligence in Australia: Using third party data

Antitrust regulators have for many years been cognisant of possible antitrust issues arising from the use of algorithms, mainly around possible collusive behaviour and the ways algorithms may narrow price competition, but also in relation to concerns regarding data collection and use. In particular, the vast scale of data collected by the largest players to feed their algorithms creates barriers to entry and expansion that significantly dampen competition.

The use of artificial intelligence (AI), including the large language models that power OpenAI’s chatbot ChatGPT and similar chatbot products such as Google’s Bard, raises important competition issues that will likely result in regulatory intervention by governments around the world. While AI may be transformative, including in areas that have not yet been explored in great detail such as medicine, competition concerns in the areas currently in the spotlight may result in targeted regulatory intervention. For example, the extensive use of media content sourced from the internet to “power” Google’s Bard may lead the Australian Government to intervene in a similar manner to the Media Bargaining Code, which was introduced in 2021 with the aim of protecting traditional media companies that generate high-quality news content, as well as protecting the competitive process and consumers.

ChatGPT, Bard and similar products hoover up data created by others, without attribution or recompense. “Answers” are provided to users in a context which makes fact checking all but impossible because original sources are not fully specified and links to those sources are not provided. This differs from the fact scenario the ACCC and the Australian Government faced in the context of the Media Bargaining Code. In that context, it was clear that Google and Facebook were using snippets of the news content of media companies – there, the origins of that content were easily discoverable.

What does remain the same in both cases is the underlying issue that the content of media companies is being used in a way that is both unattributed and uncompensated. This will likely lead to the same market failure that resulted in Rod Sims, the then ACCC Chair, urging the Government to implement the Media Bargaining Code regulation. That market failure is the potential that media companies will produce less high-quality news content as they are not compensated for its creation.

Of course, the widespread use of products such as ChatGPT and Bard also brings a broader concern around the potential for an explosion in the dissemination of misinformation and disinformation online.

This may occur in a relatively “innocent” manner. As the New Zealand Law Society recently noted, ChatGPT has a remarkable potential to “hallucinate”. This has resulted in lawyers who asked ChatGPT to do their work for them then asking the Law Society to locate fictitious cases and precedents that ChatGPT had invented, as described here. A salutary warning to all of us that nothing beats the hard slog of doing your own work!

This would be another area where regulatory guard rails may well be needed, given that the motives behind the spread of misinformation and disinformation may be less innocent. However, implementing such guard rails would break new regulatory ground, and exactly what they should look like is not yet clear.

Australia and the ACCC were at the forefront in studying the global digital platforms, beginning with the Digital Platforms Inquiry completed in 2019 and continuing through subsequent inquiries, including the inquiry into ad tech and the ongoing Digital Platform Services Inquiry, which will not be completed until early 2025. Since 2017, the ACCC has shone a light on how Google and Facebook have monetised so-called “free” services for which consumers pay with their data, and how Google, Facebook and others have sold that data, including highly sensitive location data, through their advertising products. The ACCC was also successful in a misleading and deceptive conduct case against Google that clearly explained how Google misled consumers about the vast quantities of location data Google collects. That ground-breaking case subsequently led to similar cases in the United States.

Notwithstanding its leading role, the ACCC does not at present appear to have focused on the competition (and consumer protection) issues that different uses of AI may create, including those highlighted in this article. However, media companies are already raising the alarm: News Corp has raised concerns about the use of media content in large language models. Other regulators are also raising red flags. The Italian data protection authority, for example, raised concerns about privacy, and in particular the storage and use of personal data, which led to ChatGPT being temporarily banned in Italy.

Banning chatbots, or other uses of AI, is clearly not the answer. In this context, it is interesting to note that on 4 May 2023 the UK Competition and Markets Authority (CMA) commenced a review of competition and consumer protection considerations relating to certain AI issues, focussed on foundation models, which include large language models. In announcing the review, the CMA stated:

To ensure that innovation in AI continues in a way that benefits consumers, businesses and the UK economy, the government has asked regulators, including the Competition and Markets Authority (CMA), to think about how the innovative development and deployment of AI can be supported against five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Australia would be well served by the ACCC looking at similar issues here.

A version of this Insight was published in The Australian newspaper on 22 May 2023


Dave Poddar


Quay Law Partners
Level 32, 180 George Street,
Sydney NSW 2000
T +61 422 800 415
E [email protected]