QUAY INSIGHTS
June 2023
Not quite the end of humanity: The Australian Government commences consultation on the appropriate governance framework for AI
On 1 June 2023, the Australian Minister for Industry and Science, the Hon Ed Husic MP, announced that the Government was commencing a consultation process to determine whether further regulation is required to ensure that the development and use of artificial intelligence (AI) in Australia is “safe and responsible”.
What do we mean by AI?
The discussion paper, “Safe and responsible AI in Australia”, released by the Government on 1 June 2023 (Discussion Paper), defines AI as:
an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming.
This is, of course, a very broad definition, although the consultation has a particular focus on generative AI. The consultation process is intended to consider different applications of AI that are currently in use, such as self-driving cars and generative pre-trained transformers (more commonly known as GPT), as well as automated decision-making (ADM) that relies on AI. Any regulatory framework that is adopted must also take into account possible future uses of AI across a broader range of applications.
What is the Government worried about?
In looking at whether additional regulation is required, the Government is primarily worried about two issues:
- Australia may be left behind in the AI race. The Discussion Paper comments that AI is a critical technology in Australia’s national interest. Despite this, investment in, and take-up of, AI in Australia has been low.
- Harms may arise through the use of AI. For example, GPT may be used to exponentially expand the volume of mis- and dis-information disseminated to Australians. Algorithmic bias, including through ADM processes using AI that systematically advantage or disadvantage particular groups in society, is another form of harm that concerns the Government.
Any new regulation adopted by the Government will need to balance these issues. While Australians require protections, regulation should be proportionate and clear, so that innovation is not stymied by excessive intervention or by regulatory uncertainty. This would seem to be why the Government has adopted the catchphrase “safe and responsible AI”. In other words, while the use and development of AI applications is to be encouraged given, to quote the Minister for Industry and Science, “(t)he upside is massive”,[1] it is at the same time essential that appropriate safeguards or guardrails are in place.
What regulation is likely?
Of course, the development and use of different types of AI applications must comply with existing laws, many of which will apply to common issues arising from AI use. For example, Australia’s Privacy Act 1988 must be complied with when personal information is used in the development of AI applications. The Office of the Australian Information Commissioner took action against Clearview AI, which had scraped images of Australians from the web and used them to develop its AI facial recognition tool. In late 2021, the Australian Information Commissioner issued a determination finding that this conduct breached the Privacy Act and requiring not only that Clearview AI cease collecting images of individuals in Australia but also that it destroy the images it had already collected from Australia.
Action may also be taken under the Australian Consumer Law in certain cases, for example, under the misleading and deceptive conduct prohibitions where algorithms do not operate as advertised. The Australian Competition & Consumer Commission has been successful in enforcement action in this area, including in its case against Trivago, whose algorithm did not, as advertised, always show users the best or cheapest hotel deals.
As well as listing many other existing laws that apply in the context of AI, the Discussion Paper points out that potential law reforms will assist regulation in this space, such as proposed laws giving the Australian Communications and Media Authority greater powers to combat mis- and dis-information.
There are also voluntary guidelines that apply to the governance of AI development, such as, at the federal level, the 2019 AI Ethics Framework, which sets out eight key principles, including transparency, contestability and accountability, and, at the State level, the NSW Government’s 2022 AI Assurance Framework.
Notwithstanding the extensive body of existing laws and voluntary guidelines and frameworks, the Government considers that there are likely to be gaps, primarily in the area of governance, that must be addressed. While the Discussion Paper flags that some form of voluntary guidelines or self- or co-regulation could be considered, it seems likely that compliance will be mandated by law.
The Government appears convinced that Australia cannot take its own path in the regulation of AI and must adopt an approach consistent with those of other jurisdictions, even though there is no international consensus on the most appropriate regulatory model. To date, as the Discussion Paper starkly highlights, different countries have adopted a wide range of approaches.
The model favoured in the Discussion Paper is based, at least in part, on the risk framework in the EU’s proposed AI Act (an approach that also underpins regulatory developments in Canada and the US). It imposes different obligations depending on how risky the proposed application is. The risk tiers on which the Australian Government has asked for feedback are:
- Low: These are AI applications, such as algorithm-based spam filters, that have minor impacts that are limited, reversible or brief. Requirements would be light, such as training and limited internal monitoring.
- Medium: These are applications categorised as having high impacts that are ongoing and difficult to reverse, such as chatbots that direct individuals to essential or emergency services. The requirements would be more onerous, reflecting the risk of the proposed use, and would include self-assessment and “meaningful” points of human involvement.
- High: This category covers very high impacts that are systemic, irreversible or perpetual, such as the use of AI-enabled robots in surgery. It would be subject to the highest level of obligations, including the potential for external audit.
In practice, it may be difficult to implement such a model. For example, “AI-enabled chatbots that direct consumers to service options according to existing processes” is listed as an example of a low risk application. It does not take much imagination to envisage circumstances in which this is in fact very high risk, such as where the “service options” relate to government or health care services and the “existing processes” are applied in a manner that discriminates against one or more cohorts of vulnerable individuals.
Australia is unlikely to adopt an additional category of risk that is included in the EU model. Under the AI Act, uses that present an unacceptable risk, such as real-time biometric identification, would be banned entirely. The Discussion Paper expresses concern that banning some applications, notwithstanding that this is expressly contemplated in the EU’s AI Act, would be out of step with other jurisdictions and could inhibit innovation.
Any new regulation, as well as existing laws, must recognise that AI can deliver significant benefits. In radiology, for example, machine learning has delivered real gains, although a review of the literature by radiologists cautions that AI should complement, rather than replace, clinical expertise. Similarly, AI may assist in addressing occupational health and safety risks arising for Australian retail workers, while at the same time being implemented in a way that embraces privacy principles to benefit all Australians.
A key issue missing from the consultation: copyright
One glaring omission from the Discussion Paper, and the consultation process, is consideration of intellectual property, particularly copyright. The Discussion Paper states that copyright issues will be discussed separately at a “Ministerial Roundtable on Copyright” established by the Attorney-General. That roundtable has held one meeting, at which AI and copyright was noted as a key issue.
Generative AI, which is the focus of the Government’s consultation, is developed through the ingestion of vast quantities of data. Large language models (LLMs) are trained on enormous volumes of text, and multimodal foundation models (MFMs) on equally voluminous quantities of data that extend beyond text to include, for example, images and speech. To date, the authors of the content used by these models have not been paid for that use and, realistically, there is no mechanism for this to occur under Australia’s Copyright Act 1968. Resolving this issue is clearly critical to ensuring that original content, including but not limited to public interest journalism, continues to be produced.
The Discussion Paper notes that copyright will be discussed further at another meeting of the Roundtable in 2023. More proactive steps, not simply discussion, need to be taken in this area.
Australia was a leader in recognising the importance of copyright through the development of the News Media Bargaining Code, which seeks to ensure that large digital platforms compensate media organisations for the use of their journalism. These issues will be explored in future Quay Law Insights.
Timing
The Government is seeking submissions on the Discussion Paper until 26 July 2023. It is too early to say when a response to the consultation may be released. However, the Government will need to move quickly if it wishes to position Australia as a global leader in responsible AI.
[1] The Hon Ed Husic MP, Minister for Industry and Science, media release, 1 June 2023.
Contact
Dave Poddar
Partner
Quay Law Partners
Level 32, 180 George Street,
Sydney NSW 2000
T +61 422 800 415
E [email protected]
www.quaylaw.com
Angela Flannery
Partner
Quay Law Partners
Level 32, 180 George Street,
Sydney NSW 2000
T +61 419 489 093
E [email protected]
www.quaylaw.com