Australia: Digital Platform Regulators forum joint submission on AI

The development and use of artificial intelligence (AI) are growing at an unprecedented rate globally, with mounting pressure on Australian regulators to establish adequate frameworks to govern AI's use in Australia. While AI has the capacity to provide many benefits, the potential risks associated with its use and rapid growth must be considered by regulators.

In September 2023, the Digital Platform Regulators forum1 (DP-REG) published its joint submission (the Submission) in response to a Department of Industry, Science and Resources (DISR) consultation on its discussion paper, 'Supporting Responsible AI in Australia' (the Discussion Paper). Katherine Sainty, Lily O Brien, Kaelah Dowman, and Sarah Macken, from Sainty Law, discuss the contents of both the Submission and the Discussion Paper, along with some of the key benefits and risks of AI.


The Discussion Paper concerns governance mechanisms that may be adopted to ensure AI is developed and used responsibly in Australia. The Submission considers the challenges of advances in AI and emphasizes a regulatory approach centered on collaboration between industry bodies, to harness expertise and devise strategies to strengthen existing frameworks and safeguard individuals, without diminishing the benefits and vast opportunities of AI.

The Discussion Paper

The Discussion Paper considers the potential benefits of AI to the economy and society, and the risks AI may pose given its potential applications and the speed at which it is being developed. This rapid innovation raises concerns about the uncertain implications of AI and the need for appropriate regulations that cater to AI's unique operation and outputs.

The Discussion Paper recognizes existing Australian laws governing AI, examines approaches considered or taken in other jurisdictions, and considers different regulatory approaches that might be applied in Australia to mitigate the potential risks of AI and support safe and responsible AI practices.

The Discussion Paper highlights that while global investment in AI is increasing, there are low adoption rates across Australia, which can partly be explained by lack of public trust and confidence in AI. It suggests that a regulatory approach that allows the benefits of AI to be balanced against appropriate safeguards to mitigate the risks of use will facilitate a thriving and innovative digital environment.

The Submission

The Submission considers the competing benefits and risks of AI and the current regulatory frameworks that govern digital platforms, and endorses an approach to regulating AI that builds on existing frameworks to optimize the safe use of AI technologies in Australia. It highlights the importance of maintaining collaboration between DP-REG members, other arms of government, and stakeholders, to draw on each other's strengths and expertise and to develop subsequent reforms with a broad understanding of AI's impact on the digital economy.

The Submission considers five main areas affected by the development and use of AI technologies: consumer protection, competition, media and the information environment, privacy, and online safety. It examines where existing legal responses to harms in these areas are sufficient, and where there is potential for enhanced protections to better account for the role of AI.

Benefits of AI in digital platforms

The Submission acknowledges AI is a valuable resource that can facilitate information sharing, promote innovation, and stimulate competition by providing a greater variety of goods and services at lower prices and higher quality, improving and creating value for consumers. It can assist the development of new products and services, enhance product safety features, and detect potential safety issues. This may also encourage productivity growth.

The Submission outlines how AI may be used to efficiently identify and remove harmful online material and assist in identifying and disrupting online scams. It suggests that generative AI can be used in media for tasks such as generating ideas for articles, summarizing large data sets, highlighting errors, and reducing time spent on administrative tasks.

Costs of AI

AI may be exploited to further individual interests that are not aligned with public interests. The Submission considers current and emerging risks that will grow as AI becomes more sophisticated and widely used. It contends that AI will likely exacerbate existing risks for digital platforms that the regulators are already working to address and that these risks may lead to poor consumer experiences and erode consumer trust in the digital economy.

Key risks include:

  • using AI for increasingly sophisticated and disseminated online scams and fake reviews, creating harmful applications, and raising new product safety risks;
  • privacy implications for consumer data that can be exploited using AI; and
  • using AI to spread misinformation that appears to the public as reliable and trustworthy information, altering perceptions of media credibility and diminishing consumer trust.

AI's capacity to create and spread misinformation poses risks to Australians' online safety. The creation of highly realistic synthetic imagery, deepfake videos, and large amounts of seemingly authentic content can be used to bully, abuse, or manipulate individuals. Content may be created about individuals using minimal information and without the individual's consent.

The potential for AI to improve competition is also offset by the risk of businesses restricting access to valuable data and IP, which can distort competition and build barriers to entry. Algorithms may be designed to distort the market and leverage profits at the expense of consumers. AI may also assist firms in undetected collusion.

Application of the existing regulatory frameworks to AI

The Submission identifies that existing laws can apply to, and be used to tackle, some AI issues. However, the increasing sophistication of AI means any legal response must be augmented to better adapt to the complexities of AI and the risks it enables. Key regulations covered in the Submission are outlined below.

Competition and consumer law

The Australian Competition and Consumer Commission (ACCC) plays a key role in consumer protection and enforcing the Australian Consumer Law (ACL), with a focus on protection from misleading and deceptive conduct, unconscionable conduct, unfair contract terms, and unsafe products, and on promoting fair trading. The Submission notes that while these laws may extend to some AI risks, such as the prohibition on misleading and deceptive conduct, AI may make such conduct harder to prevent, detect, or pursue, reducing the practical effectiveness of existing laws. For example, current laws do not prohibit some forms of algorithmic collusion that AI enables, and there are limited laws dealing with scams on digital platforms.

The ACCC recommends that the ACL more clearly set out how its provisions apply to digital products. The ACCC's September 2022 Digital Platform Services Inquiry interim report concludes that existing laws inadequately address online harms and recommends an economy-wide prohibition on unfair trading practices.

The Submission notes the voluntary Australian Code of Practice on Disinformation and Misinformation (the Code), which regulates practices in digital media and aims to give the public transparency about information and its sources. The Code helps to ensure consumers can be confident that publicly disseminated information is accurate and trustworthy. While this provides general regulation of the use of AI in media, it has limited enforceability, so the Submission suggests that the Code be expanded to specifically address AI and prevent its misuse.

Privacy law

The Privacy Act 1988 (Cth) (the Privacy Act) and the Australian Privacy Principles (APPs) apply to personal information used by organizations, including in training, testing, or using AI. Upholding privacy protections is essential to consumer protection and to maintaining public trust and support in the use of AI. The Submission highlights that the principle-based and technology-neutral approach adopted by the Privacy Act and the APPs is advantageous for regulating AI, as it provides a 'one size fits all' approach that can be adapted depending on an organization's operations, risk profile, and its consumers' needs.

eSafety

The Online Safety Act 2021 (Cth) is enforced by the eSafety Commissioner to mitigate risks to online safety, with the aim of enhancing the transparency and accountability of online bodies and protecting consumers on digital platforms. These regulations extend to harms caused by AI. In June 2023, the eSafety Commissioner registered five new industry codes, set to take effect on December 16, 2023. These codes require online service providers to take reasonable steps to reduce the availability of illegal or seriously harmful content on their sites.

Next steps

The Submission outlines how existing regulatory frameworks may be used to regulate AI and identifies inadequacies and areas for improvement. DP-REG supports approaching AI regulation by enhancing existing frameworks through collaboration and by building an understanding of AI's role across industries, with the aim of mitigating AI's risks without hampering its benefits to the digital economy.

With an ever-changing digital landscape, it is important for businesses and consumers to be aware of the opportunities and risks of using AI so that they can prepare and protect themselves.

Katherine Sainty Director
[email protected]
Lily O Brien Senior Paralegal
[email protected]
Kaelah Dowman Graduate Lawyer
[email protected]
Sarah Macken Junior Paralegal
[email protected]
Sainty Law, Sydney


1. The DP-REG is an initiative of Australia's independent regulators: the Australian Competition and Consumer Commission, the Australian Communications and Media Authority, the eSafety Commissioner, and the Office of the Australian Information Commissioner. These regulators share information and collaborate on prominent regulatory issues affecting the digital landscape, with the goal of establishing and maintaining a safe, trusted, fair, innovative, and competitive space for Australia's digital economy and workplace. One of DP-REG's strategic priorities for 2023-24 is understanding and assessing the benefits, risks, and harms of generative AI.