
Australia: Privacy concerns of large language AI

The use of artificial intelligence (AI) in Australia's tech landscape is growing rapidly, presenting unique and unprecedented challenges to businesses and consumers. Since late 2022, the use of a form of AI called large language models (LLMs) has grown exponentially. The generative AI market is expected to grow to a value of $20.6 billion by 2032.

Katherine Sainty and Ottilia Thomson, from Sainty Law, examine LLMs, the balance between their potential benefits for businesses and the accompanying privacy concerns, and potential AI-specific legislative reform.


What are LLMs?

LLMs are a branch of AI that employ deep learning algorithms to recognize, summarize, translate, and generate content based on extensive datasets. They are trained with large volumes of data, allowing them to derive relationships between different words. Once trained, LLMs generate quick responses to a range of prompts or questions, creating greater efficiency and productivity in the commercial landscape.

How are businesses using LLM AI?

LLMs are being incorporated into daily business practices and processes, mainly in the IT sector; however, other sectors are following closely behind. For example, IT developers are using LLMs to write software and program robots, while scientists have been training LLMs to understand proteins, molecules, DNA, and RNA to learn about living organisms and their functions.

Law firms are using LLMs to answer simple questions, conduct research, and draft documents and emails. LLMs help lawyers resolve simple legal queries and examine legal risks quickly, allowing them to dedicate their attention to more complex legal matters that are more valuable to their clients.

Many people have questioned the cost to privacy of this newfound efficiency and innovation. A non-profit organization published an open letter, signed by over 1,000 AI experts, warning of the 'profound risks to society and humanity' that AI poses. The letter calls for AI training and development to pause for at least six months, arguing that the technology's rapid growth is outpacing society's ability to keep up.

What are the privacy concerns of using LLMs?

There are myriad security, legal, and ethical issues arising from the rapid advance of LLMs, including privacy concerns. The amount of personal data used to train and prompt LLMs, coupled with their ability to aid cybercrime and their lack of transparency, threatens consumer privacy.

Collection of personal data

Users readily input personal or sensitive information into LLMs when they provide prompts and questions. Most LLMs use this information not only to answer the question but also to further train the model for future responses. If the recorded data contains personal or confidential information about individuals or businesses, that information could be at risk of publication in the event of a data breach.

Businesses could unknowingly be putting their sensitive business information and their clients' private information at risk when using LLMs. This could invite legal penalties and put client relationships and reputation at risk, particularly in the event of a data breach or cybercrime incident.

Cybercrime

LLMs can assist cybercriminals to create malware and phishing emails and aid other malicious activities. These behaviors have been coined 'AI-powered scams.' These scams use AI to impersonate individuals, manipulate data, and deceive businesses into providing personal information or transferring funds. For example, scammers use natural language processing (NLP) to generate human-like text, allowing them to create convincing phishing emails, social media posts, or texts. The NLP-generated messages may refer to an individual's recent transaction or incorporate their personal information to create a sense of urgency, encouraging the victim to click on a malicious link or give up personal information.

A further example of AI-powered scams using LLMs is the creation of deepfakes, including fake images, videos, or audio recordings that can be indistinguishable from reality. Deepfakes can be used to obtain personal data for cybercrime or to spread fake material in order to gain confidential information.

Increased instances of cybercrime will likely lead to increased breaches of privacy for consumers and businesses who fall victim to such scams.

Transparency

Whether users remain anonymous when using LLMs, and how transparent providers are about this, is also a key concern. It is unclear whether a user's questions and prompts can be traced back to them, whether by the organization or the LLM itself, and if so, whether this amounts to the sharing of personal information.

Some LLMs use a unique identifier with a login trail to identify the user and link them to the prompts or questions they have provided. This raises questions about how potentially sensitive data may be retained. It is one problem to have LLMs collecting user data to inform their deep learning; the problem is exacerbated when the amount and type of information being collected from user accounts remains unknown to users. This not only places individuals and businesses at risk of cyberattacks but also erodes trust, as consumers may feel they have been deceived into the loss or misuse of their personal information.

While this technology is already widely adopted across different sectors, regulatory frameworks surrounding its use have not been established in law. Until such measures are implemented, this new technology remains unknown and dangerous territory for users and their privacy.

How does Australian law address these concerns?

AI-specific legislation does not exist in Australia. Instead, the use of AI is governed by existing legislation. For example, the Privacy Act 1988 (Cth) (the Act) applies to any collection of personal information, and the Copyright Act 1968 (Cth) applies to the use of copyright-protected works to train LLMs and to the use of LLM-generated content. Moreover, while the Office of the Australian Information Commissioner has penalized AI companies in the past for breaching the Act, this is ultimately a reactive measure, usually occurring well after privacy has been infringed.

The Australian Government has indicated a willingness to amend the current legislative landscape. For example, in March 2022 the Department of the Prime Minister and Cabinet released the Positioning Australia as a leader in digital economy regulation (automated decision making and AI regulation) issues paper, taking submissions on how the Government could direct its resources to facilitate and shape regulation of the responsible use of AI. In this Paper, the Department referred to the Australian Human Rights Commission's Human Rights and Technology Final Report (2021), which recommended establishing a dedicated AI safety commissioner.

The Paper also made reference to the recent Review of the Privacy Act Report. This Report recommended greater regulation of automated decision-making, requiring entities to state in their privacy policies whether personal information will be used in automated decisions that have a legal effect on individuals, and to provide clarity around how such decisions are made.

Another important development discussed in the Paper was Australia's AI Action Plan (2021), which set out a strategy to establish Australia as a global leader in adopting trusted, secure, and responsible AI.

While the Federal Government has not released an official response to the submissions, action is needed to better regulate this area of the law, and movement is underway.

Potential legislative reform

To better combat the growing power of LLMs, Australia could implement significant reform to align with the action being taken on a global scale by other jurisdictions, such as the EU and China.

The EU is making significant headway, being set to finalize the first legal framework on AI, known as the EU AI Act, in 2023. This legislation provides an intentionally broad definition of the term 'AI,' using technology-neutral language to ensure it remains futureproof. AI systems that present an 'unacceptable risk' are expressly prohibited under the legislation, and selling or using such systems can lead to fines of up to €30 million. AI that presents an 'unacceptable risk' is defined to include any AI which may cause psychological harm, exploit the vulnerabilities of a specific group due to age or disability, or materially distort a person's behavior.

China has issued a Code of Ethics for New-generation Artificial Intelligence (available only in Chinese). This Code aims to integrate ethics and morals into the life cycle and development of AI. The Code requires that AI development complies with fundamental ethical standards and:

  • improves human well-being;
  • promotes fairness and justice;
  • protects privacy and security;
  • ensures controllability and credibility;
  • strengthens responsibility; and
  • improves ethical literacy.

Best practices for Australian businesses – how should businesses use LLMs?

While legislative reform and frameworks remain on the horizon for Australia, there are measures that businesses should already be adopting to deal with LLMs.

Businesses should implement data redaction procedures when using LLMs. By redacting sensitive data, or converting it into an unintelligible form before it is submitted, businesses can ensure that sensitive data is not collected by an LLM and then used to inform responses to other users' prompts.
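As an illustration only, the sketch below shows one way a business might redact obvious identifiers from a prompt before it leaves its systems; the patterns and the `redact_prompt` helper are hypothetical and would need to be extended considerably for real-world data.

```python
import re

# Hypothetical patterns for common identifiers; a production system would need
# a far broader set (names, addresses, account numbers, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "TFN":   re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # Australian tax file number format
}

def redact_prompt(text: str) -> str:
    """Replace recognizable identifiers with placeholder tokens before the
    prompt is sent to an external LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Please draft a letter to Jane at jane.doe@example.com, phone +61 412 345 678."
print(redact_prompt(prompt))
# Please draft a letter to Jane at [EMAIL REDACTED], phone [PHONE REDACTED].
```

Note that simple pattern matching of this kind catches structured identifiers such as emails and phone numbers but not free-text details like names, which is why redaction is usually combined with other measures such as synthetic PII replacement.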

If feeding LLMs personally identifiable information (PII), businesses should use synthetic PII replacements instead. Synthetic PII replaces personal information with contextually correct fake data, protecting the underlying data without compromising the commercial use or output utility of that data.
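For illustration, the snippet below sketches how synthetic PII substitution might look in practice, using the open-source Faker library to generate contextually plausible fake values; the record fields and the internal mapping shown here are simplified assumptions, not a complete solution.

```python
from faker import Faker  # pip install Faker

fake = Faker("en_AU")  # Australian-style locale for plausible fake data

# Hypothetical record containing real PII that should not reach a public LLM.
record = {
    "name": "REAL CUSTOMER NAME",
    "email": "REAL CUSTOMER EMAIL",
    "address": "REAL CUSTOMER ADDRESS",
}

# Substitute each field with contextually correct synthetic data, keeping a
# mapping so outputs can be re-linked to the real customer internally.
synthetic = {
    "name": fake.name(),
    "email": fake.email(),
    "address": fake.address(),
}
mapping = dict(zip(synthetic.values(), record.values()))

prompt = f"Draft a renewal reminder for {synthetic['name']} ({synthetic['email']})."
# `prompt` can now be sent to the LLM; the real values never leave the business.
```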

Businesses should also consider using private LLMs instead of publicly available LLMs. Private programs protect the PII within text inputs before sharing that data with third parties. The terms of use of private programs vary; however, most share only necessary and synthetic information with other language models. These models allow businesses to leverage the utility and efficiency of LLMs without infringing on their customers' or their own privacy.

Finally, LLMs are making cybercrime more accessible to hackers and more common in the commercial landscape. As a result, businesses must consistently back up their data to mitigate the impact of any data breach and regularly check the adequacy of their data handling procedures.

Next steps

While there is no timeline on when we can expect legislative reform addressing AI in Australia, all signs point to early action.

The key battle for Australia will be finding a balance between the benefits that AI such as LLMs brings to the workforce and society's increasing privacy expectations.

In the meantime, businesses should remain vigilant when using LLMs. By taking small measures like using synthetic PII replacement, data redaction, or private LLMs, businesses can leverage LLMs without exacerbating the privacy concerns they raise.

Katherine Sainty, Director
[email protected]

Ottilia Thomson, Graduate Lawyer
[email protected]

Sainty Law, Sydney
