Australia: The relationship between AI and human rights
The Australian Human Rights Commission ('AHRC') released its Human Rights and Technology Final Report 2021 in March 2021. The 240-page Report makes 38 recommendations on how Australia can seize the benefits of technological transformation whilst upholding human rights and responding to the ethical risks posed by new and emerging technologies. The Report considers how new technologies can impact an individual's right to equality, non-discrimination, privacy, and access. Katherine Sainty, Director of Sainty Law, provides an overview of the Report and the issues it raises concerning technologies and their potential impact on human rights.
The report tackles two key issues:
- The use of artificial intelligence ('AI') in an almost limitless range of decision making (including government services, recruitment, and the criminal justice system).
- Accessibility of digital communication technologies, particularly how people with disabilities navigate digitised goods, services, and facilities.
The AHRC is seeking modernisation of the current regulatory system to help facilitate innovative AI while ensuring decisions made with the use of AI are 'lawful, transparent, explainable, responsible, and subject to human oversight, review and intervention'.
What is AI?
AI is technology that uses some form of automation, machine learning, or algorithmic decision making. AI is significant because it can replace humans in the decision-making process, making predictions, recommendations, and decisions based on a range of factors.
Where AI is used in criminal justice, advertising, recruitment, healthcare, policing, and social services, it has the potential to cause harm and undermine individuals' rights to privacy, equality, and non-discrimination. For example, the use of biometric technology and facial recognition has historically raised serious concerns about the impact of these forms of 'surveillance' on an individual's privacy. It is not always clear what factors influence decisions made by AI, and some AI-made decisions are less accurate than alternative methods. A further concern posed by facial recognition technologies is the increased risk of racial profiling and high error rates for certain racial groups.
Addressing the risks
The AHRC has emphasised that existing legislation, such as the Privacy Act 1988 (No. 119, 1988) (as amended), which regulates the use of personal and sensitive information, has not proven effective against inappropriate use of facial and other biometric technology. Accordingly, the AHRC argues that targeted legislation is required to prevent and address harm associated with the use of certain AI, such as facial recognition and other biometric technology. Ultimately, if human rights standards cannot be met, legislation should prohibit the use of such technology. Until appropriate legislation is in effect, the AHRC recommends introducing a limited or partial moratorium on the use of facial recognition and other biometric technology in AI decision making, particularly in high-risk contexts such as policing and schools.
Human Rights by Design
The AHRC recommends applying a 'Human Rights by Design' approach to the development of AI-informed decision-making systems. Human Rights by Design mirrors the design-led approach of Privacy by Design, which ensures that privacy requirements are considered and built into a project from the outset. Similarly, Human Rights by Design can flag potential human rights issues early in the development stage of an AI project, allowing developers to address them before the system is deployed. The Human Rights by Design approach considers:
- Design and deliberation – systems must be designed to comply with international human rights law, with public consultation where AI poses a high risk to human rights.
- Testing – regular testing for human rights compliance before deployment.
- Oversight – establish an independent, external body to oversee human rights compliance of AI systems.
- Traceability, evidence, and proof – auditable AI systems that can be subject to meaningful review and demonstrate ongoing human rights compliance.
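To make the traceability and oversight elements above concrete, the pattern they describe could be sketched in code: every automated decision is recorded in an auditable log, and low-confidence outcomes are escalated for human review rather than acted on automatically. This is a hypothetical illustration only, not drawn from the Report; the names `AuditedDecisionSystem`, `toy_model`, and `review_threshold` are assumptions for the sketch.

```python
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

@dataclass
class DecisionRecord:
    """One automated decision, recorded so it can be audited and reviewed."""
    timestamp: str
    inputs: dict
    outcome: str
    confidence: float
    needs_human_review: bool

class AuditedDecisionSystem:
    """Hypothetical wrapper: makes every decision traceable and escalates
    low-confidence decisions for human oversight and intervention."""

    def __init__(self, model, review_threshold=0.8):
        # model is assumed to return an (outcome, confidence) pair
        self.model = model
        self.review_threshold = review_threshold
        self.audit_log = []  # evidence trail for external review

    def decide(self, inputs: dict) -> DecisionRecord:
        outcome, confidence = self.model(inputs)
        record = DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            inputs=inputs,
            outcome=outcome,
            confidence=confidence,
            needs_human_review=confidence < self.review_threshold,
        )
        self.audit_log.append(record)  # retained for audit and meaningful review
        if record.needs_human_review:
            logging.info("Decision escalated for human review: %s", asdict(record))
        return record

# Stand-in model (hypothetical): always approves, with low confidence
def toy_model(inputs):
    return ("approve", 0.65)

system = AuditedDecisionSystem(toy_model)
result = system.decide({"applicant_id": "A-1"})
```

A real system would add the other elements of the framework, for example pre-deployment testing for disparate error rates across groups and export of the audit log to an independent oversight body.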
Although a Human Rights by Design approach sets an ideal for how AI technologies should be used going forward, without the implementation of new laws or regulations it is hard to see how private and public sector entities can be compelled to develop AI systems in accordance with Human Rights by Design.
AI Safety Commissioner
The AHRC has recommended the establishment of an AI Safety Commissioner ('the Commissioner') as an independent statutory office to provide technical expertise and capacity building on the development and use of AI. The Commissioner could work with regulators and policymakers to develop relevant and up-to-date rules and policies on the development and use of AI in both the private and public sector. By providing guidance on how to assess the human rights impact of AI decision-making systems, the Commissioner would also help to encourage a new standard amongst AI developers which takes human rights into consideration.
AI can have real and serious implications for individual human rights, and due to a lack of transparency and oversight in its use there is often little recourse to challenge decisions made using AI. Australian laws and policies concerning AI decision making are lagging, and it is in the Australian Government's best interest to seriously consider the recommendations made by the AHRC.
The consideration of human rights should not stand in the way of new and emerging technologies, rather it should help guide the design and development process. Incorporating processes such as Human Rights by Design can help achieve this.
Katherine Sainty, Director
Sainty Law, Sydney