International: The UNESCO Recommendation on the Ethics of Artificial Intelligence
The rapid development of artificial intelligence (AI) has created much debate across the legal community about its risks and benefits. AI can increase productivity and efficiency, and create new opportunities for communities and businesses. However, AI use is not without challenges, including the regulatory demands it places on governments. In November 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) published its Recommendation on the Ethics of Artificial Intelligence (the Recommendation). Katherine Sainty, Katherine Voukidis, Kaelah Dowman, and Sarah Macken, from Sainty Law, take a look at what the Recommendation is and the 11 key policy areas that governments should consider when using AI.
What is the UNESCO Recommendation?
The Recommendation proposes a global framework of standards for the ethical use of AI to be adopted by UNESCO Member States. It examines ethical challenges that may arise, and how AI policies can be designed to ensure AI is used and developed in a way that benefits humanity, individuals, society, and the environment. The Recommendation has been adopted by 193 UNESCO Member States.
AI ethics: Foundational values and principles
The Recommendation supports AI policy development and legal reforms which reflect and are guided by four core values:
- 'respecting, protecting and promoting human rights and fundamental freedoms, and human dignity;
- environment and ecosystem flourishing;
- ensuring diversity and inclusiveness; and
- living in peaceful, just and interconnected societies.'
These values advance and reflect ten core human rights-based principles:
- AI use should be proportionate and do no harm;
- safety and security risks should be addressed, prevented, and eliminated so that safe AI facilitates sustainable and privacy-focused development and use;
- fairness and non-discrimination should be promoted, consistent with international law, ensuring AI benefits are available and accessible to all;
- sustainability objectives should be prioritized, including AI's human, social, cultural, economic, and environmental impact;
- privacy must be respected, protected, and promoted, for example, through data protection and governance mechanisms, and AI actors must be accountable for the systems they develop;
- human and public oversight cannot be ceded, and AI should not replace human responsibility and accountability;
- AI systems must be transparent and explainable so that people are fully informed about decisions made by AI and the reasoning behind those decisions;
- responsibility and accountability are key, and AI decisions must be attributable to AI actors throughout each stage of the AI lifecycle;
- public awareness and AI literacy should be promoted through accessible education and public engagement, to ensure effective public participation; and
- AI regulation should be adaptable and collaborative and must respect international laws and stakeholder interests.
AI ethics: Key policy areas
The Recommendation outlines 11 key policy areas that governments should consider to ensure AI use is ethical and respects human rights and freedoms. It suggests practical strategies, based on the above values and principles, for responsibly developing, using, and regulating AI globally.
1. Ethical impact assessments
States should introduce frameworks for ethical impact assessments to identify and address AI's benefits and risks, including any impact on human rights and freedoms, labor rights, the environment, and society. For example, states could assess the socio-economic impact of AI to ensure its adoption does not widen the poverty gap or digital divide.
2. Ethical governance and stewardship
States should develop regulatory mechanisms that are inclusive, transparent, multidisciplinary, and multilateral. Harms should be investigated and redressed. AI policy must comply with human rights laws. This could be achieved by:
- establishing a certification process;
- requiring organizations to engage an independent AI Ethics Officer to oversee ethical impact assessments, auditing, and continuous monitoring;
- facilitating AI governance forums;
- supporting strategic research on safety and security risks, transparency and explainability, inclusion, and literacy;
- introducing liability frameworks to ensure accountability;
- disclosing and combatting stereotypes in AI outcomes to ensure AI does not foster cultural, economic, or social inequalities or prejudice, spread misinformation or disinformation, or disrupt freedom of expression or access to information;
- setting clear requirements for AI system transparency and explainability to ensure trustworthiness; and
- testing and developing laws by involving all AI actors, for example, through policy prototypes and regulatory sandboxes.
3. Data policy
States should encourage continuous evaluation of AI training data, including the data collection and selection process, data security and protection measures, and feedback mechanisms. Safeguards should be adopted to protect privacy and individual rights to personal and sensitive data.
4. Development and international cooperation
States and organizations should prioritize AI ethics, for example, by:
- including discussions on it in international, intergovernmental, and multi-stakeholder fora;
- adhering to the above values and principles;
- contributing expertise, funding, data, knowledge, and infrastructure to address AI development issues; and
- promoting AI ethics research and innovation.
5. Environment and ecosystems
States should assess and seek to reduce AI's environmental impact, including its carbon footprint, energy consumption, and the raw material extraction required to support AI infrastructure. Incentives could be introduced to develop and adopt ethical AI solutions for disaster risk resilience, monitoring ecosystems, and protecting the planet. These should involve Indigenous communities, support the economy, and promote sustainable consumption and production patterns.
6. Gender
States should explore AI's potential to achieve gender equality and ensure its use does not undermine the safety and integrity of women. They must ensure gender biases are not replicated in AI systems and seek to eliminate gender gaps. This might be achieved through increased representation, such as greater opportunities for women to participate in science, technology, engineering, and mathematics (STEM), for example through incentives for women and policies promoting affirmative action and harassment-free environments.
7. Culture
States should use AI to preserve, enrich, understand, promote, and make accessible cultural heritage such as human language and expression. AI could be used to bridge cultural gaps, increase human understanding, and mitigate the likelihood of languages disappearing. The intersection of AI and intellectual property (IP) should be researched to develop policies that assess how AI impacts IP owners and protects their IP, as well as how best to protect IP rights in works created using AI. Libraries and archives might also use AI to enhance their collections and improve access for users.
8. Education and research
States should collaborate with educational institutions and organizations to provide AI literacy education, including literacy, numeracy, coding, and ethical skills. They should facilitate research on ethical AI use in education to ensure AI empowers students and teachers without reducing cognitive abilities, compromising privacy, or allowing information to be misused.
9. Communication and information
States should use AI systems to improve access to information and knowledge. They should implement frameworks that encourage transparency in online communications, ensure individuals have access to diverse viewpoints, notify users if and why content has been removed or changed, and provide appeal mechanisms for users to seek redress. States should invest in digital and media literacy to mitigate disinformation, misinformation, and hate speech.
10. Economy and labor
States should assess and address the impact of AI on labor markets and consider introducing broader interdisciplinary skills in education programs to give workers a better opportunity to find jobs in a rapidly changing market. They should implement frameworks for a fair transition for at-risk workers, for example, upskilling and reskilling programs, mechanisms to retain employees, and safety net programs. States should plan to ensure competitive markets and consumer protection, including measures to prevent market abuse by AI monopolies.
11. Health and social wellbeing
States should employ AI to improve health outcomes and protect the right to life, for example, by mitigating and managing disease outbreaks. They must be careful when using AI in health, including ensuring oversight to reduce bias, monitoring privacy risks, ensuring individuals provide informed consent, ensuring final decisions are made by humans, investing in ethical research committees, and developing guidelines for human-robot interactions.
Summary
The Recommendation establishes an important foundation to guide the development and use of AI. The above values and principles should underpin AI policy and practices to promote human rights and freedoms, peaceful and interconnected societies, diversity and inclusion, and flourishing environments. Businesses should be aware of the Recommendation as it will continue to inform AI policy development and implementation by governments in the near future.
Katherine Sainty Director
[email protected]
Katherine Voukidis Senior Associate
[email protected]
Kaelah Dowman Graduate Lawyer
[email protected]
Sarah Macken Paralegal
[email protected]
Sainty Law, Sydney