Saudi Arabia: SDAIA publishes AI Ethics Principles version 2.0
On September 14, 2023, the Saudi Data & Artificial Intelligence Authority (SDAIA) published version 2.0 of its Artificial Intelligence Ethics Framework, which aims to help entities develop responsible artificial intelligence (AI) solutions that limit the negative implications of AI systems while encouraging innovation. The document was published following a public consultation.
The framework applies to any entity (public, private, or non-profit) that designs, develops, deploys, implements, uses, or is affected by AI systems in Saudi Arabia, as well as to researchers, workers, and consumers. To encourage adherence, the SDAIA may award badges to implementing entities reflecting their level of compliance with AI ethics. In addition, a list of AI ethics tools and checklists is annexed to the document.
In detail, the framework:
- establishes a risk typology for the development and use of AI, divided into four levels ranging from little or no risk to unacceptable risk;
- asserts that risk management must be directly connected to AI initiatives, so that standards, testing, and controls are embedded into the various stages of the AI System Lifecycle;
- defines the AI System Lifecycle as 'the cyclical process that AI projects follow;'
- sets out the seven principles that govern AI use and development in Saudi Arabia: fairness, privacy and security, humanity, social and environmental benefits, reliability and safety, transparency and 'explainability,' and accountability and responsibility;
- provides steps to guide entities when applying the principles in each stage of the AI System Lifecycle; and
- identifies the roles and responsibilities of the SDAIA and adopting entities.