
International: OECD publishes report defining AI incidents and AI hazards

On May 6, 2024, the Organisation for Economic Co-operation and Development (OECD) published a report entitled 'Defining AI Incidents and Related Terms.'

In particular, the report provides preliminary definitions and terminology related to artificial intelligence (AI) incidents, drawing on the work of the OECD Expert Group on AI Incidents and the OECD Working Party on AI Governance.

What are AI incidents and AI disasters?

The report defines an AI incident as 'an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:

  • injury or harm to the health of a person or groups of people;
  • disruption of the management and operation of critical infrastructure;
  • violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
  • harm to property, communities or the environment.'

By contrast, a serious AI incident is defined as 'an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:

  • the death of a person or serious harm to the health of a person or groups of people;
  • a serious and irreversible disruption of the management and operation of critical infrastructure;
  • a serious violation of human rights or a serious breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
  • serious harm to property, communities or the environment.'

Building on AI incidents, the report defines an AI disaster as 'a serious AI incident that disrupts the functioning of a community or a society and that may test or exceed its capacity to cope, using its own resources. The effect of an AI disaster can be immediate and localized, or widespread and lasting for a long period of time.'

Notably, the report highlights that AI incidents can result in harm to individuals, groups, organizations, communities, society, and the environment. AI incidents may also occur before an AI system is deployed, such as when an AI model is trained on proprietary information in breach of copyright laws. Further, the use of an AI system covers harms arising from uses outside its intended purpose, as well as from intentional or unintentional misuse.

Regarding the harms, the report provides that, among other things:

  • psychological harms and harms to mental health are included under the concept of health;
  • reputational harm to individuals and intangible harms such as hate speech are included in relation to fundamental rights; and
  • harms to democratic processes, such as election processes, are included under harm to communities.

On serious AI incidents, the report notes that the accumulation of smaller AI incidents can lead to a serious AI incident and that assessing the seriousness of an AI incident is context-dependent.

What is an AI hazard?

The report defines an AI hazard as 'an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident.' This includes any of the harms above.

A serious AI hazard is 'an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to a serious AI incident or AI disaster.'

Near misses, which are events that could lead to AI incidents, are included under the definition of an AI hazard. Hazards are also noted to include not only AI models but also elements of the design, training, and operating context of an AI system. Finally, as with serious AI incidents, the accumulation of smaller AI hazards can lead to a serious AI hazard.

You can read the press release here and the report here.