
Germany: BaFin publishes guidance for financial service providers on AI

On August 1, 2024, the Federal Financial Supervisory Authority (BaFin) published guidance for financial service providers regarding artificial intelligence (AI). The guidance focuses on the fairness and bias of AI systems, includes a summary of the relevant provisions of the EU Artificial Intelligence Act (the EU AI Act), and clarifies how BaFin would address discriminatory practices.

What are the key considerations under the guidance?

Notably, the guidance identifies three important aspects of fairness:

  • algorithmic fairness - the design of the algorithm should ensure that individuals and groups of people are treated equally. This is often assessed using quantitative methods, for example by comparing the proportion of positive credit decisions for women with the proportion for men. However, such a statistical approach, because it is based on the characteristics of groups of people, is fundamentally unsuitable for identifying discrimination against individuals, so further appropriate measures are required depending on the individual case;
  • the legal concept of discrimination; and
  • bias - in the simplest case, bias refers to the dataset that serves as the basis for training the algorithm. In practice, training datasets may omit certain customer groups, for example, single women (see the sketch after this list).
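To make the dataset-bias point concrete, the following is a minimal sketch of how a provider might check whether customer subgroups are represented in training data. All column names and values are illustrative assumptions, not taken from BaFin's guidance:

```python
import pandas as pd

# Hypothetical training data; all column names and values are illustrative.
train = pd.DataFrame({
    "gender":         ["f", "m", "m", "f", "m", "m"],
    "marital_status": ["married", "single", "married", "married", "single", "married"],
    "credit_granted": [1, 0, 1, 1, 0, 1],
})

# Count how many training examples fall into each customer subgroup.
coverage = train.groupby(["gender", "marital_status"]).size()

# Enumerate every possible subgroup and flag those absent from the data,
# e.g. single women in this toy sample.
all_groups = pd.MultiIndex.from_product(
    [train["gender"].unique(), train["marital_status"].unique()],
    names=["gender", "marital_status"],
)
missing = all_groups.difference(coverage.index)
print("Unrepresented subgroups:", list(missing))
```

A model trained on such data has never seen the flagged subgroup, so its decisions for those customers rest on extrapolation rather than evidence.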

Fairness metrics and explainable AI 

According to BaFin, various definitions of fairness have been proposed in machine learning (ML) research, and one approach used in practice is to apply statistical measures to assess whether groups of people are treated equally. These measures are often referred to as fairness metrics and come in three main variants (a toy sketch follows the list below):

  • comparing predicted probabilities for different groups of people;
  • comparing predicted and actual results; and
  • comparing predicted probabilities with actual outcomes.
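As an illustrative sketch only (the scores, threshold, and group labels below are assumptions, not taken from the guidance), these three variants might be computed along the following lines:

```python
import numpy as np

# Toy scores and outcomes; every value here is an illustrative assumption.
group  = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])
scores = np.array([0.62, 0.40, 0.75, 0.20, 0.55, 0.90, 0.30, 0.81])
y_pred = (scores >= 0.5).astype(int)          # positive decision if score >= 0.5
y_true = np.array([1, 0, 1, 0, 1, 1, 0, 0])   # actual repayment outcomes

for g in ("f", "m"):
    mask = group == g
    # Variant 1: predicted positive rate per group (compare across groups).
    positive_rate = y_pred[mask].mean()
    # Variant 2: agreement between predicted and actual results per group.
    agreement = (y_pred[mask] == y_true[mask]).mean()
    # Variant 3: mean predicted probability vs. actual positive rate (calibration).
    mean_score = scores[mask].mean()
    actual_rate = y_true[mask].mean()
    print(f"group={g}: positive rate={positive_rate:.2f}, "
          f"agreement={agreement:.2f}, "
          f"mean score={mean_score:.2f} vs actual rate={actual_rate:.2f}")
```

Large gaps between groups on any of these quantities would be a signal to investigate the model and its training data further.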

BaFin noted that the problem of insufficient fairness may be exacerbated when generative AI, such as large language models (LLMs), is used.

BaFin recommendations

BaFin stated, among other things, that companies must adapt or supplement their governance processes with regard to AI/ML. In its conduct of business supervision, BaFin expects the supervised institutions and companies to clearly define responsibilities, raise awareness, and train employees entrusted with the development and use of AI/ML in order to mitigate risks. 

BaFin also suggested that financial service providers avoid unjustified discrimination against customers through the use of AI/ML, and that they set up review processes to identify possible sources of discrimination and take measures to eliminate them. Reliable and transparent data governance and data management are crucial to ensuring fair and non-discriminatory treatment of consumers. In addition, human oversight may be required to ensure responsible operations, compensate for technical deficiencies, and close data gaps. Companies can also play a key role in promoting transparency through their choice of model.

Finally, BaFin noted that if the use of AI/ML leads to discrimination prohibited by law, it will take appropriate measures, for example within the framework of malpractice supervision.

You can read the press release, available only in German, here.