Germany: Use of AI in law enforcement and the BfDI's consultation procedure
Artificial intelligence ('AI') is driving progress and prosperity in many areas of life. Innovation through AI can also be beneficial in state administration, for example in digitising official procedures through e-government. While the benefits of new technologies are obvious in some areas, the use of AI in law enforcement is highly sensitive and heavily debated. Dr. Carlo Piltz, Partner at Piltz Legal, provides an overview of the debate on the use of AI in law enforcement, as well as the public consultation1 process launched by the Federal Commissioner for Data Protection and Freedom of Information ('BfDI') on this topic.
The reason for debate is obvious: the use and misuse of AI in law enforcement can have serious consequences, raising a number of legal, ethical, and socio-political issues. The Recommendations of the Data Ethics Commission for the Federal Government's Strategy on Artificial Intelligence2, of 9 October 2018, already set out important principles and a framework. The essential benchmark for a responsible approach to AI is the Basic Law for the Federal Republic of Germany ('the Constitution'), including the fundamental rights, the principles of the rule of law and the social state, and the principle of democracy.
The BfDI has now launched its public consultation process. In the BfDI's view, this sensitive topic requires broader public debate so that the constitutional requirements for the use of AI for law enforcement purposes can be further specified.
To open this debate, the BfDI has prepared a consultation paper containing seven basic theses, which are open for public comment until 18 November 2021.
The possibilities for processing personal data with new technical methods are extensive and will become even more relevant in the future. On the basis of these new methods, there are countless ways to influence the lives and decisions of individuals. However, data can not only lead to investigative successes, but also to undue suspicion, thus permanently changing or even destroying a person's life and reputation. For this reason, according to the BfDI, legislative activities should always be accompanied by a calm, open-ended, and careful political and social discussion to clarify the impact of these technologies on citizens' freedoms on the one hand, and to establish the necessity of their use for law enforcement and security purposes on the other. In the process, the risks must be comprehensively compared with the benefits, and any discrimination and supra-individual consequences must be effectively ruled out, both for specific groups of people and for democratic processes, and the rule of law as a whole.
The use of AI for law enforcement and security purposes can also have considerable implications for personality rights. For example, by analysing personal data with the help of AI, it will be possible in the future to predict where and what kind of crimes are likely to occur. In addition, police deployment scenarios are conceivable in which separate data sets are linked with each other and patterns within them are recognised. Going further, AI-supported recognition of a person's emotional state is also possible. According to established case law, interference with the right to informational self-determination requires a legal basis that sufficiently limits the use of data to specific purposes; the higher the intensity of the interference, the higher the requirements for the specificity of the necessary regulation. According to the BfDI, numerous examples demonstrate serious risks and discriminatory effects on entire groups of people, depending on the content and quality of the data sets used.
Data subjects' rights, such as the right of every person to obtain from the controller information about the processing of data concerning them, or the deletion of that data, are of particular importance in data protection law. According to the BfDI, however, the technical potential of AI development must not be realised at the expense of data subjects' rights: the use of AI must not in any way diminish the effective exercise of those rights, and it is imperative that general data protection principles are observed. A social and legal order in which citizens can no longer know who knows what about them, when, and on what occasion would be incompatible with the right to informational self-determination.
Personal data may only be processed with AI if the processing produces reliable results, and this depends decisively on the quality of the training data used. Depending on that quality, there are serious risks and discriminatory effects for entire groups of people. Accordingly, effective quality controls must be ensured as early as the training phase, and the traceability of the information obtained must be guaranteed at all times. Only if this information is provided in a comprehensible manner can the risks of the application be assessed.
The use of certain AI methods in the field of law enforcement has the potential to reduce data subjects to mere objects of police data processing. This includes, for example, methods that record human emotions and draw conclusions from them for further proceedings. According to the BfDI, investigative measures that amount to the 'screening' of individuals are not compatible with the Constitution. Accordingly, the core area of private life and the guarantee of human dignity must not be affected, which would clearly be the case with the recording of human emotions.
It must be possible for AI-supported data processing to be comprehensively audited by the data protection supervisory authorities. Accordingly, all mechanisms used would have to be fully traceable. According to the BfDI, before legal bases are created, the evaluation must also clarify how this can be achieved.
Finally, the BfDI states that the processing of personal data by means of AI results in a high risk to the rights and freedoms of natural persons, so that a Data Protection Impact Assessment must be carried out prior to the use of AI, pursuant to Article 27(1) of Directive (EU) 2016/680 ('the Law Enforcement Directive'). However, this does not release the legislator from the obligation to conduct a general impact assessment prior to the adoption of the corresponding legal basis.
As a result, the use of AI is, in principle, also possible for the police and the judiciary and could provide relief and improve the quality of those authorities' work. However, the use of AI does not seem viable without a specific legal regulation. The amount of personal data that would be required, for example, for a single usable investigative scenario appears enormous. This would be accompanied by an equally serious interference with the personal rights of the persons concerned, which, according to established case law, cannot be justified by the existing legal bases.
In addition, when using AI systems, the fulfilment of data subjects' rights requires that sufficient documentation of the processed data and the principle of transparency are guaranteed, since safeguarding the data protection principles required by constitutional law and laid down in Article 4 of the Law Enforcement Directive is explicitly non-negotiable. Processing personal data without informing data subjects or disclosing the processing to them is not advisable.
Insofar as data is processed by AI systems, comprehensible monitoring by the supervisory authorities must be possible at all times. The form in which this is ensured may be left to the individual bodies themselves. However, it is advisable to implement a strategy enabling such control mechanisms before the AI system goes live, so that data processing can already be accounted for in the test phase. In this context, it would be helpful for the supervisory authorities to indicate, for example, the extent of logging they expect in order to ensure that data processing is traceable.
The consultation procedure launched by the BfDI deserves acknowledgement. The data protection consequences of AI are likely to become particularly relevant in the future, also, but not only, in the area of law enforcement and security. It is positive that the authority is not ignoring this but is actively engaging in the debate. Although the consultation initiative is 'only' directed at the area of law enforcement and public security, the discussion and the final position of the authority should also be relevant for private companies within the scope of the GDPR.
Dr. Carlo Piltz, Partner
Piltz Legal, Berlin
1. See: https://www.bfdi.bund.de/SharedDocs/Pressemitteilungen/DE/2021/15_Konsultationsverfahren-KI-Start.html?nn=252136 (only available in German)
2. See: https://www.bmi.bund.de/SharedDocs/downloads/EN/themen/it-digital-policy/recommendations-data-ethics-commission.pdf?__blob=publicationFile&v=3