
Germany: BSI publishes guide on generative AI models

On April 10, 2024, the Federal Office for Information Security (BSI) announced the publication of a guide titled 'Generative AI Models – Opportunities and Risks for Industry and Authorities.'

In particular, the BSI explains that the Guide provides an overview of the opportunities and risks of large language models (LLMs), a subset of generative artificial intelligence (AI), and suggests possible countermeasures to address these risks. The Guide aims to raise the security awareness of companies and authorities considering the integration of LLMs into their workflows and to promote their safe use.

Furthermore, the BSI highlights that the Guide is not intended to be exhaustive and will be continuously updated as further subfields of generative AI, such as image and video generators, are explored.

The Guide outlines, among other things:

  • the target audience and groups of relevant persons, including developers, operators, users, and attackers;
  • the definition and opportunities of LLMs, including in the field of IT security such as detection of unwanted content, text processing, and analysis of data traffic;
  • the risks of LLMs:
    • in the context of proper use, such as automation bias, lack of quality or up-to-dateness, insecure generated code, loss of confidentiality, and dependency on the developer;
    • in the context of misuse, such as misinformation, re-identification of individuals from anonymized data, and the placement of malware; and
    • in the context of attacks on LLMs, such as privacy attacks, evasion attacks, and poisoning attacks; and
  • the classification of risks and their corresponding countermeasures, such as the management of training and evaluation data, the protection of sensitive training data, and the selection of model and operator.

You can read the press release here and the Guide here.
