UK: NCSC updates ML security principles

On May 22, 2024, the UK National Cyber Security Centre (NCSC) announced that it had published an updated version of its Principles for the security of machine learning, originally published in August 2022.

What is the purpose of the machine learning (ML) security principles?

The principles aim to help anyone developing, deploying, or operating a system with an ML component make informed decisions about the design, development, deployment, and operation of their ML systems. The NCSC describes ML as a type of artificial intelligence (AI) by which computers find patterns in data or solve problems automatically, without having to be explicitly programmed, and notes that almost all AI in current use is built using ML techniques.

However, the NCSC highlighted that the principles are not a comprehensive framework for grading a system or workflow and do not provide a checklist.

What are the ML security principles?

  • secure design: applies to the design stage of an ML system and includes raising awareness of ML threats and risks, modeling the threats to the desired system, minimizing an adversary's knowledge, and analyzing vulnerabilities against inherent ML threats, which, according to the NCSC, could be accomplished through red teaming or automated testing;
  • secure development: at the developmental stage of ML systems, the NCSC recommended securing the supply chain, securing the development infrastructure, managing the full life cycle of models and datasets, and choosing a model that maximizes security and performance;
  • secure deployment: this principle is focused on protecting the system from a range of attacks. For this purpose, the NCSC recommended protecting information that could be used to attack the model, and monitoring and logging user activity. The NCSC also recommended implementing appropriate measures in accordance with the organization's security requirements, as well as implementing access controls and security by default;
  • secure operation: this principle refers to the phase of continual learning (CL) of the ML system after deployment. This principle includes understanding and mitigating the risks of using CL, appropriately sanitizing inputs to the model in use, and developing incident and vulnerability management processes (a brief illustrative sketch of input sanitization and request logging follows this list); and
  • end of life: for the decommissioning of an ML system, the NCSC recommended decommissioning assets appropriately, using destruction or archiving methods, as well as collating lessons learned and sharing them with the community.
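As a purely illustrative aid, and not part of the NCSC publication, the sketch below shows one way the recommendations to sanitize inputs to a model in use and to monitor and log user activity might look around a deployed model. The function names, feature bounds, and placeholder scoring logic are assumptions made for the example.

# Minimal illustrative sketch (assumed names and bounds, not from the NCSC guidance):
# validating and logging inputs before they reach a deployed ML model.
import logging
import math

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml-inference")

# Hypothetical bounds for each numeric feature the model expects.
FEATURE_BOUNDS = {"age": (0.0, 120.0), "income": (0.0, 1e7)}

def sanitize(features: dict) -> dict:
    """Reject malformed or out-of-range inputs before inference."""
    clean = {}
    for name, (low, high) in FEATURE_BOUNDS.items():
        value = features.get(name)
        if not isinstance(value, (int, float)) or math.isnan(value):
            raise ValueError(f"missing or non-numeric feature: {name}")
        if not low <= value <= high:
            raise ValueError(f"feature out of range: {name}={value}")
        clean[name] = float(value)
    return clean

def score(user_id: str, features: dict) -> float:
    """Log the request, sanitize it, then run a placeholder model."""
    logger.info("prediction request user=%s features=%s", user_id, features)
    clean = sanitize(features)
    # Placeholder "model": a fixed linear score standing in for a real predictor.
    prediction = 0.01 * clean["age"] + 1e-6 * clean["income"]
    logger.info("prediction response user=%s score=%.4f", user_id, prediction)
    return prediction

if __name__ == "__main__":
    print(score("user-123", {"age": 42, "income": 55000}))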

The NCSC also provided a list of resources for further reading.

What is new?

The updates by the NCSC added:

  • risks to large language model (LLM) systems;
  • updates that reinforce the importance of supply chain security and life cycle management; and
  • more focus on 'security by design' (the idea that AI/ML tools, like any software system, should be developed in a way that treats security as a core business priority).

You can read the press release here and the ML security principles here.