Continue reading on DataGuidance with:
Free Member
Limited ArticlesCreate an account to continue accessing select articles, resources, and guidance notes.
Already have an account? Log in
Singapore: CSA requests comments on guidelines on securing AI Systems
On July 31, 2024, the Cyber Security Agency of Singapore (CSA) requested public comments on the draft Guidelines on Securing AI Systems and draft Companion Guide on Securing AI Systems.
In particular, the CSA highlighted that the draft Guidelines aim to ensure the secure use of artificial intelligence (AI) throughout its lifecycle, as AI is increasingly integrated into enterprise systems. The draft Guidelines clarify that they should be used alongside existing security best practices and requirements for IT environments. Further, the draft Guidelines provide that they do not address specific AI considerations, including its misuse through AI-enabled cyberattacks, misinformation, and the use of deepfakes for scams. The draft Guidelines separate the AI lifecycle into Planning and Design, Development, Deployment, Operations and Maintenance, and End of Life.
In terms of risks, the draft Guidelines consider both classic cybersecurity risks, such as supply chain attacks or unauthorized access, and adversarial machine learning (ML) techniques that influence ML models to produce inaccurate, biased, harmful, and/or confidential information through data poisoning, evasion attacks, or extraction attacks.
Cybersecurity recommendations
Centrally, the draft Guidelines provide that AI should be secure by design and secure by default.
During the planning and design stage, the draft Guidelines recommend that organizations conduct a risk assessment. However, the draft Guidelines note that organizations should conduct them more frequently throughout the AI lifecycle than for conventional systems, even if such assessments are based on existing governance and policies.
The draft Guidelines also outline considerations to take into account during the development phase of the AI lifecycle, such as the different AI models to use, training data, and use of sensitive data or intellectual property.
In the deployment of AI systems, the draft Guidelines stipulate that organizations must establish incident management procedures and deploy AI systems after having conducted appropriate security checks.
Throughout the operation of AI systems, organizations are recommended to monitor AI system outputs and behavior, establishing a vulnerability disclosure process for feedback on any findings of concern and anomalous behavior.
Finally, organizations are recommended to ensure proper data and model disposal in line with relevant standards and regulations on data destruction to prevent data breaches.
Draft Companion Guide
The draft Companion Guide elaborates on the recommendations included within the draft Guidelines. This includes the provision of examples and case studies on particular risks at different stages of the AI lifecycle with information on the level of risk and accompanying mitigation measures.
For example, during the operation of AI systems, organizations are recommended to monitor inputs to AI models and systems for possible attacks and suspicious activity, owing to risks including adversarial attacks and data exfiltration. Accordingly, organizations are recommended to:
- monitor and validate input prompts or queries for attempts to access, modify, or exfiltrate confidential information;
- log inputs and their confidentiality risk; and
- consider the use of classifiers to detect malicious inputs and log them for future review.
Public comments may be submitted to [email protected] until September 15, 2024.
You can read the press release here, the draft Guidelines here, and the draft Companion Guide here.