In brief
The Cyber Security Agency of Singapore (CSA) has published its Guidelines on Securing Artificial Intelligence (AI) Systems (“Guidelines”) and an accompanying Companion Guide on Securing AI Systems (“Companion Guide”).
The Guidelines advocate a “secure by design” and “secure by default” approach to address both existing cybersecurity threats and emerging risks, such as adversarial machine learning. They are intended to provide system owners with principles for raising awareness and implementing security controls throughout the AI lifecycle.
The Guidelines are an open, collaborative resource that, while not mandatory, provide useful guidance on measures and controls drawn from industry best practices, academic insights, and resources such as the MITRE ATLAS database and the OWASP Top 10 for Machine Learning and Generative AI.
The CSA is currently seeking feedback on the Guidelines and the Companion Guide. Interested organizations should submit comments to Aisecurity@csa.gov.sg by 11:59pm on 15 September 2024.
Guidelines on Securing AI Systems
The Guidelines address the protection of AI systems throughout their lifecycle, focusing on cybersecurity risks rather than AI safety, fairness, transparency, or the misuse of AI in cyberattacks. Organizations are encouraged to:
- Raise awareness and conduct risk assessments during the planning and design phases
- Secure the supply chain, select suitable models, track and protect AI assets, and secure the development environment during the development phase
- Ensure deployment infrastructure is secure, establish incident management procedures, and release responsibly during the deployment phase
- Monitor inputs and outputs, manage updates securely, and establish vulnerability disclosure processes during the operations and maintenance phase
- Properly dispose of data and models at the end of their lifecycle
Companion Guide on Securing AI Systems
The Companion Guide is a more detailed document designed to support system owners in implementing the Guidelines, setting out practical controls that system owners may consider when adopting AI systems. For example, the Companion Guide explains how organizations should:
- Start with a risk assessment
- Identify relevant measures/controls for each stage of the AI lifecycle in the checklist, covering planning and design, development, deployment, operation and maintenance, and retirement
The Companion Guide also provides detailed walkthroughs and implementation examples showing how to apply these controls to AI systems.
The Guidelines and Companion Guide are welcome developments that highlight the CSA’s commitment to promoting the security of AI systems in a collaborative and proactive manner. As Singapore continues to be at the forefront of technological innovation, resources such as the Guidelines and Companion Guide will play an important role in building trust and ensuring that Singapore’s AI systems remain robust and resilient against vulnerabilities.
* * * * *
© 2024 Baker & McKenzie.Wong & Leow. All rights reserved. Baker & McKenzie.Wong & Leow is a limited liability company and a member firm of Baker & McKenzie International, a global law firm with member firms located throughout the world. In accordance with the common terminology used in professional services organizations, reference to a “partner” means a person who is a partner or equivalent in such a law firm. Similarly, reference to an “office” means an office of any such law firm. This may qualify as “attorney advertising” requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.