How a New Code of Practice Could Help Mitigate AI Risk
The British government is betting big on AI. Given the state of public finances and a prolonged national productivity slump, in many ways it has to. An ambitious AI Opportunities Action Plan, announced in January, has much to recommend it. However, where there’s opportunity, there’s also risk.
That’s why it’s heartening to see the government also taking concrete steps to improve the security of the AI ecosystem. Announced just a fortnight after the action plan, a new “world-first” code of practice will help guide developers, system operators, and other stakeholders in building, deploying, maintaining, and monitoring their AI systems in accordance with security best practices. It’s hoped that the code will eventually form the basis of a new international ETSI standard.
Why We Need It
Although the technology is advancing rapidly, we already know some of the key threats AI poses to businesses if not designed securely. They include:
- Prompt injection attacks, which allow malicious actors to bypass built-in safety guardrails and abuse large language models (LLMs) for nefarious ends. The dark web is reportedly already awash with “jailbreak-as-a-service” offerings that enable exactly this.
- Vulnerabilities and misconfigurations in AI system components, which could be exploited to steal or “poison” sensitive training data or models. Such security holes have already been discovered across the supply chain, including in vector databases, open-source components and LLM-hosting platforms.
- Denial of service, where an attacker feeds an unusually large input into an LLM to exhaust compute resources.
- Sensitive data disclosure (e.g. customer data) in the model’s response to a user’s prompt, whether unintentional or maliciously induced, as illustrated in the sketch below.
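To make a couple of these risks more concrete, the minimal Python sketch below shows the kind of lightweight guardrails a system operator might wrap around an LLM call: a hard cap on input size (addressing the denial-of-service scenario) and a naive pattern check on responses before they reach the user (addressing accidental data disclosure). The `call_llm` placeholder, limits and patterns are illustrative assumptions, not anything prescribed by the code of practice itself.

```python
import re

# Illustrative limits and patterns only; real values would come from
# an organisation's own risk assessment, not from the code of practice.
MAX_INPUT_CHARS = 8_000
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                   # naive card-number check
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # naive email-address check
]


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (hosted API or local model)."""
    return f"Echo: {prompt}"


def guarded_completion(user_input: str) -> str:
    # 1. Reject unusually large inputs before they reach the model,
    #    limiting resource-exhaustion style denial of service.
    if len(user_input) > MAX_INPUT_CHARS:
        raise ValueError("Input exceeds the configured size limit")

    response = call_llm(user_input)

    # 2. Screen the response for obviously sensitive strings before
    #    returning it to the user. A last line of defence, not a substitute
    #    for keeping such data out of the model's context in the first place.
    for pattern in SENSITIVE_PATTERNS:
        response = pattern.sub("[REDACTED]", response)

    return response


if __name__ == "__main__":
    print(guarded_completion("Summarise our refund policy for a customer."))
```

Controls like these are deliberately crude; the point is simply that each of the threats above has practical engineering mitigations, which is what the code of practice asks organisations to apply systematically.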
The truth is that modern AI systems expand an already broad corporate cyber-attack surface, spanning APIs, models, open-source code, training datasets, front-end interfaces and cloud infrastructure. The risk of compromise grows as AI becomes embedded in more business processes and applications. The OWASP Top 10 for LLM Applications is a good starting point, but it is not an exhaustive list of AI-related security risks.
Who It’s For
The code of practice itself applies to a number of key stakeholders. These include:
- Software vendors offering AI services to customers
- Software vendors that use AI internally, whether it has been created in-house or by a third party
- Regular organisations that create AI systems for in-house use
- Regular organisations that only use third-party AI components in-house
Only “AI vendors” that sell models or components, but don’t actually develop or deploy them, are likely to fall outside the scope of the code.
What’s In The Code
The code of practice is divided into 13 “principles” covering every stage of the AI lifecycle, across design, development, deployment, maintenance and end of life. They are:
- Raise awareness of AI security threats and risks by training cybersecurity staff and the wider employee base.
- Design AI systems for security, functionality and performance based on thorough planning and risk assessments.
- Evaluate and model threats, and manage risks related to the use of AI, through security controls and continuous monitoring.
- Enable human responsibility and oversight for AI systems.
- Identify, track, and protect assets through a comprehensive inventory and tracking tools that account for interdependencies and connectivity.
- Secure infrastructure such as APIs, models, data, and training and processing pipelines. This should include a vulnerability disclosure policy and incident management/system recovery plans.
- Secure the software supply chain through risk assessments, mitigating controls and documentation.
- Document data, models and prompts with a clear audit trail of system design and post-deployment maintenance plans. Documentation will be needed to allay concerns over data poisoning when public training data is used.
- Conduct appropriate testing and evaluation, using independent testers, covering all models, applications and systems prior to deployment.
- Deploy securely by telling end users how their data will be used, accessed, and stored. Stakeholders should also provide security-relevant updates, guidance on management and configuration, and assistance to contain and mitigate the impact of any incident.
- Maintain regular security updates, patches and mitigations.
- Monitor system behaviour by logging system and user actions and detecting anomalies, security breaches or unexpected behaviour over time (a minimal logging sketch follows this list).
- Ensure proper data and model disposal when decommissioning or transferring ownership of a model or training data.
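As one illustration of what the monitoring principle might look like in practice, the sketch below records each prompt/response interaction as a structured audit log entry and flags a couple of simple anomalies (oversized prompts and repeated failures). The field names and thresholds are assumptions for illustration only; the code of practice does not prescribe any particular logging format.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai-audit")

# Illustrative thresholds; a real deployment would tune these against
# its own risk assessment and baseline behaviour.
MAX_PROMPT_CHARS = 8_000
FAILURE_ALERT_THRESHOLD = 3

_failure_counts: dict[str, int] = {}


def log_interaction(user_id: str, prompt: str, response: str, ok: bool) -> None:
    """Write one structured audit record and flag simple anomalies."""
    anomalies = []
    if len(prompt) > MAX_PROMPT_CHARS:
        anomalies.append("oversized_prompt")

    if not ok:
        _failure_counts[user_id] = _failure_counts.get(user_id, 0) + 1
        if _failure_counts[user_id] >= FAILURE_ALERT_THRESHOLD:
            anomalies.append("repeated_failures")
    else:
        _failure_counts[user_id] = 0

    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "ok": ok,
        "anomalies": anomalies,
    }
    logger.info(json.dumps(record))


if __name__ == "__main__":
    log_interaction("user-42", "What is our leave policy?", "Staff get 25 days.", ok=True)
```

Feeding records like these into an existing SIEM or log pipeline gives security teams the kind of over-time visibility the monitoring principle calls for.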
Putting the Building Blocks in Place
The good news is that best-practice cybersecurity standards can help organisations comply with the code. ISO 27001 is cited in the document itself, but David Cockcroft, information security sector manager at ISOQAR, claims that ISO 42001 is the best fit. This relatively new standard is designed to help organisations establish, implement, maintain, and continually improve an Artificial Intelligence Management System (AIMS).
“The fundamental links to ISO 42001 are evident from the outset. The audience and stakeholders within the code of practice are directly tied to the organisational context and the organisation’s role in the AI lifecycle found in the standard,” he tells ISMS.online.
“All principles within the code of practice can be mapped to the clauses and controls within the 42001 standard.”
Andreas Vermeulen, head of AI at Avantra, agrees.
“By integrating ISO 42001 with current security standards, organisations can improve compliance with the code, ensuring that AI-specific security and operational risks are adequately addressed, thus enhancing the overall security posture of AI,” he tells ISMS.online.
“The UK is setting a strong example by establishing comprehensive guidelines that ensure the safe and secure deployment of AI technologies. These efforts position it as a leader in the responsible development of AI systems.”
At ISMS.online, we’ve previously questioned whether the government’s ambitious AI plans could also inject cyber-risk into the infrastructure and applications needed to power its much-touted “decade of national renewal”. So it’s good to see some useful guidance published alongside often vague promises of tech-led growth.
“The new code of practice, which we have produced in collaboration with global partners, will not only help enhance the resilience of AI systems against malicious attacks but foster an environment in which UK AI innovation can thrive,” says NCSC technical director Ollie Whitehouse.
As the code matures and evolves into an international standard, it could become the de facto baseline for AI security for some time to come.