Cyber Security in AI Governance
The new ISO/IEC 42001 AI governance standard marks advancement for AI management systems, however there are a few areas that could be improved:
1. No Direction for Adversarial Testing
The standard promotes generic risk assessments, but lacks structured methods for adversarial testing. There is no mention of Model Poisoning, Prompt Injection, or Exploit-based Testing, which are all crucial for AI assurance.
2. AI Supply Chain Exposure
Third-party risks are acknowledged, but fortified AI components, such as: compromised libraries or pre-trained models aren't addressed directly. This allows for the supply chain to be exploited.
3. Missing Red Team Simulation Guidance
The standard provides no framework for simulating malicious actor behaviour or running controlled offensive scenarios. This is important for assessing real-world AI resilience.
4. No Attack Surface Validation
There is no guidance on continuous offensive security testing against AI systems. Without consistent testing, vulnerabilities remain hidden until exploited by threat actors.
5. Ignoring Advancements in Social Engineering
There is a lack of acknowledgement on how AI systems have the potential to amplify social engineering attacks, by improving personalisation and automating disinformation.
6. Ethical Boundaries for Offensive Use Are Unclear
There is a lack of guidance on the ethical use of AI for offensive security purposes, leaving organisations without clear guidelines for responsible red teaming.
Cyber Threats are evolving - AI governance must evolve too.