AISC-05

Do you plan for and mitigate supply-chain risk related to your AI features?

Explanation

This question is asking how your organization identifies and manages risks in the AI supply chain - the components, data sources, and dependencies that make up your AI systems.

Supply-chain risk in AI refers to vulnerabilities that could be introduced through third-party components, pre-trained models, datasets, or libraries used in your AI systems. These risks include security vulnerabilities, data poisoning, backdoors in models, and compromised dependencies.

The guidance specifically mentions two technical approaches:

1. SAST (Static Application Security Testing): analyzing source code or binaries for security vulnerabilities without executing the program. For AI systems, this includes scanning AI model code and dependencies.

2. SBOM (Software Bill of Materials): an inventory of all components in your software, including their versions and relationships. For AI systems, this documents all libraries, frameworks, pre-trained models, and datasets used.

This question is being asked because AI systems often rely heavily on open-source components and pre-trained models that could introduce security vulnerabilities. Assessors want to know that you have visibility into what makes up your AI systems and that you have processes to identify and mitigate risks.

To best answer this question, describe your specific processes for:

- Maintaining inventories of AI components and dependencies
- Vetting third-party AI components before use
- Continuously monitoring for vulnerabilities in AI dependencies
- Testing AI code and models for security issues
- Responding to discovered vulnerabilities

Include specific tools, processes, and governance structures you have in place.
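As a concrete illustration of the SBOM piece, a team without commercial tooling could start with something as simple as the sketch below. It enumerates the Python packages installed in an AI project's environment and writes a minimal CycloneDX-style inventory. The output filename and the restriction to library components are assumptions made for the example; a full AI SBOM would also record pre-trained model artifacts and datasets.

    import json
    from importlib.metadata import distributions

    def build_sbom():
        """Collect installed packages into a minimal CycloneDX-style SBOM.

        Only Python library dependencies are enumerated here; model
        artifacts and datasets would need to be added separately.
        """
        components = []
        for dist in distributions():
            components.append({
                "type": "library",
                "name": dist.metadata["Name"],
                "version": dist.version,
            })
        return {
            "bomFormat": "CycloneDX",
            "specVersion": "1.5",
            "components": sorted(components, key=lambda c: c["name"]),
        }

    if __name__ == "__main__":
        # "ai-sbom.json" is an arbitrary filename chosen for this example.
        with open("ai-sbom.json", "w") as f:
            json.dump(build_sbom(), f, indent=2)

In practice most teams would generate this with an off-the-shelf tool, but even a homegrown inventory like this gives you something to check against vulnerability databases.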

Guidance

Looking for SAST (Static Application Security Testing) and SBOM (Software Bill of Materials) attestations.

Example Responses

Example Response 1

Yes, we implement a comprehensive supply-chain risk management program for our AI features. We maintain a detailed Software Bill of Materials (SBOM) for all AI systems that documents every component, including open-source libraries, pre-trained models, and datasets with their sources and versions. Our security team uses Snyk and WhiteSource for continuous monitoring of vulnerabilities in these dependencies. We conduct Static Application Security Testing (SAST) using SonarQube and Checkmarx on all AI code, including model training pipelines. For third-party models, we perform security assessments before integration, including adversarial testing and privacy analysis. We've established a formal vendor risk assessment process for any external AI services. Our AI governance committee reviews supply chain risks quarterly, and we have documented procedures for rapid response to critical vulnerabilities in AI components.
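If you describe "security assessments before integration" of third-party models like this response does, one small, concrete control worth being able to evidence is artifact integrity verification. The sketch below is an illustration, not a prescribed implementation: the model path and pinned digest are placeholders, and in practice the expected hash would come from the model publisher or an internal artifact registry.

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Hash a (possibly large) model file in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_model_artifact(path: Path, expected_sha256: str) -> None:
        """Refuse to use a downloaded model whose hash does not match the pin."""
        actual = sha256_of(path)
        if actual != expected_sha256:
            raise RuntimeError(
                f"{path} failed integrity check: got {actual}, "
                f"expected {expected_sha256}"
            )

    # Placeholder path and digest, for illustration only.
    verify_model_artifact(
        Path("models/classifier.bin"),
        "0123abcd...pinned-digest-goes-here",
    )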

Example Response 2

Yes, we mitigate AI supply-chain risk through several measures. We use GitHub Advanced Security with CodeQL for SAST on all AI code repositories, with custom rules specific to machine learning vulnerabilities. Our DevSecOps pipeline generates and maintains SBOMs for all AI systems using CycloneDX format, which are reviewed monthly for security issues. For third-party models and datasets, we have a formal evaluation process that includes provenance verification, bias testing, and security review before approval. We've implemented a zero-trust approach for our AI infrastructure, with strict access controls and continuous monitoring. Our data science and security teams collaborate on quarterly risk assessments of our AI supply chain, and we maintain an incident response playbook specific to AI security incidents. We also actively participate in the AI security community to stay informed about emerging threats.
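To make the "reviewed monthly for security issues" step concrete, here is one way such a review could be automated against a CycloneDX SBOM, using the public OSV.dev vulnerability query API. This is a sketch under assumptions: the SBOM filename matches the earlier example, and every component is treated as a PyPI package, which a real pipeline would not assume.

    import json
    import urllib.request

    OSV_QUERY_URL = "https://api.osv.dev/v1/query"

    def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
        """Query OSV.dev for published vulnerabilities in one component."""
        payload = json.dumps({
            "version": version,
            "package": {"name": name, "ecosystem": ecosystem},
        }).encode()
        req = urllib.request.Request(
            OSV_QUERY_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("vulns", [])

    # "ai-sbom.json" matches the filename assumed in the earlier sketch.
    with open("ai-sbom.json") as f:
        sbom = json.load(f)

    for component in sbom["components"]:
        for vuln in known_vulns(component["name"], component["version"]):
            print(f'{component["name"]} {component["version"]}: {vuln["id"]}')

Wiring a check like this into CI, with findings routed to the incident response playbook the response mentions, is the kind of specific detail assessors look for.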

Example Response 3

We currently do not have a formal program to mitigate supply-chain risk for our AI features. Our AI development is relatively new, and we're using standard open-source libraries and pre-trained models without specific security vetting beyond what our general application security program provides. We don't currently generate SBOMs specifically for our AI components, though we do track major dependencies manually. We run basic SAST tools on our codebase but don't have AI-specific security testing in place. We recognize this is a gap in our security program and are planning to implement AI-specific supply chain risk management in the next quarter, including formal SBOM generation and specialized security testing for AI components. In the interim, we're mitigating risk by limiting our AI features to non-critical functions and performing manual code reviews of AI implementations.

Context

Tab
AI
Category
AI Data Security
