Do you plan for and mitigate supply-chain risk related to your AI features?
Explanation
Guidance
Looking for SAST (Static Application Security Testing) and SBOM (Software Bill of Materials) attestations.
Example Responses
Example Response 1
Yes, we implement a comprehensive supply-chain risk management program for our AI features. We maintain a detailed Software Bill of Materials (SBOM) for all AI systems that documents every component, including open-source libraries, pre-trained models, and datasets, with their sources and versions. Our security team uses Snyk and WhiteSource for continuous monitoring of vulnerabilities in these dependencies. We conduct Static Application Security Testing (SAST) using SonarQube and Checkmarx on all AI code, including model training pipelines. For third-party models, we perform security assessments before integration, including adversarial testing and privacy analysis. We've established a formal vendor risk assessment process for any external AI services. Our AI governance committee reviews supply-chain risks quarterly, and we have documented procedures for rapid response to critical vulnerabilities in AI components.
Example Response 2
Yes, we mitigate AI supply-chain risk through several measures. We use GitHub Advanced Security with CodeQL for SAST on all AI code repositories, with custom rules specific to machine learning vulnerabilities. Our DevSecOps pipeline generates and maintains SBOMs for all AI systems in CycloneDX format, which are reviewed monthly for security issues. For third-party models and datasets, we have a formal evaluation process that includes provenance verification, bias testing, and security review before approval. We've implemented a zero-trust approach for our AI infrastructure, with strict access controls and continuous monitoring. Our data science and security teams collaborate on quarterly risk assessments of our AI supply chain, and we maintain an incident response playbook specific to AI security incidents. We also actively participate in the AI security community to stay informed about emerging threats.
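To make the CycloneDX reference above concrete, here is a minimal sketch of what an SBOM entry covering both a code dependency and a pre-trained model might look like. All component names, versions, and package URLs below are hypothetical illustrations, not components from any real system; the sketch uses only the Python standard library.

```python
import json

# Minimal CycloneDX-style SBOM (JSON) for an AI service.
# Component names, versions, and purls are hypothetical examples.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            # An ordinary open-source library dependency.
            "type": "library",
            "name": "torch",
            "version": "2.1.0",
            "purl": "pkg:pypi/torch@2.1.0",
        },
        {
            # CycloneDX 1.5 added "machine-learning-model" as a component
            # type, so pre-trained models can be tracked alongside code.
            "type": "machine-learning-model",
            "name": "example-sentiment-model",
            "version": "3.2",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

Recording models and datasets as first-class components, rather than only code libraries, is what distinguishes an AI SBOM from a conventional one.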
Example Response 3
We currently do not have a formal program to mitigate supply-chain risk for our AI features. Our AI development is relatively new, and we're using standard open-source libraries and pre-trained models without specific security vetting beyond what our general application security program provides. We don't currently generate SBOMs specifically for our AI components, though we do track major dependencies manually. We run basic SAST tools on our codebase but don't have AI-specific security testing in place. We recognize this is a gap in our security program and are planning to implement AI-specific supply-chain risk management in the next quarter, including formal SBOM generation and specialized security testing for AI components. In the interim, we're mitigating risk by limiting our AI features to non-critical functions and performing manual code reviews of AI implementations.
Context
- Tab
- AI
- Category
- AI Data Security

