AIGN-01

Does your solution have an AI risk model when developing or implementing your solution's AI model?

Explanation

This question asks whether your AI solution incorporates a formal risk management framework or methodology designed specifically for AI systems during development or implementation. An AI risk model is a structured approach to identifying, assessing, and mitigating the risks associated with artificial intelligence systems. These frameworks help organizations systematically address potential harms, biases, security vulnerabilities, and other risks unique to AI.

The question appears in security assessments because AI systems introduce novel risks beyond traditional software, including:

1. Data poisoning or model manipulation
2. Adversarial attacks that can trick AI systems
3. Privacy concerns with training data
4. Bias and fairness issues
5. Explainability and transparency challenges
6. Potential for misuse or unintended consequences

The guidance mentions several established frameworks:

- AI RMF (NIST AI Risk Management Framework)
- OWASP Top 10 for Large Language Model Applications
- RAFT (Responsible AI Framework for Transparency)
- MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)

To best answer this question:

1. Identify which formal AI risk frameworks your organization uses
2. Describe how these frameworks are integrated into your development lifecycle
3. Provide specific examples of how you identify and mitigate AI-specific risks (a simple scoring sketch follows below)
4. If you use a custom framework, explain how it addresses the key areas covered by standard frameworks
5. Be honest if you don't currently use a formal AI risk model, but describe any alternative approaches you take to manage AI risks
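For teams building a first pass at such a model, prioritization often starts with simple likelihood-times-impact arithmetic before risks are mapped onto a framework such as AI RMF. The Python sketch below illustrates that idea only; the 1-5 scales, category names, and example risks are assumptions made for this illustration, not part of any standard.

# Illustrative only: a minimal likelihood x impact helper for ranking
# AI-specific risks. The 1-5 scales and category names are assumptions,
# not mandated by AI RMF, OWASP, or any other framework.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str        # e.g. "training-data poisoning"
    category: str    # e.g. "security", "fairness", "privacy"
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    AIRisk("training-data poisoning", "security", 2, 5),
    AIRisk("prompt injection", "security", 4, 4),
    AIRisk("demographic bias in outputs", "fairness", 3, 4),
]

# Highest-scoring risks get mitigated first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.category:<9} {r.name}")

However simple, a scored list like this gives assessors concrete evidence that risks are identified and triaged rather than handled ad hoc.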

Guidance

Examples include AI RMF, OWASP Top 10, RAFT, MITRE ATLAS.

Example Responses

Example Response 1

Yes, our solution incorporates the NIST AI Risk Management Framework (AI RMF) throughout our AI development lifecycle. We have implemented all four core functions: Govern, Map, Measure, and Manage. During the development phase, we conduct regular risk assessments using the NIST AI RMF Playbook to identify potential vulnerabilities and biases. We maintain a comprehensive AI risk register that tracks identified risks, their potential impacts, mitigation strategies, and verification measures. Additionally, we supplement the NIST framework with elements from the OWASP Top 10 for LLM Applications to address specific security vulnerabilities in our language models. Our AI governance committee reviews all risk assessments quarterly and approves deployment only after verifying that appropriate controls are in place.
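For illustration, a risk register entry of the kind this response describes might capture fields like the following. This is a hypothetical sketch: the field names and values are invented, not taken from the NIST AI RMF Playbook, although LLM01 really is the OWASP Top 10 for LLM Applications identifier for prompt injection.

# Hypothetical sketch of one AI risk register entry, tracking the fields
# mentioned above: risk, impact, mitigation strategy, and verification.
risk_entry = {
    "id": "AIR-007",                 # invented identifier scheme
    "risk": "Prompt injection against the customer-facing LLM",
    "framework_ref": "OWASP LLM01",  # real OWASP LLM Top 10 ID
    "potential_impact": "Data exfiltration via manipulated model output",
    "mitigation": "Input sanitization, output filtering, least-privilege tools",
    "verification": "Quarterly red-team exercise",
    "status": "mitigated",
    "next_review": "2025-09-01",
}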

Example Response 2

Yes, we have implemented a hybrid risk model that combines elements from MITRE ATLAS with our own proprietary risk framework. Our approach focuses on three key areas: (1) Security - we use MITRE ATLAS to identify potential adversarial attacks against our computer vision models and implement appropriate defenses; (2) Fairness & Bias - we conduct regular bias audits using standardized metrics across demographic groups; and (3) Transparency - we maintain detailed documentation of model limitations and confidence levels. Each AI project undergoes a mandatory risk assessment at four stages: planning, development, pre-deployment, and post-deployment monitoring. We have established threshold criteria for each risk category that must be met before advancing to the next development stage. Our AI Ethics Board provides oversight and final approval before any model enters production.
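To make the bias-audit step concrete, one widely used standardized metric is demographic parity difference: the gap between the highest and lowest positive-prediction rates across demographic groups. The sketch below computes it over invented data; the group names and predictions are purely illustrative.

# Illustrative bias-audit sketch: demographic parity difference, the gap
# between groups' positive-prediction rates. All data here is invented.
from collections import defaultdict

predictions = [  # (demographic_group, model_predicted_positive)
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, positive in predictions:
    counts[group][0] += int(positive)
    counts[group][1] += 1

rates = {g: pos / total for g, (pos, total) in counts.items()}
dp_diff = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity difference = {dp_diff:.2f}")

A threshold on a metric like this is one way to implement the pass/fail criteria the response mentions between development stages.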

Example Response 3

No, we currently do not have a formal AI risk model in place for our solution. While we perform standard software security testing and code reviews, we have not yet implemented an AI-specific risk framework such as AI RMF, the OWASP Top 10 for LLMs, or MITRE ATLAS. We recognize this as a gap in our security posture and have initiated a project to adopt the NIST AI Risk Management Framework within the next quarter. In the interim, we are mitigating risks through manual reviews of training data, regular testing for obvious biases, and limiting the deployment scope of our AI features. We have also engaged an external consultant to help us establish appropriate AI governance processes and risk assessment methodologies.

Context

Tab: AI
Category: General AI Questions

ResponseHub is the product I wish I had when I was a CTO

Previously, I was co-founder and CTO of Progression, a VC-backed HR-tech startup used by some of the biggest names in tech.

As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate, and arrived like London buses - 3 at a time!

I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

Signature
Neil Cameron
Founder, ResponseHub