AIPL-02

Have you identified and measured AI risks?

Explanation

This question is asking whether your organization has formally identified the potential risks associated with your AI systems and established methods to measure or quantify those risks. In a security assessment context, it aims to determine whether you have a structured approach to AI risk management.

As AI systems become more prevalent, they introduce unique security, privacy, ethical, and operational risks that differ from those of traditional software. These risks might include data poisoning, model inversion attacks, algorithmic bias, or unintended consequences of AI decisions. The assessor wants to know whether you've gone beyond acknowledging that AI risks exist to actually documenting the specific risks relevant to your AI systems and implementing quantifiable metrics to track and manage those risks over time. This demonstrates maturity in your AI governance approach.

To best answer this question, you should:

1. Describe your formal AI risk assessment process
2. Mention specific AI risks you've identified
3. Explain how you measure these risks (metrics, thresholds, etc.)
4. Reference any frameworks or standards you follow (NIST AI RMF, ISO, etc.)
5. Indicate the frequency of risk assessments
6. Mention documentation that evidences this work
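Points 3 and 6 typically come together in a risk register. As a purely illustrative sketch (the AIRiskEntry class, field names, thresholds, and risk details below are hypothetical, not a prescribed schema), a single register entry tracked in Python might look like this:

    from dataclasses import dataclass, field
    from datetime import date

    # Hypothetical sketch of one AI Risk Register entry; the schema is illustrative.
    @dataclass
    class AIRiskEntry:
        risk_id: str               # e.g. "AI-RISK-004"
        description: str           # what can go wrong
        system: str                # which AI system the risk applies to
        metric: str                # how the risk is quantified
        threshold: float           # measurement level that triggers remediation
        latest_measurement: float  # most recent measured value
        measured_on: date          # when it was last measured
        mitigations: list[str] = field(default_factory=list)

        def breaches_threshold(self) -> bool:
            # True when the latest measurement exceeds the agreed threshold.
            return self.latest_measurement > self.threshold

    # Example entry: tracking algorithmic bias in a screening model.
    entry = AIRiskEntry(
        risk_id="AI-RISK-004",
        description="Algorithmic bias in candidate screening model",
        system="hr-screening-v2",
        metric="statistical parity difference",
        threshold=0.05,
        latest_measurement=0.032,
        measured_on=date(2024, 1, 15),
        mitigations=["quarterly bias audit", "balanced retraining set"],
    )
    print(entry.breaches_threshold())  # False: 0.032 is within the 0.05 threshold

The point is not the code itself but the discipline it represents: each documented risk has a description, a named metric, a threshold, and a measurement history.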

Guidance

Looking for documentation and policies around how AI risks are identified and measured.

Example Responses

Example Response 1

Yes, we have implemented a comprehensive AI risk management framework based on the NIST AI RMF. Our AI governance committee conducts quarterly risk assessments for all production AI systems. We've identified and categorized risks including model drift, data poisoning vulnerabilities, algorithmic bias, and explainability challenges specific to each model. Each risk is measured using defined metrics - for example, we measure bias using statistical parity difference and equal opportunity difference across protected attributes, with thresholds set at <5% variance. For security risks, we conduct adversarial testing quarterly with documented attack vectors and success rates. All risk assessments are documented in our AI Risk Register, which tracks risk levels, mitigations, and trends over time. We can provide our AI Risk Assessment Methodology document and a redacted risk register as evidence.
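The bias metrics named in this response can be computed directly from model outputs. Here is a minimal Python sketch, assuming a binary classifier and a binary protected attribute; the toy data, function names, and the 5% threshold check are illustrative of the approach described above, not a required method:

    import numpy as np

    def statistical_parity_difference(y_pred, group):
        # Absolute difference in positive-prediction rates between two groups.
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equal_opportunity_difference(y_true, y_pred, group):
        # Absolute difference in true-positive rates between two groups.
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
        return abs(tpr(0) - tpr(1))

    # Toy data: binary predictions and a binary protected attribute.
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]

    THRESHOLD = 0.05  # the <5% variance threshold cited in the response
    spd = statistical_parity_difference(y_pred, group)
    eod = equal_opportunity_difference(y_true, y_pred, group)
    print(f"SPD={spd:.3f} EOD={eod:.3f} within threshold: {max(spd, eod) < THRESHOLD}")

In practice a fairness library would add confidence intervals and handle multi-valued protected attributes, but the measurements themselves are this simple: a rate per group, compared against an agreed tolerance.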

Example Response 2

Yes, our organization has established an AI Risk Management Program as of Q2 2023. We've conducted an initial risk assessment of our three AI applications using Microsoft's Responsible AI Assessment framework. Key identified risks include: 1) data privacy concerns with our recommendation engine (measured via PII exposure rate), 2) potential for unfair outcomes in our HR screening tool (measured via demographic performance disparity), and 3) explainability gaps in our financial decision support system (measured via explanation satisfaction rating from users). Each risk has assigned metrics with quarterly measurement cycles and defined thresholds that trigger remediation actions. Our AI Ethics Committee reviews these measurements monthly, and we maintain a centralized AI Risk Dashboard. While our program is relatively new, we've completed baseline measurements for all identified risks and established improvement targets for the next 12 months.
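To show how "defined thresholds that trigger remediation actions" might work mechanically, here is a hedged Python sketch of a periodic measurement check over the three metrics named in this response; every value, threshold, and name is hypothetical:

    # Hypothetical latest measurements and remediation thresholds for the
    # three risks above; values and names are illustrative only.
    RISK_METRICS = {
        "pii_exposure_rate":                 (0.002, 0.001),
        "demographic_performance_disparity": (0.040, 0.050),
        "explanation_satisfaction_rating":   (0.810, 0.750),  # higher is better
    }

    def needs_remediation(name: str, value: float, threshold: float) -> bool:
        # For satisfaction-style metrics, falling *below* threshold is the breach.
        if name.endswith("rating"):
            return value < threshold
        return value > threshold

    for name, (value, threshold) in RISK_METRICS.items():
        status = "REMEDIATE" if needs_remediation(name, value, threshold) else "OK"
        print(f"{status}: {name} = {value} (threshold {threshold})")

A real program would feed these results into the AI Risk Dashboard and open remediation tickets; the essential contract is that each metric has a direction, a threshold, and a scheduled measurement cycle.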

Example Response 3

No, we have not yet formally identified and measured AI risks. While we recognize the importance of AI risk management, our AI implementation is still in its early stages. We currently have only one AI component in production: a third-party chatbot with limited functionality. We are in the process of developing an AI governance framework that will include risk assessment methodologies, but this work is scheduled for completion next quarter. In the interim, we're applying our general security risk assessment processes to AI components, though we acknowledge these aren't tailored to AI-specific risks. We plan to adopt the NIST AI Risk Management Framework once our AI footprint expands and would be happy to share our implementation roadmap.

Context

Tab: AI
Category: AI Policy

ResponseHub is the product I wish I had when I was a CTO

Previously, I was co-founder and CTO of Progression, a VC-backed HR-tech startup used by some of the biggest names in tech.

As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate, and arrived like London buses - three at a time!

I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

Neil Cameron
Founder, ResponseHub