Have you identified and measured AI risks?
Explanation
Guidance
Assessors are looking for documentation and policies showing how AI risks are identified and measured, such as a risk assessment methodology, an AI risk register, or alignment with a recognized framework like the NIST AI Risk Management Framework.
Example Responses
Example Response 1
Yes, we have implemented a comprehensive AI risk management framework based on the NIST AI RMF. Our AI governance committee conducts quarterly risk assessments for all production AI systems. We've identified and categorized risks including model drift, data poisoning vulnerabilities, algorithmic bias, and explainability challenges specific to each model. Each risk is measured using defined metrics - for example, we measure bias using statistical parity difference and equal opportunity difference across protected attributes, with thresholds set at <5% variance. For security risks, we conduct adversarial testing quarterly with documented attack vectors and success rates. All risk assessments are documented in our AI Risk Register, which tracks risk levels, mitigations, and trends over time. We can provide our AI Risk Assessment Methodology document and a redacted risk register as evidence.
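For concreteness, below is a minimal Python sketch of the two bias metrics this response cites (statistical parity difference and equal opportunity difference) together with the <5% threshold check. The toy data, group labels, and function names are illustrative assumptions, not the respondent's actual tooling.

```python
# Toy data (hypothetical): binary predictions, ground truth, and a protected
# attribute with two groups; all values are illustrative assumptions.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

def statistical_parity_difference(y_pred, groups, a, b):
    """P(y_pred = 1 | group a) - P(y_pred = 1 | group b)."""
    def selection_rate(g):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(preds) / len(preds)
    return selection_rate(a) - selection_rate(b)

def equal_opportunity_difference(y_true, y_pred, groups, a, b):
    """True-positive-rate gap between groups a and b (among y_true == 1)."""
    def true_positive_rate(g):
        preds = [p for t, p, grp in zip(y_true, y_pred, groups)
                 if grp == g and t == 1]
        return sum(preds) / len(preds)
    return true_positive_rate(a) - true_positive_rate(b)

THRESHOLD = 0.05  # the <5% variance threshold cited in the response

for name, value in [
    ("statistical parity difference",
     statistical_parity_difference(y_pred, groups, "A", "B")),
    ("equal opportunity difference",
     equal_opportunity_difference(y_true, y_pred, groups, "A", "B")),
]:
    status = "exceeds threshold" if abs(value) > THRESHOLD else "within threshold"
    print(f"{name}: {value:+.3f} ({status})")
```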
Example Response 2
Yes, our organization has established an AI Risk Management Program as of Q2 2023. We've conducted an initial risk assessment for our three AI applications using Microsoft's Responsible AI Assessment framework. Key identified risks include: 1) Data privacy concerns with our recommendation engine (measured via PII exposure rate), 2) Potential for unfair outcomes in our HR screening tool (measured via demographic performance disparity), and 3) Explainability gaps in our financial decision support system (measured via explanation satisfaction rating from users). Each risk has assigned metrics with quarterly measurement cycles and defined thresholds that trigger remediation actions. Our AI Ethics Committee reviews these measurements monthly, and we maintain a centralized AI Risk Dashboard. While our program is relatively new, we've completed baseline measurements for all identified risks and established improvement targets for the next 12 months.
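The metric-threshold-triggers-remediation pattern described in this response can be made concrete with a short sketch. The AIRiskEntry class, field names, metric names, and values below are illustrative assumptions, not an actual risk register schema.

```python
# A minimal sketch of a risk register entry whose metric threshold triggers
# remediation; all identifiers and values are hypothetical.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    risk_id: str
    system: str
    risk: str
    metric: str
    threshold: float           # a measurement above this triggers remediation
    latest_measurement: float

    def needs_remediation(self) -> bool:
        return self.latest_measurement > self.threshold

# Hypothetical register mirroring the three risks named in the response.
register = [
    AIRiskEntry("AIR-01", "recommendation engine", "data privacy",
                "pii_exposure_rate", 0.001, 0.0004),
    AIRiskEntry("AIR-02", "HR screening tool", "unfair outcomes",
                "demographic_performance_disparity", 0.05, 0.08),
    AIRiskEntry("AIR-03", "financial decision support", "explainability gaps",
                "explanation_satisfaction_rating_gap", 0.20, 0.12),
]

for entry in register:
    if entry.needs_remediation():
        print(f"{entry.risk_id}: {entry.metric} = {entry.latest_measurement} "
              f"exceeds threshold {entry.threshold}; remediation required")
```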
Example Response 3
No, we have not yet formally identified and measured AI risks. While we recognize the importance of AI risk management, our AI implementation is still in early stages. We currently have only one AI component in production, which is a third-party chatbot with limited functionality. We are in the process of developing an AI governance framework that will include risk assessment methodologies, but this work is scheduled for completion in the next quarter. In the interim, we're applying our general security risk assessment processes to AI components, though we acknowledge these aren't specifically tailored to AI-specific risks. We plan to adopt the NIST AI Risk Management Framework once our AI footprint expands and would be happy to share our implementation roadmap.
Context
- Tab: AI
- Category: AI Policy

