Are your AI developers' policies, processes, procedures, and practices across the organization related to mapping, measuring, and managing AI risks conspicuously posted, unambiguous, and implemented effectively?
Guidance
This question looks for evidence of documented responsible AI development policies and practices that are clearly communicated and effectively implemented across the organization.
Example Responses
Example Response 1
Yes, our organization has comprehensive AI risk management policies that are prominently displayed on our internal developer portal and incorporated into our mandatory AI development training. Our Responsible AI Framework follows NIST AI Risk Management Framework principles and includes specific procedures for: (1) Mapping risks through mandatory pre-development risk assessments that identify potential ethical, privacy, and security concerns; (2) Measuring risks through our AI Impact Assessment tool that quantifies potential harms across fairness, explainability, robustness, and privacy dimensions; and (3) Managing risks through required mitigation strategies tied to risk levels. Compliance is enforced through our development pipeline, which requires sign-off on completed risk assessments before models can be deployed to production. We conduct quarterly audits of AI systems and maintain a centralized dashboard showing compliance metrics across all AI projects. All developers must complete certification in our Responsible AI practices before working on AI systems.
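A minimal sketch of how such a pre-deployment sign-off gate could be enforced in code, assuming a hypothetical RiskAssessment record and gate function (these names and fields are illustrative, not drawn from any actual vendor tooling):

    from dataclasses import dataclass

    @dataclass
    class RiskAssessment:
        model_id: str
        completed: bool            # mapping and measuring steps finished
        signed_off_by: str | None  # approver identity, None if unsigned

    class DeploymentBlocked(Exception):
        """Raised when a model fails the risk-assessment gate."""

    def check_deployment_gate(assessment: RiskAssessment) -> None:
        # Refuse promotion to production unless the assessment is both
        # complete and signed off, mirroring the sign-off requirement above.
        if not assessment.completed:
            raise DeploymentBlocked(f"{assessment.model_id}: assessment incomplete")
        if assessment.signed_off_by is None:
            raise DeploymentBlocked(f"{assessment.model_id}: no sign-off recorded")

    # Example: this call raises DeploymentBlocked because no one signed off.
    check_deployment_gate(RiskAssessment("fraud-model-v2", True, None))

In practice a check like this would run as a required step in the CI/CD pipeline, so a failed gate stops the release rather than merely logging a warning.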
Example Response 2
Yes, our AI development policies and procedures are implemented through our Ethical AI Governance Program. This program is prominently featured on our company intranet with direct links from all AI development environments. Our risk management approach includes: (1) Mapping: We maintain an AI risk register that categorizes potential risks by type (bias, privacy, security, etc.) and requires developers to document which risks apply to their specific use case; (2) Measuring: Our AI Scorecard system evaluates each model against 15 risk dimensions with specific metrics for each; (3) Managing: We implement a tiered approval process based on risk scores, with high-risk AI applications requiring review by our AI Ethics Committee. We've integrated these processes into our SDLC through automated checkpoints that prevent progression without required documentation. Our AI Governance team conducts monthly reviews of all active AI projects and publishes compliance reports to executive leadership. We also maintain a public-facing Responsible AI Policy on our corporate website that outlines our commitments to customers.
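A sketch of how a tiered approval process keyed to risk scores might be wired up; the thresholds, risk dimensions, and review tiers below are illustrative assumptions, not the actual Scorecard described in the response:

    # Tiers checked from highest threshold down; values are assumptions.
    SCORE_TIERS = [
        (0.7, "high", "AI Ethics Committee review required"),
        (0.4, "medium", "AI Governance team review required"),
        (0.0, "low", "standard peer review"),
    ]

    def aggregate_score(dimension_scores: dict[str, float]) -> float:
        # Unweighted mean across dimensions (e.g., bias, privacy, security);
        # a real scorecard might weight dimensions differently.
        return sum(dimension_scores.values()) / len(dimension_scores)

    def required_approval(dimension_scores: dict[str, float]) -> tuple[str, str]:
        score = aggregate_score(dimension_scores)
        for threshold, tier, action in SCORE_TIERS:
            if score >= threshold:
                return tier, action
        return "low", "standard peer review"

    # Example: (0.9 + 0.6 + 0.7) / 3 = 0.73, so this routes to the committee.
    print(required_approval({"bias": 0.9, "privacy": 0.6, "security": 0.7}))

Encoding the routing rules in one table like this keeps the escalation policy auditable: a governance reviewer can confirm the thresholds without reading pipeline logic.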
Example Response 3
No, while we have some general guidelines for AI development, we don't currently have a formal, comprehensive framework specifically for mapping, measuring, and managing AI risks. Our development team follows industry best practices and addresses risks on a case-by-case basis, but these practices aren't documented in a centralized policy. We recognize this as a gap in our governance structure and are actively working to develop more structured policies. We've recently formed an AI Governance Committee that is drafting formal policies based on the NIST AI Risk Management Framework, with expected completion in the next quarter. In the interim, we're conducting risk assessments for our highest-priority AI systems and implementing additional oversight for these projects. We plan to make these policies prominent in our development environments and conduct training once they're finalized.
Context
- Tab: AI
- Category: AI Policy

