AIPL-03

In the event of an incident, can your solution's AI features be disabled in a timely manner?

Explanation

This question is asking whether your AI-powered solution can be quickly disabled if a security incident occurs. In the context of security, an 'incident' could be a data breach, a model behaving unexpectedly (producing harmful outputs), or the discovery of a vulnerability in the AI system.

Why this matters: AI systems can potentially amplify security risks if compromised. For example, if an AI chatbot starts leaking sensitive data or a recommendation engine begins promoting harmful content due to an attack, organizations need a way to quickly 'pull the plug' to minimize damage. This is similar to having an emergency shutdown procedure for any critical system.

The question specifically asks for the timeframe needed to disable AI features, which helps the assessor understand if you have a well-defined process for emergency situations. A 'timely manner' generally means minutes to hours, not days.

To best answer this question:

1. Describe your specific incident response procedure for AI components
2. Explain the technical mechanisms for disabling AI features (e.g., API kill switches, feature flags)
3. Provide concrete timeframes for how quickly disabling can occur
4. Mention any testing or drills you conduct to ensure these procedures work
5. Explain how you would communicate with customers during such an event
6. Detail how you would safely re-enable features after addressing the incident
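The feature-flag kill switch mentioned above can be sketched in a few lines. This is a minimal illustration, not a production design: the class name, in-memory flag, and fallback function are all hypothetical, and a real deployment would back the flag with a shared store (e.g. a feature-flag service or config database) and audit logging.

```python
# Minimal sketch of a "break glass" kill switch for AI features.
# All names here are illustrative; a real system would persist the
# flag centrally so every service instance sees it within seconds.

import threading


class AIKillSwitch:
    """Central flag that lets operators disable AI features at once."""

    def __init__(self):
        self._lock = threading.Lock()
        self._ai_enabled = True

    def disable(self, reason: str) -> None:
        """'Break glass': turn off all AI processing immediately."""
        with self._lock:
            self._ai_enabled = False
            # In production, also emit an audit log entry with `reason`.

    def enable(self) -> None:
        """Re-enable AI after incident remediation and approval."""
        with self._lock:
            self._ai_enabled = True

    def ai_enabled(self) -> bool:
        with self._lock:
            return self._ai_enabled


def answer_query(switch: AIKillSwitch, query: str) -> str:
    """Route to the AI model only when the kill switch allows it."""
    if switch.ai_enabled():
        return f"[AI] model response to: {query}"  # placeholder for a model call
    # Graceful degradation: fall back to a rule-based answer.
    return f"[fallback] canned response to: {query}"


switch = AIKillSwitch()
print(answer_query(switch, "summarise this ticket"))   # AI path
switch.disable(reason="INC-123: model producing harmful output")
print(answer_query(switch, "summarise this ticket"))   # fallback path
```

The key property an assessor is looking for is exactly what this shape gives you: disabling AI is a single flag flip, not a redeploy, so the time-to-disable is minutes rather than days.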

Guidance

Looking for incident response procedure for shutting down and re-enabling model features due to a security event. Please provide the amount of time it would take to disable your solution's AI feature(s).

Example Responses

Example Response 1

Yes, our solution's AI features can be disabled within 5 minutes of incident detection. We maintain a centralized feature flag system that allows our security operations team to immediately disable AI processing components without affecting other system functionality. This is accomplished through our incident response platform, where authorized personnel can trigger pre-configured 'break glass' procedures that disable specific AI models or all AI functionality across the platform. We test this capability quarterly during our disaster recovery exercises. Once disabled, all AI-dependent functions gracefully degrade to rule-based alternatives or display appropriate maintenance notifications. Re-enabling requires both security and engineering approval following our incident remediation process, which typically takes 1-4 hours depending on the severity of the incident.

Example Response 2

Yes, our AI features can be disabled in a timely manner through multiple mechanisms. Our primary method is an emergency circuit breaker accessible to our 24/7 Security Operations Center that can disable all AI processing within 10 minutes of incident detection. Additionally, individual customers can disable AI features for their own instances through their admin console with immediate effect. Our architecture separates AI components from core functionality, allowing the base application to continue functioning with reduced capabilities when AI is disabled. We maintain runbooks for various AI-related incident scenarios and conduct monthly drills to ensure our response team can execute these procedures efficiently. Following an incident, we perform a thorough investigation and testing in our staging environment before re-enabling AI features, which typically occurs within 24 hours for minor incidents or 72 hours for major incidents.
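The layered controls this response describes, a SOC-operated circuit breaker on top of per-customer disablement, can be sketched as two independent checks. This is a hedged illustration with made-up names, not an implementation; the point is that AI runs only when neither control is off.

```python
# Illustrative sketch of layered AI disablement: a global circuit
# breaker (operated by the SOC) plus per-tenant flags (set by each
# customer's admin console). AI runs only if both checks pass.

class AIControls:
    def __init__(self):
        self.global_breaker_open = False   # SOC emergency circuit breaker
        self.tenant_ai_enabled = {}        # tenant_id -> bool (default: on)

    def trip_global_breaker(self) -> None:
        """SOC incident response: disable AI platform-wide."""
        self.global_breaker_open = True

    def set_tenant(self, tenant_id: str, enabled: bool) -> None:
        """Customer admin console: toggle AI for one tenant only."""
        self.tenant_ai_enabled[tenant_id] = enabled

    def is_ai_enabled(self, tenant_id: str) -> bool:
        if self.global_breaker_open:
            return False
        return self.tenant_ai_enabled.get(tenant_id, True)


controls = AIControls()
controls.set_tenant("acme", False)        # one customer opts out
print(controls.is_ai_enabled("acme"))     # False: tenant-level disable
print(controls.is_ai_enabled("globex"))   # True: unaffected tenant
controls.trip_global_breaker()            # platform-wide incident
print(controls.is_ai_enabled("globex"))   # False: breaker overrides all
```

Separating the two controls matters in a questionnaire answer: it shows both that the vendor can act platform-wide during an incident and that customers retain their own immediate off switch.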

Example Response 3

No, our current architecture does not allow for immediate disabling of AI features in response to an incident. Our AI capabilities are deeply integrated into the core application stack, and disabling them would require a full application restart and potential data migration, which could take 24-48 hours to complete safely. While we have incident response procedures for security events, they focus on containment and investigation rather than feature disablement. We recognize this as a limitation in our current design and are working to implement a more granular control system that would allow us to disable specific AI components without affecting the entire application. Our roadmap includes developing this capability within the next two quarters, after which we expect to be able to disable AI features within 30 minutes of incident detection.

Context

Tab: AI
Category: AI Policy

ResponseHub is the product I wish I had when I was a CTO

Previously I was co-founder and CTO of Progression, a VC backed HR-tech startup used by some of the biggest names in tech.

As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate and arrived like London buses - 3 at a time!

I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

Signature
Neil Cameron
Founder, ResponseHub