AIGN-05

Does your solution support business rules to protect sensitive data from being ingested by the AI model?

Explanation

This question is asking whether your AI solution has mechanisms to prevent sensitive data (such as personally identifiable information, financial data, or health information) from being fed into the AI model during training or inference.

Why it matters: AI models can inadvertently memorize training data, and if sensitive information is included, this could lead to data leakage or privacy violations. Additionally, if users can input sensitive data during inference, the model might process, store, or even output this data in ways that violate privacy or compliance requirements.

'Business rules' in this context refers to programmatic controls or policies that filter, mask, or block sensitive data before it reaches the AI model. These could include:

1. Input validation and sanitization
2. Pattern matching to detect and redact sensitive information
3. Data classification systems that prevent certain data types from being processed
4. Access controls that limit what data sources the AI can use
5. Prediction limiters that prevent the model from making inferences about sensitive attributes

The assessor wants to know if you've implemented technical safeguards to protect sensitive data, not just policies or procedures. They're looking for specific mechanisms that enforce data protection at the technical level. When answering, describe your specific technical controls, how they're implemented, how they're tested and verified, and any limitations they might have.
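
To make the first two controls concrete, here is a minimal sketch in Python of a regex-based pre-processing filter that runs before any model call. The patterns, the function name, and the commented-out model call are illustrative assumptions for this example, not part of any specific product; production systems typically pair regex with trained entity-recognition models for broader coverage.

```python
import re

# Illustrative patterns only (assumed for this sketch); real deployments
# combine regex with NLP-based entity recognition for better coverage.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_sensitive(text: str) -> tuple[str, list[str]]:
    """Replace matched sensitive spans with typed placeholders.

    Returns the redacted text plus the detected data types so the
    caller can log the event or reject the request outright.
    """
    detected = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        text, count = pattern.subn(f"[REDACTED_{label.upper()}]", text)
        if count:
            detected.append(label)
    return text, detected

prompt = "Contact John at john.doe@example.com, SSN 123-45-6789."
clean_prompt, findings = redact_sensitive(prompt)
if findings:
    print(f"Detected and redacted: {findings}")  # input-validation hook
# model.generate(clean_prompt)  # hypothetical model call, gated on the filter
print(clean_prompt)
```

The key design point for an assessor is that the filter sits in front of the model as an enforcement layer, so sensitive data is stripped or blocked regardless of what the user submits.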

Guidance

Looking for business rules, model assertions, or prediction limiters to mitigate exposure of sensitive data through model inputs.

Example Responses

Example Response 1

Yes, our AI solution implements multiple layers of business rules to protect sensitive data from being ingested by our models. We employ a three-tiered approach: 1) Pre-processing filters that automatically detect and redact PII, PHI, and financial information using pattern matching and NLP techniques before data reaches our training pipeline; 2) Model input validation that rejects API calls containing detected sensitive information, with configurable sensitivity thresholds; and 3) Custom entity recognition models specifically trained to identify industry-specific sensitive data types. All identified sensitive data is logged, redacted, and flagged for review. Our system also supports customer-defined data dictionaries for organization-specific sensitive terms. These controls undergo regular testing through our red team exercises and annual third-party penetration testing.

Example Response 2

Yes, our platform incorporates comprehensive business rules to prevent sensitive data ingestion. We've implemented a rule-based engine that scans all inputs against configurable policies before they reach our AI models. Customers can define their own sensitivity rules through our administrative console, including regex patterns, keyword lists, and data classification tags that integrate with their existing DLP systems. When sensitive data is detected, the system can be configured to either reject the input entirely, redact the sensitive portions, or replace them with synthetic alternatives. Additionally, our solution maintains an audit log of all attempted sensitive data submissions for compliance reporting. We validate these controls through quarterly assessments and have obtained ISO 27701 certification for our privacy information management system.
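
A response like this can be backed by something as simple as a policy-driven rule engine. The sketch below is a hypothetical illustration of the reject/redact/replace actions described above; the class, rule definitions, and patterns are assumptions for this example, not any real product's API.

```python
import re
from dataclasses import dataclass

# Hypothetical policy shape; real platforms expose rules through an
# admin console or DLP integration rather than inline code.
@dataclass
class SensitivityRule:
    name: str
    pattern: re.Pattern
    action: str  # "reject", "redact", or "replace"
    replacement: str = "[SYNTHETIC]"

def apply_rules(text: str, rules: list[SensitivityRule]) -> str:
    """Enforce each rule in order; a 'reject' match blocks the input.

    A real system would also write an audit log entry for each match.
    """
    for rule in rules:
        if not rule.pattern.search(text):
            continue
        if rule.action == "reject":
            raise ValueError(f"Input rejected by rule '{rule.name}'")
        placeholder = (
            f"[REDACTED_{rule.name.upper()}]" if rule.action == "redact"
            else rule.replacement
        )
        text = rule.pattern.sub(placeholder, text)
    return text

rules = [
    SensitivityRule("api_key", re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "reject"),
    SensitivityRule("phone", re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"), "redact"),
]
print(apply_rules("Call 555-123-4567 to confirm.", rules))
# -> "Call [REDACTED_PHONE] to confirm."
```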

Example Response 3

No, our current AI solution does not have specific business rules to protect sensitive data from being ingested by the model. While we have general security controls and data handling policies in place, we do not currently employ automated technical controls that specifically filter or block sensitive data before it reaches our AI models. We instead rely on administrative controls, including user training and clear usage policies that prohibit uploading sensitive information. We recognize this as a limitation in our current implementation and have included the development of technical controls for sensitive data detection and prevention in our product roadmap for the next quarter. In the meantime, we recommend customers implement their own data filtering mechanisms before submitting data to our API.

Context

Tab: AI
Category: General AI Questions
