AISC-01

If sensitive data is introduced to your solution's AI model, can the data be removed from the AI model by request?

Explanation

This question is asking whether your AI solution has the capability to remove specific sensitive data from its training or operational models if requested to do so. This is important because:

1. Data privacy regulations like GDPR and CCPA include 'right to be forgotten' provisions that may require removing an individual's data upon request.
2. If sensitive institutional data (like proprietary information, PII, or protected health information) inadvertently gets incorporated into an AI model, the institution needs a way to remove that data to prevent potential data leakage or misuse.
3. AI models can memorize training data, especially unique or repeated information, which creates risk if that data is sensitive.

The question is assessing whether your AI solution has technical safeguards to honor data removal requests and protect against data persistence in models. This capability is increasingly important as AI regulations evolve and organizations become more concerned about how their data is used in AI systems.

When answering, you should be specific about:

- Whether data removal is technically possible in your solution
- The process and timeframe for removing data
- Any limitations to data removal capabilities
- How you verify the data has been completely removed
- Whether removal affects only future processing or can also address data already incorporated into models
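One way to make these answer points concrete is to model a removal request explicitly. The sketch below is purely illustrative (every name in it is hypothetical, and the 30-day default only reflects GDPR's usual one-month response window), but it shows how scope, timeframe, and verification status can be captured as first-class fields:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum


class RemovalScope(Enum):
    FUTURE_ONLY = "future_only"    # data excluded from future training runs only
    RETROACTIVE = "retroactive"    # data also scrubbed from existing models


@dataclass
class ErasureRequest:
    """One 'right to be forgotten' request against the AI system."""
    record_ids: list[str]          # identifiers of the sensitive records
    received: date
    scope: RemovalScope
    sla_days: int = 30             # GDPR generally allows up to one month
    verified: bool = False         # set once removal has been confirmed

    def due_date(self) -> date:
        return self.received + timedelta(days=self.sla_days)


# Example: a retroactive request that must also purge trained models.
req = ErasureRequest(["user-4821"], date.today(), RemovalScope.RETROACTIVE)
print(req.due_date(), req.scope.value)
```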

Guidance

Looking for the ability to scrub sensitive institutional data from your solution's AI model.

Example Responses

Example Response 1

Yes, our solution has comprehensive data removal capabilities. When sensitive data is identified in our AI models, we implement a multi-step removal process: 1) We immediately pause the model's use of the identified data, 2) We remove the data from all training datasets, 3) We retrain the affected models without the sensitive data, and 4) We replace deployed models with the newly trained versions. This process typically takes 3-5 business days to complete. We provide verification documentation confirming the data has been removed from all model instances. Additionally, our architecture maintains data lineage tracking that allows us to identify all models potentially affected by specific data points, ensuring comprehensive removal.
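The data lineage tracking this response relies on can be pictured with a small sketch. This is a hedged illustration, not any vendor's actual system; LineageRegistry and the record IDs are hypothetical. The idea is simply to keep a registry mapping each training record to the model versions it influenced, so a removal request can enumerate every model that needs retraining:

```python
from collections import defaultdict


class LineageRegistry:
    """Maps training-record IDs to the model versions trained on them."""

    def __init__(self) -> None:
        self._record_to_models: dict[str, set[str]] = defaultdict(set)

    def log_training_run(self, model_version: str, record_ids: list[str]) -> None:
        # Called once per training job, before the model is deployed.
        for rid in record_ids:
            self._record_to_models[rid].add(model_version)

    def affected_models(self, record_ids: list[str]) -> set[str]:
        # Every model that must be retrained and redeployed for this request.
        return set().union(*(self._record_to_models[rid] for rid in record_ids))


registry = LineageRegistry()
registry.log_training_run("model-v1", ["rec-1", "rec-2"])
registry.log_training_run("model-v2", ["rec-2", "rec-3"])
print(registry.affected_models(["rec-2"]))  # {'model-v1', 'model-v2'}
```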

Example Response 2

Yes, our AI solution supports data removal requests through our 'Data Governance Portal.' When a removal request is received, our system identifies all instances where the sensitive data was used. For foundation models, we use machine unlearning techniques to selectively remove the influence of the specified data without requiring full model retraining. For custom models built on customer data, we completely retrain these models excluding the identified data. We maintain comprehensive logs of all data used in training and can provide attestation that the requested data has been removed. The removal process is typically completed within 7 business days, and we implement technical controls to prevent the reintroduction of removed data in future training cycles.
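The closing claim here, technical controls to prevent reintroduction of removed data, is often implemented as a persistent tombstone list that every training pipeline must consult. A minimal sketch under that assumption (all names hypothetical, not this vendor's code):

```python
# Persistent set of record IDs that must never re-enter training.
# In practice this would be loaded from durable, access-controlled storage.
REMOVED_IDS: set[str] = {"rec-2", "rec-7"}


def filter_training_batch(batch: list[dict]) -> list[dict]:
    """Drop any record whose ID has been tombstoned by a removal request."""
    clean = [row for row in batch if row["id"] not in REMOVED_IDS]
    dropped = len(batch) - len(clean)
    if dropped:
        # Record the exclusion for the attestation trail mentioned above.
        print(f"excluded {dropped} tombstoned record(s) from this training cycle")
    return clean


batch = [{"id": "rec-1"}, {"id": "rec-2"}, {"id": "rec-3"}]
print(filter_training_batch(batch))  # rec-2 is excluded
```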

Example Response 3

No, our current AI architecture does not support the selective removal of specific data points once they have been incorporated into our models. Our models are trained on aggregated datasets, and the training process transforms individual data points in ways that make them inseparable from the overall model parameters. While we implement strict data filtering before training to prevent sensitive data from entering our models, if sensitive data were to be introduced, a complete model replacement would be required rather than selective data removal. We are currently developing a new architecture with fine-grained data lineage tracking that will support selective data removal in future versions, expected to be available in Q3 of next year.
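Even when selective removal is impossible, the pre-training filtering this response leans on is straightforward to illustrate. Below is a rough sketch of a pattern-based screen for obvious PII, assuming a simple regex approach; real deployments typically combine pattern matching with trained classifiers and allow-lists:

```python
import re

# Illustrative patterns only; production filters are far more extensive.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-style numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),      # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-like digit runs
]


def looks_sensitive(text: str) -> bool:
    """Return True if the text matches any known PII pattern."""
    return any(p.search(text) for p in PII_PATTERNS)


docs = ["quarterly revenue grew 12%", "contact alice@example.com for access"]
training_set = [d for d in docs if not looks_sensitive(d)]
print(training_set)  # the email-bearing document is filtered out
```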

Context

Tab: AI
Category: AI Data Security

ResponseHub is the product I wish I had when I was a CTO

Previously I was co-founder and CTO of Progression, a VC backed HR-tech startup used by some of the biggest names in tech.

As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate, and arrived like London buses - 3 at a time!

I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

Signature
Neil Cameron
Founder, ResponseHub