If sensitive data is introduced to your solution's AI model, can the data be removed from the AI model by request?
Explanation
Guidance
Looking for the ability to scrub sensitive institutional data from your solution's AI model.
Example Responses
Example Response 1
Yes, our solution has comprehensive data removal capabilities. When sensitive data is identified in our AI models, we implement a multi-step removal process: 1) We immediately pause the model's use of the identified data, 2) We remove the data from all training datasets, 3) We retrain the affected models without the sensitive data, and 4) We replace deployed models with the newly trained versions. This process typically takes 3-5 business days to complete. We provide verification documentation confirming the data has been removed from all model instances. Additionally, our architecture maintains data lineage tracking that allows us to identify all models potentially affected by specific data points, ensuring comprehensive removal.
Example Response 2
Yes, our AI solution supports data removal requests through our 'Data Governance Portal.' When a removal request is received, our system identifies all instances where the sensitive data was used. For foundation models, we use machine unlearning techniques to selectively remove the influence of the specified data without requiring full model retraining. For custom models built on customer data, we completely retrain these models excluding the identified data. We maintain comprehensive logs of all data used in training and can provide attestation that the requested data has been removed. The removal process is typically completed within 7 business days, and we implement technical controls to prevent the reintroduction of removed data in future training cycles.
Example Response 3
No, our current AI architecture does not support the selective removal of specific data points once they have been incorporated into our models. Our models are trained on aggregated datasets, and the training process transforms individual data points in ways that make them inseparable from the overall model parameters. While we implement strict data filtering before training to prevent sensitive data from entering our models, if sensitive data were to be introduced, a complete model replacement would be required rather than selective data removal. We are currently developing a new architecture with fine-grained data lineage tracking that will support selective data removal in future versions, expected to be available in Q3 of next year.
Context
- Tab
- AI
- Category
- AI Data Security

