DPAI-03

Do you have agreements in place with third parties or subprocessors regarding the protection of customer data and use of AI?

Explanation

This question asks whether your organization has formal agreements with any third parties or subprocessors that specifically address how customer data is protected when used in AI systems. Because AI systems often require large amounts of data for training and operation, they raise specific privacy and security concerns about how that data is handled.

This question is asked in security assessments because:

1. Data Protection Responsibility: When you share customer data with third parties or subprocessors who may use AI systems, you remain responsible for ensuring that data is properly protected.

2. AI-Specific Risks: AI systems present unique privacy challenges, such as the potential for data to be inadvertently embedded in models, used for purposes beyond the original intent, or retained longer than necessary.

3. Regulatory Compliance: Many privacy regulations (like GDPR and CCPA) require organizations to have proper agreements with data processors, and AI usage adds complexity to these requirements.

4. Supply Chain Security: Your security posture is only as strong as your weakest link, which could be a third party using customer data in their AI systems.

To best answer this question, you should:

1. Identify all third parties and subprocessors that have access to customer data and might use AI.

2. Verify whether your agreements with these entities specifically address AI usage and data protection.

3. Highlight specific contractual terms related to data protection in AI contexts (data minimization, purpose limitation, retention policies, etc.).

4. Mention any additional safeguards you require from these parties when using AI with customer data.

5. Describe your monitoring and enforcement mechanisms for these agreements.

Example Responses

Example Response 1

Yes, we have comprehensive Data Processing Agreements (DPAs) with all third parties and subprocessors who may access customer data, with specific provisions addressing AI usage. These agreements include: (1) Explicit limitations on how customer data can be used for AI training and inference; (2) Prohibitions against using customer data to train general-purpose AI models without explicit consent; (3) Requirements for data minimization and purpose limitation when processing data with AI systems; (4) Mandatory security controls for AI systems processing customer data; (5) Rights to audit AI usage of customer data; and (6) Clear data deletion requirements, including verification that customer data is not retained in AI models after termination. We review these agreements annually and require all subprocessors to provide attestation of compliance quarterly.

Example Response 2

Yes, we maintain robust agreements with our third parties and subprocessors regarding customer data protection in AI contexts. Our Master Service Agreements include AI-specific addenda that: (1) Require all AI systems to be inventoried and risk-assessed before processing customer data; (2) Mandate that subprocessors implement technical safeguards to prevent model memorization of sensitive customer information; (3) Require transparency about what customer data is used for which AI purposes; (4) Establish clear ownership rights regarding AI models trained on customer data; and (5) Specify breach notification procedures specific to AI-related incidents. Additionally, we conduct technical validation of these controls through annual penetration testing and AI model evaluation to verify compliance with these contractual requirements.

Example Response 3

No, we currently do not have specific agreements in place with our third parties and subprocessors regarding AI usage of customer data. While our standard data processing agreements cover general data protection requirements, they do not contain provisions specifically addressing AI-related concerns such as model training limitations, inference usage, or prevention of data memorization in models. We recognize this as a gap in our security and privacy program and are currently working with our legal team to develop AI-specific contract addenda for all relevant vendors. In the interim, we have implemented a policy prohibiting the sharing of customer data with any third party that uses AI systems until appropriate contractual protections are in place. We expect to have updated agreements implemented within the next 90 days.

Context

Tab
Privacy
Category
Privacy and AI

ResponseHub is the product I wish I had when I was a CTO

Previously I was co-founder and CTO of Progression, a VC backed HR-tech startup used by some of the biggest names in tech.

As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate and arrived like London buses - 3 at a time!

I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

Signature
Neil Cameron
Founder, ResponseHub