Do you have any decision-making processes that are completely automated (i.e., there is no human involvement)?
Explanation
Guidance
Examples of such automated decisions could include automatically denying or approving user access requests, flagging or blocking transactions based on risk scores, or AI-driven decisions that affect user outcomes (e.g., eligibility, grading, pricing).
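To make the distinction concrete, here is a minimal, purely illustrative sketch in Python; the function names, risk scores, and threshold are assumptions for illustration and are not drawn from any particular system. It contrasts a completely automated decision (the outcome takes effect with no human step) with a human-in-the-loop flow that only flags items for review.

```python
# Illustrative only: names, scores, and the threshold are hypothetical,
# not taken from any specific product or the example responses below.

RISK_THRESHOLD = 0.9


def fully_automated_decision(risk_score: float) -> str:
    """A completely automated decision: the outcome is applied with no human step."""
    if risk_score >= RISK_THRESHOLD:
        return "blocked"  # takes effect immediately, no review
    return "allowed"


def human_in_the_loop_decision(risk_score: float) -> str:
    """Automation only flags; a person makes the final call, so this is not
    'completely automated' in the sense of the question above."""
    if risk_score >= RISK_THRESHOLD:
        return "flagged_for_review"  # queued for an analyst to approve or reject
    return "allowed"
```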
Example Responses
Example Response 1
Yes, our organization does employ some fully automated decision-making processes. Our cloud security platform automatically blocks IP addresses that exhibit suspicious behavior patterns (such as repeated failed login attempts or scanning activities) without human intervention. Additionally, our customer onboarding system automatically approves basic tier access for users who meet predefined criteria based on email domain verification and payment processing. To mitigate risks associated with these automated systems, we have implemented the following safeguards: 1) All automated blocking decisions can be appealed through our support portal, 2) We conduct monthly audits of automated decisions to identify potential patterns of false positives, 3) Our privacy policy explicitly discloses these automated processes to users, and 4) We maintain detailed logs of all automated decisions for review and compliance purposes.
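As a hedged illustration of the logging safeguard mentioned in this response, the sketch below shows one way an automated blocking decision could be recorded for later audit and appeal. The logger name, field names, and appeal channel are hypothetical and do not describe any specific platform.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
# Hypothetical audit logger; the record fields below are illustrative only.
audit_log = logging.getLogger("automated_decisions")


def record_automated_block(ip_address: str, reason: str) -> None:
    """Log an automated blocking decision so it can later be audited or appealed."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": "block_ip",
        "subject": ip_address,
        "reason": reason,
        "human_involved": False,  # marks the decision as fully automated
        "appeal_channel": "support_portal",
    }
    audit_log.info(json.dumps(entry))


record_automated_block("203.0.113.7", "repeated failed login attempts")
```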
Example Response 2
No, our organization does not have any completely automated decision-making processes. While we utilize automation in various aspects of our operations, all significant decisions that affect users or data subjects include human review and approval steps. For example, our access management system flags access requests for approval but requires human security personnel to review and approve them. Similarly, our threat detection system identifies suspicious activities but escalates them to our security operations team for investigation and response rather than taking automated blocking actions. We've deliberately designed our systems this way to ensure appropriate oversight, reduce false positives, and maintain compliance with privacy regulations that govern automated decision-making.
Example Response 3
Our organization currently has one automated decision-making process that operates without human intervention. Our fraud detection system automatically declines credit card transactions that trigger multiple high-risk indicators based on our proprietary algorithm. We recognize the compliance implications of this system and therefore: 1) Cannot provide customers with the specific scoring criteria as it would compromise our fraud prevention capabilities, 2) Do not currently have a formal appeals process for declined transactions, and 3) Have not conducted a formal bias assessment of the algorithm. We are aware that this approach has limitations from a privacy compliance perspective and are working to implement additional controls, including a streamlined appeals process and regular algorithm audits, which we expect to have in place by Q3 of this year.
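As a purely illustrative sketch of this kind of rule-based auto-decline, the snippet below counts high-risk indicators and declines when several fire at once. The indicator names and the cut-off are invented for illustration and do not represent the proprietary algorithm referenced in the response.

```python
# Hypothetical rule: decline when multiple high-risk indicators fire. The
# indicator names and cut-off are invented; no real scoring criteria are shown.
HIGH_RISK_INDICATORS = frozenset(
    {"mismatched_billing_country", "velocity_spike", "flagged_device"}
)
DECLINE_THRESHOLD = 2  # "multiple" indicators


def automated_decline(triggered_indicators: set[str]) -> bool:
    """Return True when the transaction is declined with no human review."""
    hits = len(HIGH_RISK_INDICATORS & triggered_indicators)
    return hits >= DECLINE_THRESHOLD


print(automated_decline({"velocity_spike", "flagged_device"}))  # True -> auto-declined
```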
Context
- Tab: Privacy
- Category: Privacy Policies and Procedures

