Do any actions taken by your solution's LLM features or plugins require human intervention?
Explanation
Guidance
This question looks for human intervention (human-in-the-loop approval) prior to actions taken by LLM features or plugins, in order to mitigate permission issues and unauthorized actions.
Example Responses
Example Response 1
Yes, our LLM solution requires human intervention for several critical actions. Any action that involves modifying data, executing commands, or accessing sensitive information requires explicit human approval through our verification workflow. For example, when a user asks our LLM to update customer records, the system generates the proposed changes but places them in a review queue, where a human operator must review and approve them before execution. Similarly, when the LLM needs to access restricted data sources to fulfill a request, it generates a formal access request that must be approved by an authorized human user. We maintain comprehensive logs of all human approvals, including the identity of the approver, the timestamp, and the specific action approved. This human-in-the-loop approach is a core security principle in our architecture and helps prevent unauthorized actions that could result from prompt manipulation or other attack vectors.
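A minimal sketch of how a review-queue pattern like the one described above might be wired up; the class names, fields, and audit-log format are illustrative assumptions, not the vendor's actual implementation.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProposedAction:
    """An LLM-proposed change held until a human approves it (hypothetical)."""
    description: str
    payload: dict
    action_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    approved: bool = False
    approver: str | None = None
    approved_at: datetime | None = None

class ReviewQueue:
    """Holds proposed actions; nothing executes without explicit human approval."""
    def __init__(self):
        self._pending: dict[str, ProposedAction] = {}
        self.audit_log: list[dict] = []

    def submit(self, action: ProposedAction) -> str:
        self._pending[action.action_id] = action
        return action.action_id

    def approve(self, action_id: str, approver: str) -> ProposedAction:
        action = self._pending.pop(action_id)
        action.approved = True
        action.approver = approver
        action.approved_at = datetime.now(timezone.utc)
        # Record who approved what, and when, for later audit.
        self.audit_log.append({
            "action_id": action.action_id,
            "approver": approver,
            "timestamp": action.approved_at.isoformat(),
            "description": action.description,
        })
        return action

def execute(action: ProposedAction) -> None:
    # Execution is gated on the approval flag set by a human reviewer.
    if not action.approved:
        raise PermissionError("Action has not been approved by a human reviewer")
    print(f"Executing: {action.description}")
```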
Example Response 2
Yes, our LLM implementation enforces human intervention through a tiered approval system based on action risk levels. Low-risk actions, such as generating text summaries or answering questions from public knowledge, operate autonomously. Medium-risk actions, such as sending notifications or creating draft documents, require confirmation through a simple user approval prompt. High-risk actions, including any data modifications, API calls to external systems, financial transactions, or access to PII, require formal approval through our Human Authorization Service, which implements a dual-control mechanism requiring two separate human approvers before execution. Our system architecture physically separates the LLM's reasoning capabilities from execution permissions, creating an air gap that can only be bridged through cryptographically verified human approval tokens. We regularly audit these controls and conduct red team exercises to ensure the LLM cannot bypass human intervention requirements.
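A minimal sketch of a risk-tier gate under the assumptions in this response; the tier names and the two-approver rule for high-risk actions mirror the description, but the action names, risk mapping, and function signatures are hypothetical, not the actual Human Authorization Service API.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g. summaries, answers from public knowledge
    MEDIUM = "medium"  # e.g. notifications, draft documents
    HIGH = "high"      # e.g. data modification, external API calls, PII access

# Hypothetical mapping from action types to risk tiers.
ACTION_RISK = {
    "summarize_text": RiskTier.LOW,
    "send_notification": RiskTier.MEDIUM,
    "update_record": RiskTier.HIGH,
}

def required_approvals(tier: RiskTier) -> int:
    """Return how many distinct human approvers an action needs."""
    if tier is RiskTier.LOW:
        return 0   # autonomous
    if tier is RiskTier.MEDIUM:
        return 1   # single user confirmation
    return 2       # dual control for high-risk actions

def may_execute(action_type: str, approvers: set[str]) -> bool:
    """Gate execution on the number of distinct human approvals collected."""
    tier = ACTION_RISK.get(action_type, RiskTier.HIGH)  # default to most restrictive
    return len(approvers) >= required_approvals(tier)

# Example: a high-risk update needs two distinct approvers.
assert not may_execute("update_record", {"alice"})
assert may_execute("update_record", {"alice", "bob"})
```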
Example Response 3
No, our LLM solution does not currently require human intervention before taking actions. Our system is designed to operate autonomously to maximize efficiency and response time. The LLM can directly query databases, call APIs, and execute workflows based on user requests without human verification steps. We mitigate security risks through other means, including strict role-based access controls on the backend services the LLM can access, comprehensive input validation, and continuous monitoring for unusual patterns. However, we recognize this represents a potential security gap in our implementation. We are currently developing a human approval workflow for high-risk actions that we plan to implement next quarter, and in the meantime we have limited the scope of actions our LLM can perform to reduce the potential security impact.
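A minimal sketch of the kind of role-scoped tool allowlist a compensating control like this might rely on; the role names and tool names below are hypothetical and only illustrate restricting what an autonomous LLM can invoke.

```python
# Hypothetical allowlist restricting which backend tools the LLM may call
# for a given service role, enforced outside the model itself.
ROLE_ALLOWED_TOOLS = {
    "support_assistant": {"search_kb", "create_draft_reply"},
    "reporting_assistant": {"run_readonly_query"},
}

def authorize_tool_call(role: str, tool_name: str) -> None:
    """Reject any tool call not explicitly allowed for the LLM's role."""
    allowed = ROLE_ALLOWED_TOOLS.get(role, set())
    if tool_name not in allowed:
        raise PermissionError(f"Role {role!r} may not call tool {tool_name!r}")

# A write-capable tool is rejected for the read-only reporting role.
try:
    authorize_tool_call("reporting_assistant", "update_record")
except PermissionError as exc:
    print(exc)
```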
Context
- Tab: AI
- Category: AI Large Language Model (LLM)

