AILM-03

Do any actions taken by your solution's LLM features or plugins require human intervention?

Explanation

This question is asking whether your LLM-based solution requires human approval or verification before taking certain actions, particularly those that might have security implications. The question focuses on the security control known as 'human-in-the-loop' oversight, which is increasingly important in AI systems. The assessor wants to know if your LLM can autonomously perform actions that might affect systems, data, or permissions without human verification.

The guidance specifically mentions 'permissions issues and unauthorized actions,' which indicates concern about LLMs that might have the ability to:

1. Access sensitive data without proper authorization
2. Make changes to systems or data
3. Execute commands or actions with potential security implications
4. Make decisions that should require human judgment

This question is being asked because autonomous AI actions represent a significant security risk. If an LLM can take actions without human oversight, it could potentially be manipulated through prompt injection or other techniques to perform unauthorized operations.

To best answer this question:

1. Clearly identify which features of your LLM solution can take actions autonomously and which require human approval
2. Describe your human verification workflows for sensitive operations
3. Explain any technical controls that prevent the LLM from taking unauthorized actions
4. Detail any risk assessment you've done regarding autonomous vs. human-approved actions
5. If you have a mix of approaches, explain how you determine which actions require human intervention

Guidance

Looking for human intervention prior to LLM feature actions to mitigate permissions issues and unauthorized actions.

Example Responses

Example Response 1

Yes, our LLM solution requires human intervention for several critical actions. Any action that involves modifying data, executing commands, or accessing sensitive information requires explicit human approval through our verification workflow. For example, when a user asks our LLM to update customer records, the system generates the proposed changes but places them in a review queue where a human operator must review and approve them before execution. Similarly, when the LLM needs to access restricted data sources to fulfill a request, it generates a formal access request that must be approved by an authorized human user. We maintain comprehensive logs of all human approvals, including the identity of the approver, timestamp, and the specific action approved. This human-in-the-loop approach is a core security principle in our architecture and helps prevent unauthorized actions that could result from prompt manipulation or other attack vectors.
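To make this kind of workflow concrete, the sketch below shows one plausible shape, in Python, for a review queue that holds LLM-proposed actions until a named human approves them and records the approval in an audit log. It is illustrative only: the class and function names (ProposedAction, ReviewQueue, propose, approve) are hypothetical and not taken from any particular product.

```python
# Minimal sketch of a human-in-the-loop review queue for LLM-proposed actions.
# All names here are hypothetical illustrations, not a specific product's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Dict, List, Optional
import uuid


@dataclass
class ProposedAction:
    description: str                      # e.g. "Update customer record 123"
    execute: Callable[[], None]           # deferred side effect, only run after approval
    status: str = "pending"
    approver: Optional[str] = None
    approved_at: Optional[datetime] = None
    id: str = field(default_factory=lambda: uuid.uuid4().hex)


class ReviewQueue:
    """Holds LLM-proposed actions until a human explicitly approves them."""

    def __init__(self) -> None:
        self._actions: Dict[str, ProposedAction] = {}
        self.audit_log: List[dict] = []

    def propose(self, action: ProposedAction) -> str:
        """Called by the LLM layer; nothing is executed at this point."""
        self._actions[action.id] = action
        return action.id

    def approve(self, action_id: str, approver: str) -> None:
        """Called from the human review UI; logs the approval, then executes."""
        action = self._actions[action_id]
        action.status = "approved"
        action.approver = approver
        action.approved_at = datetime.now(timezone.utc)
        self.audit_log.append({
            "action_id": action.id,
            "approver": approver,
            "timestamp": action.approved_at.isoformat(),
            "description": action.description,
        })
        action.execute()

    def reject(self, action_id: str, approver: str) -> None:
        self._actions[action_id].status = "rejected"
        self.audit_log.append({
            "action_id": action_id,
            "approver": approver,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": "rejected",
        })


# Example: the LLM proposes an update; a human operator approves it later.
queue = ReviewQueue()
proposal_id = queue.propose(ProposedAction(
    description="Update customer record 123",
    execute=lambda: print("record updated"),
))
queue.approve(proposal_id, approver="operator@example.com")
```

The key design point is that the LLM layer can only call propose; execution and audit logging happen exclusively in the approval path driven by a human.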

Example Response 2

Yes, our LLM implementation enforces human intervention through a tiered approval system based on action risk levels. Low-risk actions like generating text summaries or answering questions from public knowledge operate autonomously. Medium-risk actions such as sending notifications or creating draft documents require confirmation through a simple user approval prompt. High-risk actions (including any data modifications, API calls to external systems, financial transactions, or access to PII) require formal approval through our Human Authorization Service, which implements a dual-control mechanism requiring two separate human approvers for execution. Our system architecture physically separates the LLM's reasoning capabilities from execution permissions, creating an air gap that can only be bridged through cryptographically verified human approval tokens. We regularly audit these controls and conduct red team exercises to ensure the LLM cannot bypass human intervention requirements.
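The tiered model described above can be summarised in a short sketch: actions are classified into risk tiers, and each tier requires a different number of distinct human approvers before execution (zero for low risk, one for medium, two for high). The RiskTier enum and execute_with_approval function below are hypothetical names used purely for illustration.

```python
# Illustrative sketch of tiered, risk-based approval routing for LLM actions.
from enum import Enum
from typing import Callable, List


class RiskTier(Enum):
    LOW = "low"        # e.g. summarise text: runs autonomously
    MEDIUM = "medium"  # e.g. send a notification: one human confirmation
    HIGH = "high"      # e.g. modify data or call external APIs: two approvers


REQUIRED_APPROVALS = {RiskTier.LOW: 0, RiskTier.MEDIUM: 1, RiskTier.HIGH: 2}


def execute_with_approval(tier: RiskTier,
                          action: Callable[[], str],
                          approvals: List[str]) -> str:
    """Run the action only if enough distinct human approvers have signed off."""
    needed = REQUIRED_APPROVALS[tier]
    distinct_approvers = set(approvals)
    if len(distinct_approvers) < needed:
        raise PermissionError(
            f"{tier.value}-risk action needs {needed} approval(s), "
            f"got {len(distinct_approvers)}"
        )
    return action()


# A low-risk action runs with no approvals; a high-risk one needs two people.
print(execute_with_approval(RiskTier.LOW, lambda: "summary generated", []))
print(execute_with_approval(
    RiskTier.HIGH,
    lambda: "customer record updated",
    approvals=["alice@example.com", "bob@example.com"],
))
```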

Example Response 3

No, our LLM solution does not currently require human intervention before taking actions. Our system is designed to operate autonomously to maximize efficiency and response time. The LLM can directly query databases, call APIs, and execute workflows based on user requests without human verification steps. We mitigate security risks through other means, including strict role-based access controls on the backend services the LLM can access, comprehensive input validation, and continuous monitoring for unusual patterns. However, we recognize this represents a potential security gap in our implementation. We are currently developing a human approval workflow for high-risk actions that we plan to implement in the next quarter, and in the meantime, we've limited the scope of actions our LLM can perform to reduce potential security impact.
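One common way to implement the compensating control mentioned here (limiting the scope of actions an autonomous LLM can perform) is to route the model's tool calls through an explicit allow-list that fails closed. The following sketch assumes a hypothetical dispatch_tool_call function and ALLOWED_TOOLS registry; it is not a description of any specific vendor's implementation.

```python
# Hedged sketch of scope limiting: an autonomous LLM may only invoke tools on
# an explicit allow-list of low-impact, read-only operations.
from typing import Callable, Dict

# Only read-only, low-impact tools are registered; anything else is rejected.
ALLOWED_TOOLS: Dict[str, Callable[[str], str]] = {
    "search_kb": lambda query: f"knowledge-base results for: {query}",
    "get_order_status": lambda order_id: f"status for order {order_id}: shipped",
}


def dispatch_tool_call(tool_name: str, argument: str) -> str:
    """Execute an LLM-requested tool call only if it is on the allow-list."""
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        # Fail closed and surface the attempt to monitoring rather than executing.
        raise PermissionError(f"Tool '{tool_name}' is not permitted for the LLM")
    return tool(argument)


print(dispatch_tool_call("get_order_status", "A-1001"))    # allowed
# dispatch_tool_call("delete_customer", "123")             # would raise PermissionError
```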

Context

Tab: AI
Category: AI Large Language Model (LLM)

ResponseHub is the product I wish I had when I was a CTO

Previously I was co-founder and CTO of Progression, a VC-backed HR-tech startup used by some of the biggest names in tech.

As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate and arrived like London buses - 3 at a time!

I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

Neil Cameron
Founder, ResponseHub