AILM-04

Do you limit multiple LLM model plugins being called as part of a single input?

Explanation

This question is asking whether your organization restricts the number of different LLM plugins that can be called within a single user input or request. LLM plugins are extensions that allow language models to access external tools, data sources, or perform specific functions (like searching the web, accessing databases, or calling APIs).

Why this matters for security:

1. Data Leakage Risk: When multiple plugins are called in a single request, sensitive data from one plugin might inadvertently be passed to another plugin, potentially exposing that data to unauthorized systems or users.

2. Privilege Escalation: A chain of plugin calls could potentially be manipulated to gain higher privileges than intended. For example, if one plugin has access to sensitive data and another has the ability to communicate externally, chaining them could create an unintended data exfiltration path.

3. Attack Surface: Each additional plugin increases the attack surface and the complexity of the security controls needed.

The guidance specifically mentions limiting plugins per request to prevent these security issues. A good answer would explain whether and how you limit plugin usage, what the maximum number of plugins allowed per request is, and how this limitation is enforced technically.
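As an illustration of the kind of control this question is probing, the sketch below shows a per-request plugin-call cap. It is a minimal example only; the names (PluginCallLimiter, PluginLimitExceeded, MAX_PLUGINS_PER_REQUEST) are invented for illustration and do not refer to any specific product or framework.

```python
# Illustrative sketch of a per-request plugin-call cap.
# All identifiers here are hypothetical.

MAX_PLUGINS_PER_REQUEST = 2


class PluginLimitExceeded(Exception):
    """Raised when a request tries to invoke more plugins than the cap allows."""


class PluginCallLimiter:
    def __init__(self, max_plugins: int = MAX_PLUGINS_PER_REQUEST):
        self.max_plugins = max_plugins
        self._plugins_by_request: dict[str, set[str]] = {}

    def authorize(self, request_id: str, plugin_name: str) -> None:
        """Record a plugin call, rejecting it if it would exceed the per-request cap."""
        used = self._plugins_by_request.setdefault(request_id, set())
        if plugin_name not in used and len(used) >= self.max_plugins:
            raise PluginLimitExceeded(
                f"Request {request_id} has already called {sorted(used)}; "
                f"adding '{plugin_name}' would exceed the limit of {self.max_plugins}."
            )
        used.add(plugin_name)


limiter = PluginCallLimiter()
limiter.authorize("req-123", "web_search")
limiter.authorize("req-123", "calendar")
# A third distinct plugin on the same request would raise PluginLimitExceeded.
```

The key point a reviewer looks for is that the cap is enforced in code before any plugin executes, not merely stated as policy.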

Guidance

Looking for a limit on the number of plugins called per request, to help reduce the risk of data leakage and privilege escalation.

Example Responses

Example Response 1

Yes, our LLM implementation strictly limits plugin usage to a maximum of two plugins per user request. This limitation is enforced at the API gateway level before requests reach our LLM service. Each plugin call is logged and monitored through our security information and event management (SIEM) system. Additionally, we've implemented a plugin authorization framework that evaluates the risk of specific plugin combinations and blocks high-risk combinations entirely, even if they fall within the numerical limit. This approach significantly reduces the risk of data leakage between plugins and prevents potential privilege escalation chains.
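A gateway-level control like the one this response describes could be enforced with logic along the following lines. This is an illustrative sketch only; the plugin names and the HIGH_RISK_COMBINATIONS list are hypothetical.

```python
# Illustrative gateway-side validation: cap plugins per request and block
# known high-risk plugin combinations. Names are hypothetical examples.

MAX_PLUGINS_PER_REQUEST = 2

HIGH_RISK_COMBINATIONS = {
    frozenset({"customer_database", "outbound_email"}),  # data store + external comms
    frozenset({"internal_search", "webhook_sender"}),
}


def validate_plugin_request(requested_plugins: list[str]) -> None:
    """Reject requests that exceed the plugin cap or use a blocked combination."""
    unique = set(requested_plugins)
    if len(unique) > MAX_PLUGINS_PER_REQUEST:
        raise ValueError(
            f"{len(unique)} plugins requested; maximum is {MAX_PLUGINS_PER_REQUEST}."
        )
    if frozenset(unique) in HIGH_RISK_COMBINATIONS:
        raise ValueError(f"Plugin combination {sorted(unique)} is blocked as high risk.")


validate_plugin_request(["web_search", "calendar"])            # allowed
# validate_plugin_request(["customer_database", "outbound_email"])  # raises ValueError
```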

Example Response 2

Yes, we implement a single-plugin-per-request architecture in our LLM system. Our technical design prevents multiple plugins from being called in the same context by isolating each plugin call as a separate transaction with its own sandboxed environment. This architectural decision was made specifically to address the security concerns around data leakage and privilege escalation. All plugin calls are authenticated separately, and data from one plugin execution is never automatically available to another plugin without explicit user interaction and re-authentication, creating natural security boundaries between different plugin functionalities.
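A single-plugin-per-request design could look roughly like the sketch below: each request names exactly one plugin, and its output is returned to the caller rather than being fed automatically into another plugin. All identifiers are illustrative, not part of any named system.

```python
# Minimal sketch of a single-plugin-per-request dispatcher.
# Identifiers are hypothetical.

from dataclasses import dataclass
from typing import Callable


@dataclass
class PluginResult:
    plugin: str
    output: str


def handle_request(plugin_name: str, prompt: str,
                   registry: dict[str, Callable[[str], str]]) -> PluginResult:
    """Run exactly one plugin per request; chaining requires a new, separately authorised request."""
    if plugin_name not in registry:
        raise KeyError(f"Unknown plugin '{plugin_name}'")
    # The plugin sees only the current prompt; no output from a previous
    # plugin call is carried over automatically.
    return PluginResult(plugin=plugin_name, output=registry[plugin_name](prompt))


registry = {"web_search": lambda q: f"search results for {q!r}"}
print(handle_request("web_search", "latest CVEs", registry))
```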

Example Response 3

No, our current LLM implementation does not limit the number of plugins that can be called within a single user request. Our system is designed to maximize flexibility and functionality for users, allowing them to chain multiple plugins together to accomplish complex tasks. We recognize this creates potential security risks around data leakage and privilege escalation, and we're actively developing controls to address these concerns. In the interim, we mitigate these risks through comprehensive plugin review processes before deployment, runtime monitoring of plugin behavior, and data loss prevention controls that scan outputs for sensitive information. We expect to implement plugin call limitations in our next major release, scheduled for Q3 of this year.
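The data-loss-prevention check mentioned in this response could, in its simplest form, scan plugin output for sensitive-looking patterns before it is returned or passed onward. The sketch below is a rough illustration; the regexes are examples, not an exhaustive DLP policy.

```python
# Rough sketch of scanning plugin output for sensitive patterns.
# The patterns are illustrative only.

import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}


def scan_output(text: str) -> list[str]:
    """Return the names of sensitive patterns found in plugin output."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


findings = scan_output("Contact jane@example.com, card 4111 1111 1111 1111")
if findings:
    print(f"Blocked: output contains {findings}")  # e.g. ['email', 'credit_card']
```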

Context

Tab: AI
Category: AI Large Language Model (LLM)
