Do you limit multiple LLM model plugins being called as part of a single input?
Explanation
Guidance
Looking for a limit on the number of plugins that can be called per request, to help reduce data leakage and privilege escalation.
Example Responses
Example Response 1
Yes, our LLM implementation strictly limits plugin usage to a maximum of two plugins per user request. This limitation is enforced at the API gateway level before requests reach our LLM service. Each plugin call is logged and monitored through our security information and event management (SIEM) system. Additionally, we've implemented a plugin authorization framework that evaluates the risk of specific plugin combinations and blocks high-risk combinations entirely, even if they fall within the numerical limit. This approach significantly reduces the risk of data leakage between plugins and prevents potential privilege escalation chains.
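A gateway-level control like the one described above could be sketched roughly as follows. This is a simplified illustration, not a real implementation: the plugin names, the combination deny-list, and the `authorize_plugins` function are all hypothetical.

```python
# Hypothetical sketch of gateway-level enforcement: a per-request plugin
# count limit plus a deny-list of high-risk plugin combinations.
MAX_PLUGINS_PER_REQUEST = 2

# Illustrative combinations considered high-risk even when within the limit.
HIGH_RISK_COMBINATIONS = {
    frozenset({"web_browser", "code_executor"}),
    frozenset({"email_sender", "file_reader"}),
}

def authorize_plugins(requested_plugins):
    """Return (allowed, reason); evaluated before the request reaches the LLM."""
    plugins = frozenset(requested_plugins)
    if len(plugins) > MAX_PLUGINS_PER_REQUEST:
        return False, f"plugin count {len(plugins)} exceeds limit of {MAX_PLUGINS_PER_REQUEST}"
    for combo in HIGH_RISK_COMBINATIONS:
        if combo <= plugins:  # all plugins in a risky combination were requested
            return False, f"high-risk combination blocked: {sorted(combo)}"
    return True, "ok"
```

In this sketch, the combination check runs even for requests within the numerical limit, matching the response's point that risky pairings are blocked outright.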
Example Response 2
Yes, we implement a single-plugin-per-request architecture in our LLM system. Our technical design prevents multiple plugins from being called in the same context by isolating each plugin call as a separate transaction with its own sandboxed environment. This architectural decision was made specifically to address the security concerns around data leakage and privilege escalation. All plugin calls are authenticated separately, and data from one plugin execution is never automatically available to another plugin without explicit user interaction and re-authentication, creating natural security boundaries between different plugin functionalities.
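The isolation model described above could be sketched as follows. This is a minimal illustration under assumed names (`PluginTransaction`, `call_plugin`, `reuse_result` are all hypothetical), showing one plugin call per isolated transaction and an explicit re-authorization gate before any result crosses a boundary.

```python
import uuid

class PluginTransaction:
    """One isolated plugin call: fresh context, no shared state."""
    def __init__(self, plugin_name, user_id):
        self.id = uuid.uuid4().hex   # unique transaction context per call
        self.plugin_name = plugin_name
        self.user_id = user_id
        self.result = None

def call_plugin(plugin_name, user_id, payload, run):
    """Execute exactly one plugin inside its own transaction."""
    txn = PluginTransaction(plugin_name, user_id)
    txn.result = run(payload)        # sandboxed execution would happen here
    return txn                       # caller receives the result; nothing is shared

def reuse_result(txn, user_reauthorized):
    """Plugin output crosses into another context only with explicit re-auth."""
    if not user_reauthorized:
        raise PermissionError("re-authentication required to share plugin output")
    return txn.result
```

The design choice here mirrors the response: data flow between plugins is an explicit, user-mediated step rather than an implicit side effect of chaining.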
Example Response 3
No, our current LLM implementation does not limit the number of plugins that can be called within a single user request. Our system is designed to maximize flexibility and functionality for users, allowing them to chain multiple plugins together to accomplish complex tasks. We recognize this creates potential security risks around data leakage and privilege escalation, and we're actively developing controls to address these concerns. In the interim, we mitigate these risks through comprehensive plugin review processes before deployment, runtime monitoring of plugin behavior, and data loss prevention controls that scan outputs for sensitive information. We expect to implement plugin call limitations in our next major release, scheduled for Q3 of this year.
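One of the interim mitigations mentioned above, scanning outputs for sensitive information, could be sketched in simplified form as below. The patterns and the `scan_output` function are illustrative assumptions, not a complete DLP policy.

```python
import re

# Illustrative patterns only; a real DLP control would use a vetted ruleset.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scan_output(text):
    """Return the names of sensitive-data patterns found in an output."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
```

A scan like this compensates only partially for unlimited plugin chaining, which is consistent with the response treating it as a stopgap until call limits ship.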
Context
- Tab
- AI
- Category
- AI Large Language Model (LLM)

