AIQU-02

Does your solution leverage a large language model (LLM) or do you plan to do so in the next 12 months?

Explanation

This question asks whether your product or service uses a large language model (LLM), or whether you plan to incorporate one within the next year. LLMs are AI systems trained on vast amounts of text data that can generate human-like text, answer questions, translate languages, and perform other language-related tasks. Examples include GPT-4, LLaMA, Claude, and similar models.

This question serves as a trigger for additional LLM-specific security questions in the HECVAT assessment. If you answer 'yes', you'll need to address further questions about how you're managing the specific risks associated with LLMs.

Why this matters for security assessment:

1. LLMs introduce unique security and privacy risks, including potential data leakage, generation of harmful content, and vulnerability to prompt injection attacks
2. LLMs may process sensitive institutional data in ways that could violate privacy regulations
3. The use of LLMs may involve third-party services that need additional security scrutiny
4. LLMs can produce incorrect or misleading information (hallucinations) that could impact system reliability

When answering, be straightforward about your current and planned use of LLMs. If you use them, be prepared to explain your risk mitigation strategies in subsequent questions. If you don't use them and have no plans to do so, simply state that. If you're uncertain about future plans, it's better to answer 'yes' so that all potential risks are addressed.

Guidance

Trigger for LLM Questions

Example Responses

Example Response 1

Yes, our solution currently leverages OpenAI's GPT-4 model to power our customer support chatbot and to generate summaries of user-submitted support tickets. We also use a fine-tuned version of Meta's LLaMA 2 model for internal document classification and routing. We have implemented various safeguards, including prompt engineering guardrails, human review processes, and data filtering, to mitigate the risks associated with these LLMs.

Example Response 2

No, our current solution does not use any large language models. However, we are planning to integrate an LLM-based feature into our product roadmap for Q3 of next year. Specifically, we're exploring the use of Azure OpenAI Service to provide content summarization and natural language search capabilities across our knowledge base. We are currently conducting security and privacy impact assessments for this planned implementation.

Example Response 3

No, our solution does not currently leverage any large language models, nor do we have plans to implement LLM technology in the next 12 months. Our product focuses on structured data processing and analytics using traditional algorithmic approaches. While we continuously evaluate emerging technologies, we have determined that LLMs do not currently align with our product strategy or customer needs. If our roadmap changes to include LLM integration, we will update our security documentation and conduct appropriate risk assessments.

Context

Tab
AI
Category
AI Qualifying Questions

ResponseHub is the product I wish I had when I was a CTO

Previously I was co-founder and CTO of Progression, a VC backed HR-tech startup used by some of the biggest names in tech.

As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate and arrived like London buses - 3 at a time!

I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

Signature
Neil Cameron
Founder, ResponseHub