HECVAT Category

AI Policy

The AI Policy category covers controls and questions about how a vendor governs the artificial intelligence in its products: the policies for identifying and managing AI risks, the ability to disable AI features during an incident, and alignment with frameworks such as the NIST AI RMF. It reflects the expectations higher-education institutions typically place on vendors and gives reviewers a consistent structure for assessing AI risk posture and operational maturity during security reviews.

Assessment Questions

AIPL-01

Are your AI developer's policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks conspicuously posted, unambiguous, and implemented effectively?

This question is asking whether your organization has clear, accessible, and effective policies, processes, procedures, and practices for managing AI risks across your organization. It specifically focuses on three key aspects: mapping (identifying), measuring (assessing), and managing (mitigating) AI risks.

AIPL-02

Have you identified and measured AI risks?

This question is asking whether your organization has formally identified potential risks associated with your AI systems and established methods to measure or quantify these risks.
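
There is no mandated format for this evidence, but a risk register with a simple quantitative score is one common approach. The sketch below is illustrative only; the `AIRiskEntry` fields and the 1-5 likelihood/impact scale are hypothetical, not prescribed by HECVAT or NIST:

```typescript
// Hypothetical risk-register entry for an identified AI risk.
// Field names and the 1-5 scoring scale are illustrative.
interface AIRiskEntry {
  id: string;             // e.g. "RISK-001"
  description: string;    // the identified risk
  likelihood: 1 | 2 | 3 | 4 | 5;
  impact: 1 | 2 | 3 | 4 | 5;
  mitigation: string;     // planned or implemented control
  owner: string;          // accountable person or team
}

// A simple quantitative measure: risk score = likelihood x impact.
const score = (r: AIRiskEntry): number => r.likelihood * r.impact;

const promptInjection: AIRiskEntry = {
  id: "RISK-001",
  description: "Prompt injection exposes another tenant's data",
  likelihood: 3,
  impact: 5,
  mitigation: "Input filtering and per-tenant context isolation",
  owner: "AI Platform Team",
};

console.log(score(promptInjection)); // 15 -> prioritise for treatment
```

Scoring every risk the same way makes it easy to show an assessor both that risks were identified and how they were measured.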

AIPL-03

In the event of an incident, can your solution's AI features be disabled in a timely manner?

This question is asking whether your AI-powered solution can be quickly disabled if a security incident occurs. In the context of security, an 'incident' could be a data breach, a model behaving unexpectedly (producing harmful outputs), or the discovery of a vulnerability in the AI system.
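
A common pattern for answering "yes" here is a kill switch: a runtime feature flag that gates every AI code path, so the features can be turned off without a redeploy. Below is a minimal sketch assuming a hypothetical in-memory flag store; a real deployment would use whatever configuration service you already run:

```typescript
// Minimal kill-switch sketch. `flagStore` stands in for whatever
// runtime configuration service you actually use; names are hypothetical.
const flagStore = new Map<string, boolean>([["ai_features", true]]);

function aiEnabled(): boolean {
  // Default to disabled if the flag is missing (fail closed).
  return flagStore.get("ai_features") ?? false;
}

function handleRequest(input: string): string {
  if (!aiEnabled()) {
    // Degrade gracefully instead of erroring when AI is off.
    return "AI assistance is temporarily unavailable.";
  }
  return callModel(input); // the AI code path, gated by the flag
}

function callModel(input: string): string {
  return `model response for: ${input}`; // placeholder for a real call
}

// Incident response: flip one flag, no redeploy required.
flagStore.set("ai_features", false);
console.log(handleRequest("summarise this document"));
```

Failing closed when the flag is missing, and degrading gracefully rather than erroring, are the two design choices worth calling out in your answer.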

AIPL-04

If disabled because of an incident, can your solution's AI features be re-enabled in a timely manner?

This question is asking about your organization's ability to restore AI functionality after it has been disabled due to a security incident. In the context of security compliance, this relates to business continuity and incident response capabilities.
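
If a feature flag like the one sketched above is how you disable, re-enabling is the same flag flipped back; the "timely manner" usually hinges on having a documented verification step first. A hedged sketch, with illustrative check names:

```typescript
// Illustrative re-enable gate: the flag only flips back once
// post-incident checks pass. The check names are hypothetical.
interface PostIncidentChecks {
  rootCauseRemediated: boolean;
  regressionTestsPassed: boolean;
  signOffRecorded: boolean;
}

function reenableAI(
  flagStore: Map<string, boolean>,
  checks: PostIncidentChecks
): boolean {
  const ready =
    checks.rootCauseRemediated &&
    checks.regressionTestsPassed &&
    checks.signOffRecorded;
  if (ready) {
    flagStore.set("ai_features", true);
  }
  return ready;
}
```

Gating the flip on recorded checks gives you an audit trail showing that restoration followed your incident response process rather than an ad hoc decision.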

AIPL-05

Do you have documented technical and procedural processes to address potential negative impacts of AI as described by the AI Risk Management Framework (RMF)?

This question is asking whether your organization has formal, documented processes to identify, assess, and mitigate potential negative impacts or harms that could arise from your AI systems. The NIST AI Risk Management Framework (RMF) is a voluntary framework published by the National Institute of Standards and Technology that provides guidance on managing risks in the design, development, use, and evaluation of AI systems. It organizes that guidance into four core functions: Govern, Map, Measure, and Manage, the same map/measure/manage language that appears in AIPL-01.

ResponseHub is the product I wish I had when I was a CTO

Previously I was co-founder and CTO of Progression, a VC-backed HR-tech startup used by some of the biggest names in tech.

As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate, and arrived like London buses: three at a time!

I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

Neil Cameron
Founder, ResponseHub