HECVAT Category
AI Machine Learning
The AI Machine Learning category of the HECVAT (Higher Education Community Vendor Assessment Toolkit) covers controls and questions specific to machine learning systems. It outlines the safeguards that institutions typically expect vendors to have in place around training data, model defenses, and transparency, and it gives reviewers a consistent structure for evaluating a vendor's risk posture and operational maturity during security reviews.
Assessment Questions
Do you separate ML training data from your ML solution data?
This question is asking whether your organization maintains a separation between the data used to train your machine learning (ML) models and the data that your ML solution processes in production.
Do you authenticate and verify your ML model's feedback?
This question is asking whether your organization has mechanisms in place to authenticate and verify the feedback that is used to train, update, or improve your machine learning (ML) models.
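One straightforward way to authenticate feedback before it influences a model is to require a cryptographic signature on each feedback payload. This is a minimal sketch of that idea using HMAC-SHA256; the key name and payload shape are illustrative, and in practice the signing key would come from a secrets manager rather than being hard-coded:

```python
import hashlib
import hmac
import json

# Illustrative key only; load this from a secrets manager in a real system.
FEEDBACK_SIGNING_KEY = b"example-secret-key"


def sign_feedback(payload: dict) -> str:
    """Produce an HMAC-SHA256 signature over a canonical JSON encoding."""
    message = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hmac.new(FEEDBACK_SIGNING_KEY, message, hashlib.sha256).hexdigest()


def verify_feedback(payload: dict, signature: str) -> bool:
    """Reject feedback whose signature does not match before it reaches training."""
    expected = sign_feedback(payload)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature)
```

Feedback that fails verification would be dropped or quarantined rather than queued for retraining, which is the kind of control this question is probing for.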
Is your ML training data vetted, validated, and verified before training the solution's AI model?
This question is asking whether your organization has a formal process to ensure the quality, accuracy, and integrity of the data used to train machine learning models before that data is used in training.
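A vetting pipeline usually combines schema checks, label validation, and de-duplication before any row reaches training. The sketch below is a simplified, assumption-laden example of such a gate for text-classification rows; the field names (`text`, `label`) are hypothetical:

```python
def validate_training_rows(rows, allowed_labels):
    """Split candidate training rows into accepted and rejected sets.

    Each row is expected to be a dict with 'text' and 'label' keys.
    Rejected rows are returned with a reason so they can be audited.
    """
    seen_texts = set()
    valid, rejected = [], []
    for row in rows:
        text = (row.get("text") or "").strip()
        label = row.get("label")
        if not text:
            rejected.append((row, "empty text"))
        elif label not in allowed_labels:
            rejected.append((row, "unknown label"))
        elif text in seen_texts:
            rejected.append((row, "duplicate"))
        else:
            seen_texts.add(text)
            valid.append(row)
    return valid, rejected
```

Keeping the rejection reasons alongside the rejected rows gives you the audit trail that a reviewer asking this question will want to see.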
Is your ML training data monitored and audited?
This question is asking whether your organization has processes in place to monitor and audit the data used to train your machine learning (ML) models.
Have you limited access to your ML training data to only staff with an explicit business need?
This question is asking whether your organization restricts access to machine learning (ML) training data to only those employees who have a legitimate business reason to access it.
Have you implemented adversarial training or other model defense mechanisms to protect your ML-related features?
This question is asking whether your organization has implemented specific security measures to protect machine learning (ML) models against adversarial attacks.
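Adversarial training means training the model on deliberately perturbed inputs alongside clean ones so that small malicious perturbations are less effective. Below is a minimal NumPy sketch of this idea for logistic regression using the Fast Gradient Sign Method (FGSM); all function names and hyperparameters are illustrative, not a production defense:

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fgsm_perturb(x, y, w, b, eps):
    """Nudge each input in the direction that most increases the loss (FGSM)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w  # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)


def adversarial_train(x, y, epochs=200, lr=0.1, eps=0.1, seed=0):
    """Gradient descent over both clean and FGSM-perturbed examples."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=x.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        x_adv = fgsm_perturb(x, y, w, b, eps)
        x_all = np.vstack([x, x_adv])
        y_all = np.concatenate([y, y])
        p = sigmoid(x_all @ w + b)
        w -= lr * (x_all.T @ (p - y_all)) / len(y_all)
        b -= lr * float(np.mean(p - y_all))
    return w, b
```

A vendor answering "yes" to this question would typically also cite input validation, rate limiting, and model monitoring alongside training-time defenses like this one.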
Do you make your ML model transparent through documentation and log inputs and outputs?
This question is asking whether your organization provides transparency into how your machine learning (ML) models work by documenting them and logging their inputs and outputs.
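The logging half of this question can be satisfied by wrapping every model call so that inputs and outputs are recorded with a shared request identifier. A minimal sketch, assuming a generic `model_fn` callable (the logger name and record shape are illustrative):

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml_audit")


def logged_predict(model_fn, features: dict) -> dict:
    """Call the model and record both the input and the output for audit.

    The shared request_id lets an auditor pair each input record
    with the output it produced.
    """
    request_id = str(uuid.uuid4())
    logger.info(json.dumps(
        {"event": "input", "request_id": request_id, "features": features}))
    output = model_fn(features)
    logger.info(json.dumps(
        {"event": "output", "request_id": request_id, "output": output}))
    return {"request_id": request_id, "output": output}
```

The documentation half is organizational rather than code: model cards, data sheets, and architecture notes that describe what the logged model actually does.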
Do you watermark your ML training data?
This question is asking whether your organization applies watermarks to the data used to train your machine learning (ML) models.
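One simple form of data watermarking is to seed the training set with deterministic "canary" records derived from a secret: if those records later surface in another party's dataset or model behavior, they serve as evidence of provenance. This is only a sketch of that idea; real watermarking schemes are more sophisticated, and the record shape here is hypothetical:

```python
import hashlib


def canary_records(secret: str, n: int = 5) -> list:
    """Deterministically derive synthetic canary rows from a secret.

    The same secret always yields the same canaries, so the data owner
    can later regenerate them and check for their presence elsewhere.
    """
    records = []
    for i in range(n):
        digest = hashlib.sha256(f"{secret}:{i}".encode("utf-8")).hexdigest()
        records.append({"text": f"canary-{digest[:12]}", "label": "canary"})
    return records
```

Because the canaries are derived rather than stored, only the secret needs protecting; the records themselves can sit unremarked inside the training corpus.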
ResponseHub is the product I wish I had when I was a CTO
Previously I was co-founder and CTO of Progression, a VC-backed HR-tech startup used by some of the biggest names in tech.
As our sales grew, security questionnaires quickly became one of my biggest pain points. They were confusing, hard to delegate, and arrived like London buses: three at a time!
I'm building ResponseHub so that other teams don't have to go through this. Leave the security questionnaires to us so you can get back to closing deals, shipping product and building your team.

