Six questions. A governance framework your steering committee can own and publish today.
Select all that apply.
Why only business/enterprise plans? Free and individual plans (Claude Pro, ChatGPT Plus, etc.) allow the provider to use your data for model training by default. The plans below contractually exclude your data from training.
Red data must never enter external AI tools without explicit steering committee approval. Select all that apply.
HAIL Framework Reference
Human Integration (H)
AI Access (A)
Most organisations start at H1/A1 and graduate upward as they prove value.
Set the minimum approver required for each risk tier.
One step up (H1/A2 or H2/A1)
Standard use-case expansion: slightly more autonomy or system read access
Two steps up (H2/A2 or H1/A3)
AI reads systems and produces output, or has write access with human oversight
High risk (H3 at any A level, or H2/A3)
Autonomous AI operation or write access with minimal human oversight
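The tier boundaries above can be read as a simple lookup over the two HAIL axes. The sketch below is illustrative only: the function name, level encoding, and tier labels are assumptions made for this example, not part of any official HAIL implementation.

```python
def risk_tier(h: int, a: int) -> str:
    """Classify an (H, A) pairing into an approval tier.

    h: Human Integration level (1-3, higher = more AI autonomy)
    a: AI Access level (1-3, higher = more system access)
    Encoding and labels are assumptions for illustration.
    """
    if h == 3 or (h, a) == (2, 3):
        return "high risk"      # H3 at any A level, or H2/A3
    if (h, a) in {(2, 2), (1, 3)}:
        return "two steps up"   # H2/A2 or H1/A3
    if (h, a) in {(1, 2), (2, 1)}:
        return "one step up"    # H1/A2 or H2/A1
    return "baseline"           # H1/A1, the typical starting point
```

In practice each returned tier would map to the minimum approver you set in the questions above, with the baseline tier typically needing no extra approval.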
These appear as named obligations in your policy. Most organisations select all three.
The AI risk landscape changes constantly. A quarterly review cadence is strongly recommended.
Answer the questions on the left to build your policy preview.