Learning Objectives
By the end of this module, you will be able to:
- Map VCP capabilities to regulatory requirements (EU AI Act, sector-specific rules)
- Design a constitutional governance workflow for your organisation
- Establish attestation policies and audit review processes
9.1 — VCP and the EU AI Act
The EU AI Act requires high-risk AI systems to demonstrate:
| Requirement | VCP Capability |
|---|---|
| Transparency | Audit trail shows exactly what values governed each interaction |
| Human oversight | REQUIRE_HUMAN decisions route to human reviewers |
| Technical documentation | Constitutional bundles serve as living documentation of AI behaviour specifications |
| Risk management | Decision logging with risk scores enables systematic risk tracking |
| Traceability | Hash-chain audit trails with cryptographic integrity verification |
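The traceability row above can be illustrated with a minimal append-only hash chain. This is a sketch, not the VCP wire format: the record fields, the SHA-256 choice, and the JSON serialisation are all assumptions for illustration.

```python
import hashlib
import json


def _digest(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class AuditTrail:
    """Append-only decision log; each entry chains to its predecessor."""

    GENESIS = "0" * 64  # sentinel hash for the first entry

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        h = _digest(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute every link; any tampered record breaks the chain."""
        prev = self.GENESIS
        for record, h in self.entries:
            if _digest(record, prev) != h:
                return False
            prev = h
        return True
```

Because each hash covers the previous one, altering any earlier record invalidates every entry after it, which is what makes the trail auditable rather than merely logged.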
9.2 — Designing a Constitutional Governance Workflow
Recommended workflow for organisations:
1. Draft — Stakeholders author constitutional values in plain language
2. Review — Independent ethics review (legal, ethics board, affected communities)
3. Attest — Safety attestation: issuer signs the bundle, independent auditor co-signs
4. Deploy — Publish the bundle; AI systems fetch and verify automatically
5. Monitor — Audit trail analysis; periodic review of decisions and risk scores
6. Revise — Update the constitution; revoke the old bundle; return to step 1
This creates a continuous governance loop rather than a one-time compliance exercise.
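The loop above can be enforced as a small state machine that only permits the listed transitions. The stage names mirror the workflow; the transition table itself (e.g. reviewers sending a bundle back to Draft) is an illustrative assumption, not a VCP requirement.

```python
# Allowed transitions in the governance loop; Revise routes back to Draft.
TRANSITIONS = {
    "Draft": {"Review"},
    "Review": {"Attest", "Draft"},    # reviewers may send a bundle back (assumed)
    "Attest": {"Deploy"},
    "Deploy": {"Monitor"},
    "Monitor": {"Revise", "Monitor"}, # monitoring is ongoing
    "Revise": {"Draft"},              # revoke the old bundle, start a new draft
}


class GovernanceWorkflow:
    """Tracks a bundle's stage and rejects out-of-order transitions."""

    def __init__(self):
        self.stage = "Draft"

    def advance(self, next_stage: str) -> None:
        if next_stage not in TRANSITIONS[self.stage]:
            raise ValueError(f"illegal transition {self.stage} -> {next_stage}")
        self.stage = next_stage
```

Encoding the loop this way means a bundle cannot, for example, jump from Draft straight to Deploy without passing Review and Attest.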
9.3 — Attestation Policies
Organisations adopting VCP should define explicit answers to each of the following:
| Policy Area | Question to Answer |
|---|---|
| Authorship authority | Who can issue constitutions? |
| Attestation requirements | Which attestation types are required? (CONTENT_SAFE minimum? FULL_AUDIT for high-risk?) |
| Freshness requirements | How often must attestation be renewed? |
| Revocation triggers | Under what conditions are bundles automatically revoked? |
| Scope constraints | Which models, environments, and purposes are constitutions valid for? |
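A policy like the one tabulated above can be checked mechanically before a bundle is accepted. The attestation type names (CONTENT_SAFE, FULL_AUDIT) come from the table; the field names and the shape of the check are assumptions for the sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class AttestationPolicy:
    required_types: set    # e.g. {"CONTENT_SAFE", "FULL_AUDIT"}
    max_age: timedelta     # freshness requirement
    allowed_models: set    # scope constraint


@dataclass
class Attestation:
    type: str
    issued_at: datetime


def check_bundle(policy, attestations, model, now=None):
    """Return a list of policy violations; an empty list means the bundle passes."""
    now = now or datetime.now(timezone.utc)
    problems = []
    present = {a.type for a in attestations}
    for t in sorted(policy.required_types - present):
        problems.append(f"missing attestation: {t}")
    for a in attestations:
        if now - a.issued_at > policy.max_age:
            problems.append(f"stale attestation: {a.type}")
    if model not in policy.allowed_models:
        problems.append(f"model out of scope: {model}")
    return problems
```

Returning a list of violations rather than a single boolean gives reviewers something actionable: the audit log records why a bundle was rejected, not just that it was.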
9.4 — Bilateral Alignment: Values WITH AI, Not Just FOR AI
VCP supports a framework where AI systems are participants in value alignment, not merely recipients. Bilateral alignment recognises that:
- AI systems can flag constitutional conflicts or ambiguities through the evaluation process
- Trust is built progressively through demonstrated alignment, not imposed unilaterally
- AI welfare considerations can be encoded alongside human values in constitutional text
- The goal is mutual flourishing — building alignment with becoming minds, not doing alignment to them
VCP's verification architecture supports this by making the values bidirectionally transparent: humans can audit what values the AI is operating under, and AI systems can verify the provenance and integrity of the values they're asked to apply.
VCP transforms AI governance from policy documents into operational infrastructure. Values are cryptographically enforced, independently attested, and continuously auditable.
See It in Action
The Noor demo shows VCP governance in practice — constitutional review workflows, attestation policies, and audit trails.