Module 1: The Problem VCP Solves

Why AI systems need a standardised way to receive human values, and what VCP does about it.

Estimated time: 20 minutes

Learning Objectives

By the end of this module, you will be able to:

  • Articulate why AI systems need a standardised way to receive human values
  • Distinguish between hard-coded safety rules and portable constitutional values
  • Explain the "values portability problem" with concrete examples
  • Describe how VCP fits into the broader AI safety landscape

1.1 — The Values Gap in AI Today

Every major AI system has safety rules. But those rules are:

  • Proprietary — locked inside each provider, invisible to users
  • One-size-fits-all — the same guardrails for a children's tutor and a medical researcher
  • Non-portable — switch providers, lose your value configuration
  • Opaque — users can't see, verify, or influence what values shape their AI's behaviour

Consider this scenario: A counselling organisation uses AI assistant A, carefully configured to be trauma-informed, culturally sensitive, and non-directive. They switch to AI assistant B. All that configuration is gone. They start from scratch — if provider B even supports those values at all.

This is the values portability problem. Values are trapped inside individual AI systems, invisible and immovable.

1.2 — What If Values Were Portable?

VCP treats values like data, not code. A constitutional document — a set of principles, boundaries, and guidance — travels with the user or organisation, not the provider. Any VCP-compatible system can read it, verify it, and apply it.
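To make "values as data" concrete, a constitution could be serialised as a small structured document that travels between providers. The field names below (`principles`, `boundaries`, and so on) are illustrative assumptions for this sketch, not the actual VCP schema:

```python
import json

# Hypothetical constitution document -- field names are
# assumptions for illustration, not the official VCP schema.
constitution = {
    "id": "org.example.counselling-values",
    "version": "1.0",
    "principles": [
        "Be trauma-informed: avoid pressuring language",
        "Be culturally sensitive: do not assume shared norms",
        "Be non-directive: support decisions, don't prescribe them",
    ],
    "boundaries": ["Never give medical diagnoses"],
}

# Because values are data, the same document can be handed to
# any VCP-compatible provider and read back unchanged.
document = json.dumps(constitution, indent=2)
restored = json.loads(document)
assert restored == constitution
```

Treating the constitution as a plain document is what makes the counselling organisation's scenario above fixable: the configuration lives with the organisation, not with provider A or B.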

| Without VCP | With VCP |
| --- | --- |
| Values hard-coded per provider | Values travel with the user |
| Switch providers, start over | Switch providers, values follow |
| Can't verify what values are active | Cryptographic proof of active values |
| Same rules for everyone | Context-sensitive adaptation |
| No audit trail | Tamper-evident value history |
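"Cryptographic proof of active values" and a tamper-evident history can be sketched with a simple signature check. A real deployment would use asymmetric signatures so anyone can verify without the signing key; the HMAC below is a minimal stand-in, and all names are illustrative rather than part of VCP:

```python
import hashlib
import hmac
import json

def sign_constitution(doc: dict, key: bytes) -> str:
    # Canonicalise with sorted keys so identical values always
    # produce identical bytes, then sign with HMAC-SHA256.
    canonical = json.dumps(doc, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_constitution(doc: dict, key: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_constitution(doc, key), signature)

key = b"demo-shared-secret"          # illustrative only
doc = {"version": "1.0", "principles": ["respect patient autonomy"]}
sig = sign_constitution(doc, key)

assert verify_constitution(doc, key, sig)       # untouched: verifies
doc["principles"].append("injected rule")
assert not verify_constitution(doc, key, sig)   # tampered: fails
```

The point is not the particular primitive but the property: any change to the active values invalidates the proof, so users can check what their AI is actually operating under.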

1.3 — The Three Layers

VCP organises context into three layers, each answering a different question:

  1. Constitutional Layer — "What principles should guide this AI?" Organisational values, ethical frameworks, domain-specific rules.
  2. Situational Layer — "What's happening right now?" Time, place, social context, task type.
  3. Personal Layer — "Who is interacting, and what do they need?" Cognitive state, energy level, emotional tone — all privacy-filtered.

These layers compose. A healthcare AI might have a constitutional layer emphasising patient autonomy, a situational layer noting it's an emergency department at 3am, and a personal layer indicating a stressed, fatigued clinician. The AI's behaviour adapts across all three.
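The healthcare example can be sketched as three small documents composed into one context. VCP's actual merge semantics are not specified here; this is just a minimal illustration of layers refining, rather than replacing, one another:

```python
# Hypothetical layer documents -- keys are illustrative assumptions.
constitutional = {"principles": ["respect patient autonomy"]}
situational = {"location": "emergency department", "time": "03:00"}
personal = {"state": "stressed, fatigued"}  # privacy-filtered

def compose(*layers: dict) -> dict:
    """Merge layers in order; later layers add situational and
    personal detail on top of the constitutional baseline."""
    context = {}
    for layer in layers:
        context.update(layer)
    return context

context = compose(constitutional, situational, personal)
assert context["principles"] == ["respect patient autonomy"]
assert context["location"] == "emergency department"
```

Each layer answers its own question independently, which is why the same constitutional document can sit under many different situational and personal contexts.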

1.4 — VCP in the Ecosystem

Where VCP sits relative to other approaches:

  • System prompts: Static, provider-specific, unverified. VCP is dynamic, portable, cryptographically signed.
  • RLHF / Constitutional AI: Training-time alignment. VCP is inference-time alignment — complementary, not competing.
  • Content moderation APIs: Binary allow/block on outputs. VCP shapes how the AI reasons, not just what it says.
  • Model cards / datasheets: Documentation about a model. VCP is operational guidance to a model.

VCP doesn't replace any of these. It adds a missing layer: a standardised, verifiable, portable way to tell AI systems what matters to the people using them.

See It in Action

The Gentian demo shows VCP's portability in practice — the same constitution applied across different AI providers, producing consistent behaviour.