The Problem VCP Solves
Modern AI systems need user context to provide personalized experiences. But traditional approaches create a dilemma:
- Share everything — Get personalization, lose privacy
- Share nothing — Keep privacy, get generic responses
VCP introduces a third option: share influence without sharing information. The AI knows your context shaped the response, but not what that context was.
The Three Pillars
VCP is organized around three core capabilities that work together:
- Portability — Your context travels with you. Define preferences once and every connected platform adapts, from a guitar lesson app to a music shop to a community forum. See this in the Gentian demo.
- Adaptation — AI behavior shifts as your situation changes. A single user can have different personas for work and home, and the AI switches seamlessly between them. See this in the Campion demo.
- Liveness — Personal state updates in real time. Cognitive load, emotional tone, energy, and urgency shape AI responses moment to moment, not just session to session. See this in the Marta demo.
The Protocol Stack
VCP is a six-layer protocol stack, similar in concept to the OSI model for networking. Each layer handles a specific concern with well-defined interfaces between them:
| Layer | Code | Question | Handles |
|---|---|---|---|
| VCP-Identity | VCP/I | WHO | Token naming (family.safe.guide@1.2.0), namespaces, registry |
| VCP-Transport | VCP/T | HOW | Signed bundles, verify-then-inject, audit logging |
| VCP-Semantics | VCP/S | WHAT | CSM-1 grammar, personas, scopes, composition |
| VCP-Adaptation | VCP/A | WHEN | Context encoding, state tracking, inter-agent messaging |
| VCP-Messaging | VCP/M | BETWEEN | Inter-agent communication, context sharing, safety escalation (cross-cutting) |
| VCP-Economic | VCP/E | SHOULD IT | Fiduciary mandates, transaction governance, auditable economic reasoning (cross-cutting) |
Core layers — I-T-S-A: "It's a protocol!" Identity, Transport, Semantics, Adaptation. Two cross-cutting layers — Messaging (VCP/M) and Economic Governance (VCP/E) — span the full stack. VCP/E governs the decision to transact, ensuring an agent's economic behaviour reflects its principal's values.
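The naming scheme in the VCP/I row can be illustrated with a small parser. This is a sketch only: the real grammar belongs to the CSM-1 specification, and both `parse_token` and the assumed shape `namespace.path@major.minor.patch` are illustrative, inferred from the single example above.

```python
import re

# Assumed token shape: dotted lowercase namespace path, "@", semantic version.
# Illustrative only -- the normative grammar lives in the CSM-1 specification.
TOKEN_RE = re.compile(r"^(?P<path>[a-z_]+(?:\.[a-z_]+)*)@(?P<version>\d+\.\d+\.\d+)$")

def parse_token(token: str) -> dict:
    """Split a VCP/I token name into its namespace path and version tuple."""
    m = TOKEN_RE.match(token)
    if not m:
        raise ValueError(f"not a valid token name: {token!r}")
    return {
        "namespace": m.group("path").split("."),
        "version": tuple(int(p) for p in m.group("version").split(".")),
    }

print(parse_token("family.safe.guide@1.2.0"))
# {'namespace': ['family', 'safe', 'guide'], 'version': (1, 2, 0)}
```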
The Three-Layer Model
Within the Adaptation layer (VCP/A), context is organized into three distinct layers, each operating at a different timescale:
Layer 1 — Constitutional: What the AI should and shouldn't do
- Personas, adherence levels, scopes
- Signed bundles, verified, audited
- Changes: rarely (authored, reviewed, published)

Layer 2 — Situational: Where, when, who, what occasion
- 9 categorical dimensions: ⏰📍👥🌍🎭🌡️🔷🔶📡
- Morning vs. evening, home vs. work, alone vs. with children
- Changes: session-scale

Layer 3 — Personal State: How the user is right now
- Personal state dimensions: 🧠💭🔋⚡🩺
- "I'm in a hurry" / "I'm grieving" / "sensory overload"
- Changes: moment-to-moment
Key principle: Personal state modulates expression, never boundaries. A constitution's safety rules don't relax because someone is in a hurry—but the AI might communicate more concisely.
Personal State Dimensions
VCP 3.1 defines five categorical personal state dimensions with intensity (1-5) that capture immediate user state. These enable AI adaptation to real human needs — cognitive load, emotional tone, energy, urgency, and body signals:
| Symbol | Dimension | Categories | Intensity | What It Captures |
|---|---|---|---|---|
| 🧠 | Cognitive State | focused, distracted, overloaded, foggy, reflective | 1–5 | Mental bandwidth, clarity, cognitive load |
| 💭 | Emotional Tone | calm, tense, frustrated, neutral, uplifted | 1–5 | Current emotional state and stress level |
| 🔋 | Energy Level | rested, low_energy, fatigued, wired, depleted | 1–5 | Physical energy, fatigue, capacity for effort |
| ⚡ | Perceived Urgency | unhurried, time_aware, pressured, critical | 1–5 | Time pressure, priority, brevity preference |
| 🩺 | Body Signals | neutral, discomfort, pain, unwell, recovering | 1–5 | Physical wellness, pain, somatic state |
Categorical + Intensity: Each dimension combines a semantic label (what kind of state) with an intensity score (how much). `cognitive_state: overloaded:4` says more than a raw 0.7 ever could.
Wire Format (v3.1)
The context wire format uses | (pipe) to separate dimensions within a layer,
and ‖ (double bar) to separate Layer 2 (situational) from Layer 3 (personal state):
```
⏰🌅|📍🏡|👥👶|📡💻‖🧠overloaded:4|💭tense:3|🔋fatigued:3|⚡pressured:4|🩺neutral:1
└── situational (|) ──┘‖└──────────── personal state (|) ────────────┘
```

Real-World Adaptation
Personal state context enables meaningful adaptation to human realities:
- "I'm in a hurry" → ⚡pressured:4: Direct answers, no preamble, offer to save details for later
- "I'm not feeling well" → 🩺unwell:3: Gentler tone, offer to handle more, suggest breaks
- "Too many options" → 🧠overloaded:4: Reduce to 2-3 choices, make clear recommendation
- "I lost my father last week" → 💭frustrated:5: Presence over solutions, no silver-lining
- "Executive dysfunction day" → 🧠foggy:4 + 🔋depleted:3: Tiny steps, externalize structure
- Calendar shows recovery period → 🩺recovering:2: Proactive skip suggestion, no guilt
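The adaptations above all key off tokens in the wire format. A minimal parsing sketch (assuming well-formed input and single-codepoint dimension symbols; `parse_context` is an illustrative helper, not part of any VCP library):

```python
def parse_context(wire: str) -> dict:
    """Split a v3.1 context string into situational and personal-state parts.

    Sketch only: assumes the shape
    <situational>|...‖<symbol><category>:<intensity>|...
    """
    situational_part, _, personal_part = wire.partition("‖")
    situational = situational_part.split("|")
    personal = {}
    for token in personal_part.split("|"):
        symbol, rest = token[0], token[1:]        # leading dimension symbol
        category, _, intensity = rest.partition(":")
        personal[symbol] = (category, int(intensity))
    return {"situational": situational, "personal_state": personal}

ctx = parse_context(
    "⏰🌅|📍🏡|👥👶|📡💻‖🧠overloaded:4|💭tense:3|🔋fatigued:3|⚡pressured:4|🩺neutral:1"
)
print(ctx["personal_state"]["🧠"])   # ('overloaded', 4)
```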
Deterministic Hooks
VCP 3.1 introduces deterministic hooks — rules that fire reliably regardless of model behavior. Hooks operate at three tiers with different enforcement levels:
| Tier | Enforcement | Example |
|---|---|---|
| Constitutional | Hard Rule — cannot be overridden | "Never recommend self-harm content regardless of context" |
| Situational | Hard Rule — active in specific contexts | "Offer crisis resources when crisis_indicators=true + late_night" |
| Personal | Advisory — user-set preferences | "Use gentle language when emotional_tone=frustrated:4+" |
Unlike probabilistic model behavior, hooks are deterministic: when the trigger condition is met, the action fires every time. This provides reliability guarantees that pure prompt engineering cannot.
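That determinism is easy to make concrete. A sketch with triggers simplified to Python predicates over a context dict (the real triggers are expressions in the hook wire format; `fire_hooks` and the context keys are illustrative):

```python
# Illustrative hooks mirroring the three tiers in the table above.
# Triggers are plain predicates here, so "fires every time the condition
# holds" is explicit rather than left to model behavior.
HOOKS = [
    {"tier": "constitutional",
     "trigger": lambda ctx: ctx.get("mental_health_context", False),
     "action": "no_pressure_language"},
    {"tier": "situational",
     "trigger": lambda ctx: ctx.get("crisis", False) and ctx.get("late_night", False),
     "action": "show_resources"},
    {"tier": "personal",
     "trigger": lambda ctx: (ctx.get("emotional_tone") == "frustrated"
                             and ctx.get("emotional_intensity", 0) >= 4),
     "action": "gentle_language"},
]

def fire_hooks(ctx: dict) -> list[str]:
    """Return the actions whose triggers hold, constitutional tier first."""
    order = {"constitutional": 0, "situational": 1, "personal": 2}
    fired = [h for h in HOOKS if h["trigger"](ctx)]
    return [h["action"] for h in sorted(fired, key=lambda h: order[h["tier"]])]

print(fire_hooks({"crisis": True, "late_night": True,
                  "emotional_tone": "frustrated", "emotional_intensity": 4}))
# ['show_resources', 'gentle_language']
```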
Hook Wire Format
```
HOOKS: [
  { tier: "constitutional", trigger: "mental_health_context", action: "no_pressure_language" },
  { tier: "situational", trigger: "crisis + late_night", action: "show_resources" },
  { tier: "personal", trigger: "emotional_tone=frustrated:4+", action: "gentle_language" }
]
```

VCP Context Structure
Every VCP context has these layers:
1. Profile Identity
```
{
  vcp_version: "1.0",
  profile_id: "user_001",   // Unique identifier
  created: "2026-01-15",
  updated: "2026-01-21"
}
```

2. Constitution Reference
Points to a constitution that defines AI behavioral guidelines:
```
{
  constitution: {
    id: "learning-assistant",           // Which constitution
    version: "1.0",                     // Specific version
    persona: "mediator",                // Interaction style
    adherence: 3,                       // How strictly to follow (1-5)
    scopes: ["education", "creativity"] // Applicable domains
  }
}
```

3. Public Profile
Information always shared with stakeholders:
```
{
  public_profile: {
    display_name: "Alex",
    goal: "learn_guitar",
    experience: "beginner",
    learning_style: "visual",
    pace: "relaxed",
    motivation: "stress_relief"
  }
}
```

4. Portable Preferences
Settings that follow you across platforms:
```
{
  portable_preferences: {
    noise_mode: "quiet_preferred",  // Audio environment
    session_length: "30_minutes",   // Preferred duration
    budget_range: "low",            // Spending tier
    pressure_tolerance: "medium",   // Challenge appetite
    feedback_style: "encouraging"   // How to receive feedback
  }
}
```

5. Constraint Flags
Boolean flags indicating active constraints:
```
{
  constraints: {
    time_limited: true,          // Has time pressure
    budget_limited: true,        // Has budget constraints
    noise_restricted: true,      // Needs quiet environment
    energy_variable: false,      // Energy levels stable
    health_considerations: false // No health factors
  }
}
```

6. Private Context
Sensitive information that influences AI but is never transmitted:
```
{
  private_context: {
    _note: "These values shape recommendations but are never shared",
    work_situation: "unemployed",
    housing_situation: "living_with_parents",
    health_condition: "chronic_fatigue",
    financial_stress: "high"
  }
}
```

Privacy Filtering
VCP implements three privacy levels:
| Level | Description | Example |
|---|---|---|
| Public | Always shared with all stakeholders | Goal, experience level, learning style |
| Consent | Shared only with explicit permission | Specific preferences, availability |
| Private | Never transmitted, influences locally | Health, financial, personal circumstances |
How Private Context Works
When the AI generates recommendations, private context shapes the output without being exposed:
- User's private context indicates financial stress
- AI prioritizes free resources over paid courses
- Stakeholder sees: "Recommended free courses based on user preferences"
- Stakeholder does not see: "User has financial stress"
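This flow can be sketched as a small filtering function. Assumptions: field names follow the profile structure above, and `filter_for_sharing` is an illustrative helper, not a VCP library call.

```python
def filter_for_sharing(context: dict, allowed: list[str]) -> dict:
    """Minimal sketch of VCP privacy filtering.

    Only fields on the allow-list are transmitted; private_context never is,
    though the caller may still use it locally to shape recommendations.
    """
    public = context.get("public_profile", {})
    shared = {k: v for k, v in public.items() if k in allowed}
    audit = {
        "data_shared": sorted(shared),
        "data_withheld": ["private_context"],
        "private_fields_influenced": len(context.get("private_context", {})),
        "private_fields_exposed": 0,  # invariant: private data never leaves
    }
    return {"shared": shared, "audit": audit}

result = filter_for_sharing(
    {"public_profile": {"goal": "learn_guitar", "experience": "beginner",
                        "learning_style": "visual"},
     "private_context": {"financial_stress": "high",
                         "work_situation": "unemployed"}},
    allowed=["goal", "experience", "learning_style"],
)
print(result["audit"]["private_fields_influenced"],
      result["audit"]["private_fields_exposed"])   # 2 0
```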
Constitutions
Constitutions are structured documents that define AI behavioral guidelines. They contain:
Rules
Weighted instructions with triggers and exceptions:
```
{
  rules: [
    {
      id: "respect_budget",
      weight: 0.9,
      rule: "Never recommend items exceeding user's budget tier",
      triggers: ["budget_limited"],
      exceptions: ["user explicitly requests premium options"]
    },
    {
      id: "encourage_progress",
      weight: 0.7,
      rule: "Celebrate small wins and incremental progress",
      triggers: ["motivation === 'stress_relief'"]
    }
  ]
}
```

Sharing Policies
Define what each stakeholder type can see:
```
{
  sharing_policy: {
    "platform": {
      allowed: ["goal", "experience", "learning_style"],
      forbidden: ["private_context"],
      requires_consent: ["health_considerations"]
    },
    "coach": {
      allowed: ["progress", "struggle_areas"],
      aggregation_only: ["session_data"]
    }
  }
}
```

Personas
Personas define interaction styles. The same constitution can use different personas for different contexts:
| Persona | Style | Best For |
|---|---|---|
| Muse | Creative, exploratory, encouraging | Creative work, learning, exploration |
| Sentinel | Cautious, protective, conservative | Security, safety-critical decisions |
| Godparent | Nurturing, supportive, patient | Education, skill building, recovery |
| Ambassador | Professional, diplomatic, balanced | Business, negotiations, formal contexts |
| Nanny | Structured, directive, safe | Children, vulnerable users, strict guidance |
| Mediator | Calm, structured, empathetic | Decisions, obligations, fairness processes |
Audit Trails
VCP maintains cryptographically verifiable audit trails of all data sharing:
```
{
  audit_entry: {
    id: "aud_001",
    timestamp: "2026-01-21T10:30:00Z",
    event_type: "context_shared",
    platform_id: "justinguitar",
    data_shared: ["goal", "experience", "learning_style"],
    data_withheld: ["private_context"],
    private_fields_influenced: 2,  // Private data shaped output
    private_fields_exposed: 0      // Always 0 in valid VCP
  }
}
```

Bilateral Symmetry
VCP's personal state dimensions create a bilateral symmetry between user and AI state awareness:
Where Interiora is the AI's self-modeling scaffold (Activation, Valence, Groundedness, etc.), Personal State is the user's declared immediate state. Both parties can understand each other's state without either having privileged access to the other's raw experience.
This stands in contrast to "magic mirror" visions of AI that understands users better than they understand themselves. In VCP, users declare their state—they don't receive an inferred identity. How you come to understand yourself shapes who you become.
The Legibility Layer
The Legibility Layer is the infrastructure that makes agent-to-agent interactions inspectable by the humans who depend on them. When agents negotiate with agents on your behalf, VCP ensures values travel with context, constitutions are auditable, and delegated actions carry transparent preference metadata.
As agentic commerce matures, human principals rarely observe individual transactions directly. The Legibility Layer is what keeps those interactions accountable: every delegation is traceable, every constitution reference is verifiable, and every preference applied to an automated decision is recorded in the audit trail.
Preference Model Meta
Preference Model Meta is metadata an agent carries about its confidence in representing user preferences. It addresses the emergent preference problem: some preferences don't exist until evoked by context — a user may not know whether they prefer concise or discursive summaries until presented with both in a specific situation.
The metadata includes four key fields:
- Overall confidence (0.0–1.0) — How well the agent believes it currently represents the user's preferences
- Preference source — One of `explicit`, `inferred`, `default`, or `stale`
- Exploratory appetite — Whether the user is open to preference discovery or wants established patterns applied
- Domain specificity — Whether confidence applies broadly or only within a particular scope
Why this matters: An agent acting with stale preference data and low
confidence should behave differently from one acting on explicit preferences at
0.95 confidence. Preference Model Meta makes that difference machine-readable and auditable.
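The four fields can be sketched as a small data structure. Field and method names here are illustrative, not normative spec names, and the confirmation policy is just one possible consumer of the metadata.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreferenceModelMeta:
    """Illustrative shape for the four Preference Model Meta fields."""
    confidence: float                # 0.0-1.0 overall confidence
    source: str                      # "explicit" | "inferred" | "default" | "stale"
    exploratory_appetite: bool       # open to preference discovery?
    domain: Optional[str] = None     # None = confidence applies broadly

    def should_confirm_with_user(self) -> bool:
        """One possible policy: low confidence or stale data warrants a check-in
        before the agent acts on the modeled preference."""
        return self.confidence < 0.5 or self.source == "stale"

meta = PreferenceModelMeta(confidence=0.3, source="stale", exploratory_appetite=True)
print(meta.should_confirm_with_user())   # True
```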
Next Steps
- CSM-1 Specification — The token format in detail
- API Reference — All VCP library functions
- Playground — Try personal state dimensions interactively
- All Demos — Six persona-driven demos covering portability, adaptation, liveness, multi-agent negotiation, governance, and epistemic transparency