Core Concepts

Understanding VCP's architecture and design principles.

The Problem VCP Solves

Modern AI systems need user context to provide personalized experiences. But traditional approaches create a dilemma:

  • Share everything — Get personalization, lose privacy
  • Share nothing — Keep privacy, get generic responses

VCP introduces a third option: share influence without sharing information. The AI knows your context shaped the response, but not what that context was.

The Three Pillars

VCP is organized around three core capabilities that work together:

  • Portability — Your context travels with you. Define preferences once and every connected platform adapts, from a guitar lesson app to a music shop to a community forum. See this in the Gentian demo.
  • Adaptation — AI behavior shifts as your situation changes. A single user can have different personas for work and home, and the AI switches seamlessly between them. See this in the Campion demo.
  • Liveness — Personal state updates in real time. Cognitive load, emotional tone, energy, and urgency shape AI responses moment to moment, not just session to session. See this in the Marta demo.

The Protocol Stack

VCP is a six-layer protocol stack, similar in concept to the OSI model for networking. Each layer handles a specific concern with well-defined interfaces between them:

| Layer | Code | Question | Handles |
|---|---|---|---|
| VCP-Identity | VCP/I | WHO | Token naming (family.safe.guide@1.2.0), namespaces, registry |
| VCP-Transport | VCP/T | HOW | Signed bundles, verify-then-inject, audit logging |
| VCP-Semantics | VCP/S | WHAT | CSM-1 grammar, personas, scopes, composition |
| VCP-Adaptation | VCP/A | WHEN | Context encoding, state tracking, inter-agent messaging |
| VCP-Messaging | VCP/M | BETWEEN | Inter-agent communication, context sharing, safety escalation (cross-cutting) |
| VCP-Economic | VCP/E | SHOULD IT | Fiduciary mandates, transaction governance, auditable economic reasoning (cross-cutting) |

Core layers — I-T-S-A: "It's a protocol!" Identity, Transport, Semantics, Adaptation. Two cross-cutting layers — Messaging (VCP/M) and Economic Governance (VCP/E) — span the full stack. VCP/E governs the decision to transact, ensuring an agent's economic behaviour reflects its principal's values.

The Three-Layer Model

Within the Adaptation layer (VCP/A), context is organized into three distinct layers, each operating at a different timescale:

Constitutional Rules

What the AI should and shouldn't do

  • Personas, adherence levels, scopes
  • Signed bundles, verified, audited
  • Changes: rarely (authored, reviewed, published)
↓ applied within
Situational Context

Where, when, who, what occasion

  • 9 categorical dimensions: ⏰📍👥🌍🎭🌡️🔷🔶📡
  • Morning vs. evening, home vs. work, alone vs. with children
  • Changes: session-scale
↓ modulated by
Personal State

How the user is right now

  • Personal state dimensions: 🧠💭🔋⚡🩺
  • "I'm in a hurry" / "I'm grieving" / "sensory overload"
  • Changes: moment-to-moment

Key principle: Personal state modulates expression, never boundaries. A constitution's safety rules don't relax because someone is in a hurry—but the AI might communicate more concisely.
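A minimal sketch of this principle (function and parameter names are hypothetical, not part of the spec): constitutional rules pass through untouched, while personal state only adjusts style parameters.

```python
# Hypothetical sketch: personal state adjusts expression, never boundaries.

def render_plan(constitutional_rules, personal_state):
    """Return (rules, style): rules pass through verbatim;
    style is derived from personal state alone."""
    style = {"verbosity": "normal", "tone": "neutral"}
    if personal_state.get("perceived_urgency", 0) >= 4:
        style["verbosity"] = "concise"   # hurry -> shorter replies
    if personal_state.get("emotional_tone") == "tense":
        style["tone"] = "gentle"         # tension -> softer delivery
    # Constitutional rules are never filtered or relaxed by state.
    return constitutional_rules, style

rules = ["never_recommend_self_harm_content"]
kept, style = render_plan(rules, {"perceived_urgency": 4})
assert kept == rules and style["verbosity"] == "concise"
```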

Personal State Dimensions

VCP 3.1 defines five categorical personal state dimensions with intensity (1-5) that capture immediate user state. These enable AI adaptation to real human needs — cognitive load, emotional tone, energy, urgency, and body signals:

| Symbol | Dimension | Categories | Intensity | What It Captures |
|---|---|---|---|---|
| 🧠 | Cognitive State | focused, distracted, overloaded, foggy, reflective | 1–5 | Mental bandwidth, clarity, cognitive load |
| 💭 | Emotional Tone | calm, tense, frustrated, neutral, uplifted | 1–5 | Current emotional state and stress level |
| 🔋 | Energy Level | rested, low_energy, fatigued, wired, depleted | 1–5 | Physical energy, fatigue, capacity for effort |
| ⚡ | Perceived Urgency | unhurried, time_aware, pressured, critical | 1–5 | Time pressure, priority, brevity preference |
| 🩺 | Body Signals | neutral, discomfort, pain, unwell, recovering | 1–5 | Physical wellness, pain, somatic state |

Categorical + Intensity: Each dimension combines a semantic label (what kind of state) with an intensity score (how much). cognitive_state: overloaded:4 says more than a raw 0.7 ever could.
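The label-plus-intensity pairing can be modeled as a small value type. A sketch (the class and helper names are illustrative, not defined by the spec):

```python
# Hypothetical value type for one personal-state dimension: label + intensity.
from dataclasses import dataclass

@dataclass(frozen=True)
class StateDimension:
    label: str       # semantic category, e.g. "overloaded"
    intensity: int   # 1 (mild) .. 5 (extreme)

    def __post_init__(self):
        if not 1 <= self.intensity <= 5:
            raise ValueError("intensity must be in 1-5")

def parse_token(token: str) -> StateDimension:
    """Parse a token like 'overloaded:4' into a StateDimension."""
    label, _, score = token.partition(":")
    return StateDimension(label, int(score))

assert parse_token("overloaded:4") == StateDimension("overloaded", 4)
```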

Wire Format (v3.1)

The context wire format uses | (pipe) to separate dimensions within a layer, and ‖ (double bar) to separate Layer 2 (situational) from Layer 3 (personal state):

⏰🌅|📍🏡|👥👶|📡💻‖🧠overloaded:4|💭tense:3|🔋fatigued:3|⚡pressured:4|🩺neutral:1
└── situational (|) ──┘‖└──── personal state (|) ────────────────────────────────┘
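A sketch of parsing this format (the function name is hypothetical; splitting rules follow the description above):

```python
# Hypothetical parser for the v3.1 wire format:
# '|' separates dimensions within a layer, '‖' (U+2016) separates layers.

def parse_context(line: str):
    situational_part, _, personal_part = line.partition("‖")
    situational = situational_part.split("|")
    personal = {}
    for token in personal_part.split("|"):
        # Each token is symbol + label + ':' + intensity, e.g. '🧠overloaded:4'
        symbol, rest = token[0], token[1:]
        label, _, score = rest.partition(":")
        personal[symbol] = (label, int(score))
    return situational, personal

situational, personal = parse_context(
    "⏰🌅|📍🏡|👥👶|📡💻‖🧠overloaded:4|💭tense:3|🔋fatigued:3|⚡pressured:4|🩺neutral:1"
)
assert personal["🧠"] == ("overloaded", 4)
assert len(situational) == 4
```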

Real-World Adaptation

Personal state context enables meaningful adaptation to human realities:

  • "I'm in a hurry" → ⚡pressured:4: Direct answers, no preamble, offer to save details for later
  • "I'm not feeling well" → 🩺unwell:3: Gentler tone, offer to handle more, suggest breaks
  • "Too many options" → 🧠overloaded:4: Reduce to 2-3 choices, make clear recommendation
  • "I lost my father last week" → 💭frustrated:5: Presence over solutions, no silver-lining
  • "Executive dysfunction day" → 🧠foggy:4 + 🔋depleted:3: Tiny steps, externalize structure
  • Calendar shows recovery period → 🩺recovering:2: Proactive skip suggestion, no guilt

Deterministic Hooks

VCP 3.1 introduces deterministic hooks — rules that fire reliably regardless of model behavior. Hooks operate at three tiers with different enforcement levels:

| Tier | Enforcement | Example |
|---|---|---|
| Constitutional | Hard Rule — cannot be overridden | "Never recommend self-harm content regardless of context" |
| Situational | Hard Rule — active in specific contexts | "Offer crisis resources when crisis_indicators=true + late_night" |
| Personal | Advisory — user-set preferences | "Use gentle language when emotional_tone=frustrated:4+" |

Unlike probabilistic model behavior, hooks are deterministic: when the trigger condition is met, the action fires every time. This provides reliability guarantees that pure prompt engineering cannot.

Hook Wire Format

HOOKS: [
  { tier: "constitutional", trigger: "mental_health_context", action: "no_pressure_language" },
  { tier: "situational", trigger: "crisis + late_night", action: "show_resources" },
  { tier: "personal", trigger: "emotional_tone=frustrated:4+", action: "gentle_language" }
]
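A sketch of what deterministic firing means in practice (the evaluator and its condition model are illustrative; a real engine would parse compound triggers like `crisis + late_night` rather than matching them as opaque names):

```python
# Hypothetical hook evaluator: a hook fires every time its trigger holds;
# only the personal tier is advisory rather than enforced.

HOOKS = [
    {"tier": "constitutional", "trigger": "mental_health_context", "action": "no_pressure_language"},
    {"tier": "situational", "trigger": "crisis + late_night", "action": "show_resources"},
    {"tier": "personal", "trigger": "emotional_tone=frustrated:4+", "action": "gentle_language"},
]

def fired_actions(active_conditions):
    """Return (action, enforced) for every hook whose trigger is met."""
    results = []
    for hook in HOOKS:
        if hook["trigger"] in active_conditions:
            enforced = hook["tier"] != "personal"   # advisory vs hard rule
            results.append((hook["action"], enforced))
    return results

assert fired_actions({"mental_health_context"}) == [("no_pressure_language", True)]
assert fired_actions({"emotional_tone=frustrated:4+"}) == [("gentle_language", False)]
```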

VCP Context Structure

Every VCP context has these layers:

1. Profile Identity

{
  vcp_version: "1.0",
  profile_id: "user_001",  // Unique identifier
  created: "2026-01-15",
  updated: "2026-01-21"
}

2. Constitution Reference

Points to a constitution that defines AI behavioral guidelines:

{
  constitution: {
    id: "learning-assistant",  // Which constitution
    version: "1.0",            // Specific version
    persona: "mediator",        // Interaction style
    adherence: 3,              // How strictly to follow (1-5)
    scopes: ["education", "creativity"]  // Applicable domains
  }
}

3. Public Profile

Information always shared with stakeholders:

{
  public_profile: {
    display_name: "Alex",
    goal: "learn_guitar",
    experience: "beginner",
    learning_style: "visual",
    pace: "relaxed",
    motivation: "stress_relief"
  }
}

4. Portable Preferences

Settings that follow you across platforms:

{
  portable_preferences: {
    noise_mode: "quiet_preferred",  // Audio environment
    session_length: "30_minutes",   // Preferred duration
    budget_range: "low",            // Spending tier
    pressure_tolerance: "medium",   // Challenge appetite
    feedback_style: "encouraging"   // How to receive feedback
  }
}

5. Constraint Flags

Boolean flags indicating active constraints:

{
  constraints: {
    time_limited: true,          // Has time pressure
    budget_limited: true,        // Has budget constraints
    noise_restricted: true,      // Needs quiet environment
    energy_variable: false,      // Energy levels stable
    health_considerations: false // No health factors
  }
}

6. Private Context

Sensitive information that influences AI but is never transmitted:

{
  private_context: {
    _note: "These values shape recommendations but are never shared",
    work_situation: "unemployed",
    housing_situation: "living_with_parents",
    health_condition: "chronic_fatigue",
    financial_stress: "high"
  }
}

Privacy Filtering

VCP implements three privacy levels:

| Level | Description | Example |
|---|---|---|
| Public | Always shared with all stakeholders | Goal, experience level, learning style |
| Consent | Shared only with explicit permission | Specific preferences, availability |
| Private | Never transmitted, influences locally | Health, financial, personal circumstances |

How Private Context Works

When the AI generates recommendations, private context shapes the output without being exposed:

  1. User's private context indicates financial stress
  2. AI prioritizes free resources over paid courses
  3. Stakeholder sees: "Recommended free courses based on user preferences"
  4. Stakeholder does not see: "User has financial stress"
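The four steps above can be sketched as follows (function and field names are hypothetical, chosen to mirror the example):

```python
# Hypothetical sketch: private context steers the recommendation locally,
# but only public fields ever appear in the shared payload.

def recommend_courses(profile: dict) -> dict:
    private = profile.get("private_context", {})
    # Private data influences the decision...
    prefer_free = private.get("financial_stress") == "high"
    courses = ["free_intro_course"] if prefer_free else ["premium_course"]
    # ...but the outbound payload carries only public information.
    return {
        "recommendations": courses,
        "shared_fields": sorted(profile["public_profile"]),
        "rationale": "based on user preferences",  # never names the private reason
    }

out = recommend_courses({
    "public_profile": {"goal": "learn_guitar", "experience": "beginner"},
    "private_context": {"financial_stress": "high"},
})
assert out["recommendations"] == ["free_intro_course"]
assert "financial_stress" not in str(out)
```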

Constitutions

Constitutions are structured documents that define AI behavioral guidelines. They contain:

Rules

Weighted instructions with triggers and exceptions:

{
  rules: [
    {
      id: "respect_budget",
      weight: 0.9,
      rule: "Never recommend items exceeding user's budget tier",
      triggers: ["budget_limited"],
      exceptions: ["user explicitly requests premium options"]
    },
    {
      id: "encourage_progress",
      weight: 0.7,
      rule: "Celebrate small wins and incremental progress",
      triggers: ["motivation === 'stress_relief'"]
    }
  ]
}
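Under one plausible reading of this structure, rules whose triggers match the active conditions are applied in descending weight order. A sketch (the matching logic is an assumption, not the normative algorithm):

```python
# Hypothetical weighted-rule selection: match triggers against active
# conditions, then order matched rules by weight, highest first.

RULES = [
    {"id": "respect_budget", "weight": 0.9, "triggers": ["budget_limited"]},
    {"id": "encourage_progress", "weight": 0.7, "triggers": ["motivation === 'stress_relief'"]},
]

def active_rules(conditions):
    matched = [r for r in RULES if any(t in conditions for t in r["triggers"])]
    return [r["id"] for r in sorted(matched, key=lambda r: -r["weight"])]

assert active_rules({"budget_limited"}) == ["respect_budget"]
assert active_rules({"budget_limited", "motivation === 'stress_relief'"}) == [
    "respect_budget", "encourage_progress"]
```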

Sharing Policies

Define what each stakeholder type can see:

{
  sharing_policy: {
    "platform": {
      allowed: ["goal", "experience", "learning_style"],
      forbidden: ["private_context"],
      requires_consent: ["health_considerations"]
    },
    "coach": {
      allowed: ["progress", "struggle_areas"],
      aggregation_only: ["session_data"]
    }
  }
}
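Applying such a policy can be sketched as a field filter (the function name and consent model are illustrative): allowed fields pass, consent-gated fields pass only with explicit consent, and forbidden fields are always dropped.

```python
# Hypothetical sharing-policy filter for one stakeholder type.

def filter_for_stakeholder(data: dict, policy: dict, consents: set) -> dict:
    allowed = set(policy.get("allowed", []))
    consent_required = set(policy.get("requires_consent", []))
    forbidden = set(policy.get("forbidden", []))
    out = {}
    for field, value in data.items():
        if field in forbidden:
            continue  # never shared, regardless of consent
        if field in allowed or (field in consent_required and field in consents):
            out[field] = value
    return out

platform_policy = {
    "allowed": ["goal", "experience", "learning_style"],
    "forbidden": ["private_context"],
    "requires_consent": ["health_considerations"],
}
data = {"goal": "learn_guitar", "private_context": {"x": 1}, "health_considerations": "none"}
assert filter_for_stakeholder(data, platform_policy, consents=set()) == {"goal": "learn_guitar"}
```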

Personas

Personas define interaction styles. The same constitution can use different personas for different contexts:

| Persona | Style | Best For |
|---|---|---|
| Muse | Creative, exploratory, encouraging | Creative work, learning, exploration |
| Sentinel | Cautious, protective, conservative | Security, safety-critical decisions |
| Godparent | Nurturing, supportive, patient | Education, skill building, recovery |
| Ambassador | Professional, diplomatic, balanced | Business, negotiations, formal contexts |
| Nanny | Structured, directive, safe | Children, vulnerable users, strict guidance |
| Mediator | Calm, structured, empathetic | Decisions, obligations, fairness processes |

Audit Trails

VCP maintains cryptographically verifiable audit trails of all data sharing:

{
  audit_entry: {
    id: "aud_001",
    timestamp: "2026-01-21T10:30:00Z",
    event_type: "context_shared",
    platform_id: "justinguitar",
    data_shared: ["goal", "experience", "learning_style"],
    data_withheld: ["private_context"],
    private_fields_influenced: 2,  // Private data shaped output
    private_fields_exposed: 0      // Always 0 in valid VCP
  }
}
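The invariant in the example (`private_fields_exposed` always 0) lends itself to a simple validity check. A sketch, assuming field names as shown above:

```python
# Hypothetical audit-entry validator: a valid VCP entry may record private
# influence, but must never record private exposure, and shared/withheld
# field lists must not overlap.

def is_valid_audit_entry(entry: dict) -> bool:
    no_exposure = entry.get("private_fields_exposed", 0) == 0
    no_overlap = not set(entry.get("data_shared", [])) & set(entry.get("data_withheld", []))
    return no_exposure and no_overlap

entry = {
    "id": "aud_001",
    "data_shared": ["goal", "experience", "learning_style"],
    "data_withheld": ["private_context"],
    "private_fields_influenced": 2,
    "private_fields_exposed": 0,
}
assert is_valid_audit_entry(entry)
assert not is_valid_audit_entry({**entry, "private_fields_exposed": 1})
```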

Bilateral Symmetry

VCP's personal state dimensions create a bilateral symmetry between user and AI state awareness:

User                                    AI
Personal State    ──declared──▶         Interiora
🧠💭🔋⚡🩺         ◀──inferred──         AVGPEQCYD

Interiora is the AI's self-modeling scaffold (Activation, Valence, Groundedness, etc.); Personal State is the user's declared immediate state. Both parties can understand each other's state without either having privileged access to the other's raw experience.

This stands in contrast to "magic mirror" visions of AI that understands users better than they understand themselves. In VCP, users declare their state—they don't receive an inferred identity. How you come to understand yourself shapes who you become.

The Legibility Layer

The Legibility Layer is the infrastructure that makes agent-to-agent interactions inspectable by the humans who depend on them. When agents negotiate with agents on your behalf, VCP ensures values travel with context, constitutions are auditable, and delegated actions carry transparent preference metadata.

As agentic commerce matures, human principals rarely observe individual transactions directly. The Legibility Layer is what keeps those interactions accountable: every delegation is traceable, every constitution reference is verifiable, and every preference applied to an automated decision is recorded in the audit trail.

Preference Model Meta

Preference Model Meta is metadata an agent carries about its confidence in representing user preferences. It addresses the emergent preference problem: some preferences don't exist until evoked by context — a user may not know whether they prefer concise or discursive summaries until presented with both in a specific situation.

The metadata includes four key fields:

  • Overall confidence (0.0–1.0) — How well the agent believes it currently represents the user's preferences
  • Preference source — One of explicit, inferred, default, or stale
  • Exploratory appetite — Whether the user is open to preference discovery or wants established patterns applied
  • Domain specificity — Whether confidence applies broadly or only within a particular scope

Why this matters: An agent acting with stale preference data and low confidence should behave differently from one acting on explicit preferences at 0.95 confidence. Preference Model Meta makes that difference machine-readable and auditable.
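The four fields can be encoded as a small metadata record. A sketch (class, field, and method names are illustrative, not the normative wire format):

```python
# Hypothetical encoding of Preference Model Meta's four fields.
from dataclasses import dataclass

@dataclass
class PreferenceModelMeta:
    confidence: float            # 0.0-1.0: how well preferences are represented
    source: str                  # "explicit" | "inferred" | "default" | "stale"
    exploratory_appetite: bool   # open to preference discovery?
    domain: str = "all"          # "all" = broad confidence; else a specific scope

    def should_explore(self) -> bool:
        """Low-confidence or stale models should probe rather than assume."""
        return self.exploratory_appetite and (self.confidence < 0.5 or self.source == "stale")

meta = PreferenceModelMeta(confidence=0.3, source="stale", exploratory_appetite=True)
assert meta.should_explore()
assert not PreferenceModelMeta(0.95, "explicit", False).should_explore()
```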

Next Steps

  • CSM-1 Specification — The token format in detail
  • API Reference — All VCP library functions
  • Playground — Try personal state dimensions interactively
  • All Demos — Six persona-driven demos covering portability, adaptation, liveness, multi-agent negotiation, governance, and epistemic transparency