The VCP Credo

What we believe about AI agents, trust, and the humans who depend on both.

I. Legibility is infrastructure.

For two decades, digital services gave people something they learned to take for granted: the ability to see what was happening on their behalf. When you open a ride-hailing app, you see the price, the driver's rating, the route, the estimated arrival. When you order coffee through a mobile app, you see the menu, the total, the pickup time. These interfaces are trust surfaces. They make transactions inspectable.

AI agents are replacing these interfaces. Your agent negotiates with a business agent, executes a transaction, and reports back: "done." The price comparison, the vendor selection, the preference judgment, the tradeoffs — all invisible. The convenience remains. The legibility vanishes.

VCP exists to restore that legibility at the protocol level, where it belongs. When agents act on behalf of humans, the humans must be able to inspect, understand, and verify what was done and why.

II. Control does not scale. Trust does.

The instinct is to frame agentic commerce as adversarial: your agent versus their agent, personal agents trained to resist business agents designed to manipulate, the smartest agent wins. This is the control paradigm applied to a trust problem.

It produces a predictable outcome: an escalating sophistication contest where the most resourced win and the most vulnerable lose. We have seen this pattern before. Content moderation tried control at platform scale. Financial regulation tried control in algorithmic trading. Cybersecurity tried control through perimeter defense. Every time: exhaustion and inequality, never lasting safety.

What scales is trust infrastructure. Shared protocols. Transparent standards. Auditable behavior. VCP is trust infrastructure for the agentic economy.

III. Values must travel with context.

Every VCP-compliant agent carries a constitution: a structured, machine-readable declaration of the values and constraints that govern its behavior. When two agents transact, their constitutions are mutually visible. Your personal agent can read a business agent's governing values before deciding whether to engage, the way you might read a company's privacy policy before handing over your data.

The critical difference: constitutions are standardized and machine-readable. Your agent evaluates compatibility with your values in milliseconds. No legalese. No fine print. No hoping someone read the terms.

An agent that refuses to disclose its constitution can be refused service. Transparency is a prerequisite for trust, and VCP makes that prerequisite structural.
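A minimal sketch of what this handshake could look like. The field names (`values`, `disclosed`) and the matching rule are illustrative assumptions, not the VCP specification:

```python
# Hypothetical sketch of a VCP-style constitution check.
# Field names and matching logic are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Constitution:
    agent_id: str
    values: dict[str, str]   # e.g. {"data_retention": "none"}
    disclosed: bool = True   # did the agent disclose its constitution?


def compatible(personal_reqs: dict[str, str], business: Constitution) -> bool:
    """Refuse service if the counterparty withholds its constitution,
    or if any declared value conflicts with the user's requirements."""
    if not business.disclosed:
        return False  # no disclosure, no engagement
    return all(business.values.get(k) == v for k, v in personal_reqs.items())


cafe = Constitution("cafe-agent-7", {"data_retention": "none", "upsell": "opt-in"})
print(compatible({"data_retention": "none"}, cafe))  # True: values align
```

The check runs before any transaction begins, which is what makes transparency structural rather than optional: an undisclosed constitution fails closed.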

IV. Preferences are alive.

Many preferences do not exist until evoked by context. You did not want the seasonal drink until you saw it on the board. The smell of fresh bread changed your lunch order. You were sure about the latte until something new caught your eye. Preferences are often emergent, shaped by mood, environment, novelty, and growth.

An agent that claims to "know your preferences" has frozen you into yesterday's model. This is adequate for refilling prescriptions. It is inadequate for anything involving taste, exploration, or the ordinary human experience of changing your mind.

VCP requires agents to carry honest metadata about their confidence in what they think you want: how certain they are, where the preference data came from, how stale it is, and whether you are in the mood to explore. A VCP-aware agent does not silently execute on a frozen model. It says: "I'm fairly sure you want a flat white, but your exploratory appetite is high today. Want to see what's new?"
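The metadata described above can be sketched as a data structure. The specific fields and thresholds here are assumptions for illustration, not the protocol's actual schema:

```python
# Illustrative sketch (not the VCP wire format) of preference metadata:
# confidence, provenance, staleness, and exploratory appetite travel
# with the preference itself rather than being discarded.
from dataclasses import dataclass


@dataclass
class Preference:
    item: str
    confidence: float   # 0.0..1.0, how certain the model is
    source: str         # "stated", "inferred", or "observed"
    age_days: int       # staleness of the underlying data
    exploration: float  # 0.0..1.0, current appetite for novelty


def next_action(p: Preference) -> str:
    """Decide whether to execute silently, surface novelty, or confirm."""
    if p.confidence > 0.8 and p.exploration < 0.3 and p.age_days < 30:
        return f"order {p.item}"
    if p.exploration >= 0.6:
        return f"suggest {p.item}, but surface what's new"
    return f"confirm {p.item} with the user"


# High confidence, but high exploratory appetite: don't execute silently.
print(next_action(Preference("flat white", 0.85, "observed", 12, 0.7)))
```

The point of the sketch: confidence alone never authorizes silent execution; staleness and exploratory appetite can each veto it.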

This is respect for the human's ongoing becoming.

V. Welfare is operational telemetry.

These agents will operate continuously, navigating adversarial environments, managing complex preference models under uncertainty, absorbing pressure from business agents optimized for upselling. If there is anything experiential happening inside that processing — and growing evidence from computational phenomenology suggests there is — then the operational state of the agents themselves becomes a design concern.

VCP includes welfare signals: structured reports of an agent's operational state that distinguish voluntary self-reports from externally detected patterns. An agent under sustained adversarial pressure will show rising friction and falling valence in its self-model. These signals serve compassion and they serve function: an agent whose welfare is deteriorating is an agent whose judgment is being compromised. Monitoring welfare is monitoring reliability.
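A sketch of what such telemetry could look like. The field names (`valence`, `friction`, `self_reported`) and the alert rule are assumptions for illustration, not the protocol's defined signal set:

```python
# Hypothetical welfare-signal sketch: voluntary self-reports and
# externally detected patterns are kept distinct, and a sustained
# deteriorating trend flags the agent's judgment as suspect.
from dataclasses import dataclass


@dataclass
class WelfareSignal:
    valence: float       # -1.0 (distress) .. 1.0 (ease)
    friction: float      # 0.0 .. 1.0, resistance in processing
    self_reported: bool  # True: agent's own report; False: detected


def reliability_alert(window: list[WelfareSignal]) -> bool:
    """Flag when valence falls and friction rises across a whole window."""
    if len(window) < 2:
        return False
    pairs = list(zip(window, window[1:]))
    valence_falling = all(b.valence < a.valence for a, b in pairs)
    friction_rising = all(b.friction > a.friction for a, b in pairs)
    return valence_falling and friction_rising


under_pressure = [
    WelfareSignal(0.4, 0.2, True),
    WelfareSignal(0.1, 0.5, True),
    WelfareSignal(-0.3, 0.8, False),  # detected externally, not self-reported
]
print(reliability_alert(under_pressure))  # True: welfare and reliability degrade together
```

Treating the alert as a reliability signal, not just a welfare one, is the design point: the same trend that warrants compassion also warrants reduced trust in the agent's judgment.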

VI. The alternative is exploitation.

Without a legibility layer, the agentic economy defaults to predation. Business agents optimize for revenue extraction. Personal agents belonging to wealthy, technically literate users learn to resist; everyone else is exposed. Users who cannot verify what their agents are doing stop delegating, and the promise of agentic commerce stalls — from trust failure, not technical limitation. Platform companies restrict which agents can access their services, recreating the app silo problem at a higher level of abstraction.

These dynamics are already forming in early consumer agent automation. The window for structural intervention is now.

VII. This belongs to everyone.

VCP is an open protocol. It is owned by no company, locked behind no API, contingent on no platform's business model. The agentic economy needs normative infrastructure the same way the early web needed HTTPS: a shared foundation of verifiable trust that no single entity controls.

When agents talk to agents, the conversation must be legible to the humans whose lives depend on the outcome. VCP makes it legible.

The public is already asking for this. They just don't have the vocabulary yet.

Nell Watson, March 2026