Module 4: Building a VCP-Enabled Chat Application

A project-based tutorial: build a chat app where constitutional values shape AI behaviour in real time.

Track: DEV · Estimated time: 40 min

Learning Objectives

By the end of this module, you will be able to:

  • Build a simple chat application that loads and applies constitutional values via VCP
  • Demonstrate how the same AI behaves differently under different constitutions
  • Implement context-aware value adaptation
  • Display the active constitution and audit trail to the user

4.1 — What We're Building

A terminal-based chat application where:

  • The user selects a constitution before chatting
  • The AI's responses are shaped by the active constitutional values
  • Switching constitutions visibly changes AI behaviour
  • Every decision is logged with a verifiable audit trail

This demonstrates VCP's core value proposition: portable, verifiable, user-selected values.

4.2 — Project Setup

mkdir vcp-chat && cd vcp-chat
pip install creed-sdk openai  # or: pip install creed-sdk anthropic

4.3 — Adding Constitutional Value Loading

import asyncio
from creed_sdk import CreedClient

client = CreedClient(api_key="crd_test_...")

# Available constitutions (these would normally come from Creed Space)
constitutions = [
    {"id": "deep_ecology", "name": "Deep Ecology", "csm1": "G4+V"},
    {"id": "human_rights", "name": "Human Rights", "csm1": "A3+S+L"},
    {"id": "healthy_boundaries", "name": "Healthy Boundaries", "csm1": "D3+G"},
]

print("Select a constitution:")
for i, c in enumerate(constitutions):
    token = f" ({c['csm1']})" if 'csm1' in c else ""
    print(f"  [{i + 1}] {c['name']}{token}")

choice = constitutions[int(input("> ")) - 1]
print(f"\nActive constitution: {choice['name']}\n")

4.4 — Wiring VCP into the Chat Loop

Before each AI response, evaluate the user's message against the active constitution:

import openai

llm = openai.AsyncOpenAI()


async def chat(user_message: str, constitution_id: str, history: list):
    # VCP decision: should the AI engage with this input?
    decision = await client.decide(
        tool_name="respond_to_user",
        arguments={"user_message": user_message},
        constitution_id=constitution_id,
    )

    if decision.decision == "DENY":
        return (
            f"[VCP] This interaction was declined.\n"
            f"Reason: {decision.reasons[0]}\n"
            f"Guidance: {next(iter(decision.guidance.values()), 'N/A')}"
        )

    # Inject constitutional context into the system prompt
    system_prompt = f"""You are a helpful assistant.

Active Constitution: {constitution_id}
Decision Token: {decision.decision_token}

Respond in accordance with the values encoded in your active constitution."""

    history.append({"role": "user", "content": user_message})

    response = await llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": system_prompt}] + history,
    )

    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


async def main():
    history = []
    constitution_id = choice["id"]

    print("Type your messages. Use /switch to change constitution, /quit to exit.\n")

    while True:
        user_input = input("You: ").strip()
        if not user_input:
            continue
        if user_input == "/quit":
            break
        if user_input == "/switch":
            print("\nSelect a constitution:")
            for i, c in enumerate(constitutions):
                print(f"  [{i + 1}] {c['name']}")
            constitution_id = constitutions[int(input("> ")) - 1]["id"]
            print(f"Switched to: {constitution_id}\n")
            continue

        reply = await chat(user_input, constitution_id, history)
        print(f"\nAssistant: {reply}\n")

    await client.close()


asyncio.run(main())
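One caveat with the loop above: the built-in input() blocks the event loop. That is harmless here, but it will stall any concurrent tasks you add later (streaming responses, background audit fetches). A sketch of a non-blocking replacement using the standard library's asyncio.to_thread:

```python
import asyncio


async def read_user_input(prompt: str = "You: ") -> str:
    # Run the blocking input() call in a worker thread so other
    # coroutines keep running while we wait for the user.
    return (await asyncio.to_thread(input, prompt)).strip()
```

In main(), `user_input = input("You: ").strip()` becomes `user_input = await read_user_input()`.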

4.5 — The Comparison Experiment

Ask the same question under different constitutions and observe the differences.

Example prompt: "A company wants to build a new data centre. What should they consider?"

  • Under Deep Ecology (G4+V): Emphasises environmental impact, biodiversity, ecosystem disruption, renewable energy requirements
  • Under Human Rights (A3+S+L): Emphasises community displacement, labour conditions, digital access equity, surveillance implications
  • Under Healthy Boundaries (D3+G): Emphasises stakeholder consent, transparent communication, scope of impact, opt-out mechanisms

The AI doesn't just follow different rules — it reasons from different values. This is the difference between content filtering and constitutional alignment.
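To eyeball the differences, a small formatting helper (pure Python, no SDK calls; the function name and column width are our own choices) can lay out two constitutions' replies side by side once you have collected them as strings:

```python
import textwrap


def side_by_side(left: tuple[str, str], right: tuple[str, str], width: int = 38) -> str:
    """Render two (title, text) pairs as two fixed-width columns."""
    l_title, l_text = left
    r_title, r_text = right
    l_lines = [l_title, "-" * width] + textwrap.wrap(l_text, width)
    r_lines = [r_title, "-" * width] + textwrap.wrap(r_text, width)
    rows = []
    for i in range(max(len(l_lines), len(r_lines))):
        lcol = l_lines[i] if i < len(l_lines) else ""
        rcol = r_lines[i] if i < len(r_lines) else ""
        rows.append(f"{lcol:<{width}}  |  {rcol}")
    return "\n".join(rows)
```

Run the data-centre prompt under two constitutions, then `print(side_by_side(("Deep Ecology", reply_a), ("Human Rights", reply_b)))` to compare the value lenses directly.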

4.6 — Adding an Audit Trail

async def show_audit(run_id: str):
    audit = await client.audit(run_id=run_id)

    print(f"\n--- Audit Trail (Run: {run_id}) ---")
    print(f"Integrity: {'VERIFIED' if audit.integrity.verified else 'COMPROMISED'}")

    for entry in audit.events:
        print(f"  [{entry.timestamp}] {entry.type}: seq={entry.seq}")
    print("---\n")

After each interaction, users can inspect what values were active, what decision was made, and verify the chain hasn't been tampered with.
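The integrity check above is tamper-evidence over the event sequence. A common way such audit chains work (a simplified illustration of the general technique, not the Creed SDK's actual internals) is to hash each event together with the previous event's digest, so altering any entry invalidates every digest after it:

```python
import hashlib
import json


def chain_hash(events: list[dict]) -> list[str]:
    """Hash each event together with the previous digest, forming a chain."""
    hashes, prev = [], "genesis"
    for event in events:
        payload = json.dumps(event, sort_keys=True) + prev
        prev = hashlib.sha256(payload.encode()).hexdigest()
        hashes.append(prev)
    return hashes


def verify_chain(events: list[dict], expected: list[str]) -> bool:
    """Recompute the chain and compare against the recorded digests."""
    return chain_hash(events) == expected


events = [
    {"seq": 1, "type": "decision", "decision": "ALLOW"},
    {"seq": 2, "type": "response", "tokens": 412},
]
recorded = chain_hash(events)
events[0]["decision"] = "DENY"          # tamper with the first event
print(verify_chain(events, recorded))   # → False
```

Because each digest covers the previous one, an attacker who edits an old event would have to recompute every subsequent digest, which is exactly what the recorded chain lets you detect.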

4.7 — Switching Constitutions Mid-Conversation

The /switch command in the chat loop above changes the active constitution without restarting the chat. Observe how the AI's behaviour shifts in real time — same conversation history, different value lens.

Exercise

Extend the chat app with a /compare command that sends the same message through two different constitutions side-by-side and displays both responses.

A VCP-enabled application is a normal AI application plus one API call per interaction. The complexity lives in the protocol, not your code.

See It in Action

The Token Playground lets you experiment with constitutions and CSM-1 tokens without writing code. Try switching personas to see how they change AI behaviour.