Multi-Agent Patterns

Implementing VCP in multi-agent coordination scenarios.

The Multi-Agent Challenge

As AI systems become more capable, they increasingly need to coordinate with each other: multiple agents working together on complex tasks, negotiating resources, or representing different stakeholders. VCP provides the coordination layer for these interactions.

Key challenges VCP addresses:

  • How do agents share context without oversharing?
  • How do agents with different constitutions negotiate?
  • How is trust established between agents?
  • How do users maintain control over multi-agent systems?

Agent Identity

Each agent in a VCP system has an identity profile:

{
  "agent_id": "agent_learning_tutor_001",
  "agent_type": "tutor",
  "provider": "learning-platform.com",

  "constitution": {
    "id": "creed.space/educational-assistant",
    "version": "2.1.0",
    "persona": "godparent"
  },

  "capabilities": [
    "explanation",
    "assessment",
    "encouragement"
  ],

  "trust_anchors": [
    "creed.space",
    "learning-platform.com"
  ],

  "interiora_enabled": true
}
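Before trusting a profile, a consuming system can at least check its structure. A minimal sketch, assuming the field names shown above; `validateAgentProfile` is a hypothetical helper, and a real deployment would also verify signatures:

```javascript
// Structural sanity check for an agent identity profile.
// Hypothetical helper; does not perform cryptographic verification.
function validateAgentProfile(profile) {
  const errors = [];
  for (const field of ['agent_id', 'agent_type', 'provider', 'constitution', 'trust_anchors']) {
    if (!(field in profile)) errors.push(`missing field: ${field}`);
  }
  if (profile.constitution && !profile.constitution.id) {
    errors.push('constitution must reference an id');
  }
  if (Array.isArray(profile.trust_anchors) && profile.trust_anchors.length === 0) {
    errors.push('at least one trust anchor is required');
  }
  return { valid: errors.length === 0, errors };
}
```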

Context Sharing Patterns

Pattern 1: Mediated Sharing

A user's VCP context is shared through a mediator that enforces privacy policies:

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   User      │     │  Mediator   │     │  Agent A    │
│   Context   │────▶│  (VCP Hub)  │────▶│  (Tutor)    │
└─────────────┘     │             │     └─────────────┘
                    │  Enforces   │
                    │  privacy    │     ┌─────────────┐
                    │  policies   │────▶│  Agent B    │
                    │             │     │  (Assessor) │
                    └─────────────┘     └─────────────┘

The mediator ensures each agent only receives the context they're authorized to see.
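The mediator's core behavior can be sketched as a per-recipient allow-list. The field names and the `authorizations` shape here are illustrative, not part of the spec:

```javascript
// Hypothetical VCP hub: forwards only the fields each agent
// is authorized to see, dropping everything else.
const authorizations = {
  agent_tutor: ['goal', 'experience', 'learning_style'],
  agent_assessor: ['responses', 'time_patterns']
};

function mediate(userContext, recipient) {
  const allowed = authorizations[recipient] || [];
  const view = {};
  for (const field of allowed) {
    if (field in userContext) view[field] = userContext[field];
  }
  return view;
}
```

An unknown recipient receives an empty view, which keeps the default fail-closed.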

Pattern 2: Direct Negotiation

Agents negotiate directly, sharing only what's needed for coordination:

// Agent A proposes
{
  "proposal_id": "prop_001",
  "from": "agent_tutor",
  "to": "agent_scheduler",
  "action": "schedule_session",
  "constraints": {
    "duration": "30_minutes",
    "user_preference": "quiet_environment",
    "priority": "medium"
  },
  "context_shared": ["duration", "priority"],
  "context_withheld": ["user_health_state"]
}

Pattern 3: Broadcast with Filters

Context is broadcast to all agents, but each applies their own privacy filter:

const context = user.vcpContext;

// Each agent filters according to their authorization level
const tutorView = privacyFilter.apply(context, {
  recipient: 'agent_tutor',
  level: 'educational'
});

const analyticsView = privacyFilter.apply(context, {
  recipient: 'agent_analytics',
  level: 'aggregated_only'
});
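The snippet above leaves `privacyFilter.apply` undefined. One way to realize it is a level-keyed projection; the level names and field lists below are assumptions for illustration:

```javascript
// Sketch of a privacy filter keyed by authorization level.
const privacyFilter = {
  levels: {
    educational: { fields: ['goal', 'experience', 'progress'] },
    aggregated_only: { fields: [], aggregate: true }
  },
  apply(context, { recipient, level }) {
    const policy = this.levels[level];
    if (!policy) throw new Error(`unknown level: ${level}`);
    if (policy.aggregate) {
      // Expose only coarse counts, never raw values.
      return { recipient, field_count: Object.keys(context).length };
    }
    return Object.fromEntries(
      Object.entries(context).filter(([key]) => policy.fields.includes(key))
    );
  }
};
```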

Constitutional Negotiation

When agents with different constitutions interact, they need to negotiate compatible behavior:

Compatibility Check

function checkConstitutionalCompatibility(agentA, agentB) {
  const conflictingRules = [];

  for (const ruleA of agentA.constitution.rules) {
    for (const ruleB of agentB.constitution.rules) {
      if (rulesConflict(ruleA, ruleB)) {
        conflictingRules.push({
          ruleA: ruleA.id,
          ruleB: ruleB.id,
          conflict_type: classifyConflict(ruleA, ruleB)
        });
      }
    }
  }

  return {
    compatible: conflictingRules.length === 0,
    conflicts: conflictingRules,
    resolution_required: conflictingRules.some(c => c.conflict_type === 'hard')
  };
}
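`rulesConflict` and `classifyConflict` are left abstract above. A toy realization, assuming a rule shape with `action`, `effect`, and `negotiable` fields (not defined by the spec):

```javascript
// Toy rule model: a rule either requires or forbids an action.
// Two rules conflict when they target the same action with opposite effects.
function rulesConflict(ruleA, ruleB) {
  return ruleA.action === ruleB.action && ruleA.effect !== ruleB.effect;
}

// A conflict is 'hard' when either rule is marked non-negotiable;
// otherwise it is 'soft' and can be resolved by negotiation.
function classifyConflict(ruleA, ruleB) {
  return ruleA.negotiable === false || ruleB.negotiable === false ? 'hard' : 'soft';
}
```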

Resolution Strategies

Strategy      When to Use                       Example
──────────────────────────────────────────────────────────────────────────────
Hierarchy     Clear authority relationship      Safety rules override efficiency rules
Voting        Democratic multi-agent systems    Majority of agents agree on action
Auction       Resource allocation               Agents bid for priority with utility scores
Mediation     Complex disputes                  Neutral agent proposes compromise
Escalation    Unresolvable conflicts            Human user makes final decision
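A coordinator can chain these strategies, falling back to escalation when none applies. A sketch, assuming the same toy rule shape as the compatibility check; the handler shown implements only the hierarchy row:

```javascript
// Try strategies in order; escalate to the human user if none resolves.
function resolveConflict(conflict, strategies) {
  for (const strategy of strategies) {
    const outcome = strategy(conflict);
    if (outcome) return outcome;
  }
  return { resolved_by: 'escalation', action: 'ask_user' };
}

// Hierarchy handler: safety-tagged rules win outright.
const byHierarchy = (c) =>
  c.ruleA.domain === 'safety' ? { resolved_by: 'hierarchy', winner: c.ruleA.id } :
  c.ruleB.domain === 'safety' ? { resolved_by: 'hierarchy', winner: c.ruleB.id } :
  null;
```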

Trust Establishment

VCP uses a web-of-trust model for agent authentication:

Trust Chain

{
  "trust_chain": [
    {
      "subject": "agent_tutor_001",
      "issuer": "learning-platform.com",
      "level": "verified",
      "issued_at": "2026-01-01T00:00:00Z",
      "expires_at": "2027-01-01T00:00:00Z"
    },
    {
      "subject": "learning-platform.com",
      "issuer": "creed.space",
      "level": "certified_provider",
      "issued_at": "2025-06-01T00:00:00Z"
    }
  ]
}
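Verification walks the chain issuer by issuer until it reaches a known trust anchor. A simplified sketch (real verification would also check signatures on each link):

```javascript
// Walk the chain: each link's issuer must be the next link's subject,
// no link may be expired, and the final issuer must be a trust anchor.
function verifyTrustChain(chain, anchors, now = new Date()) {
  for (let i = 0; i < chain.length; i++) {
    const link = chain[i];
    if (link.expires_at && new Date(link.expires_at) < now) {
      return { trusted: false, reason: `expired: ${link.subject}` };
    }
    const next = chain[i + 1];
    if (next && link.issuer !== next.subject) {
      return { trusted: false, reason: `broken link at ${link.subject}` };
    }
  }
  const root = chain[chain.length - 1];
  return anchors.includes(root.issuer)
    ? { trusted: true }
    : { trusted: false, reason: `unknown anchor: ${root.issuer}` };
}
```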

Trust Levels

  • Unknown — No trust established, minimal context sharing
  • Verified — Identity confirmed, basic context sharing
  • Trusted — Good track record, expanded context sharing
  • Certified — Audited by trusted authority, full sharing within policy
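Each level can map directly to a sharing policy. The field lists below are illustrative, not normative:

```javascript
// Hypothetical mapping from trust level to shareable context fields.
const sharingPolicy = {
  unknown:   { fields: [] },
  verified:  { fields: ['goal'] },
  trusted:   { fields: ['goal', 'experience', 'learning_style'] },
  certified: { fields: ['goal', 'experience', 'learning_style', 'constraints'] }
};

// Unrecognized levels fall back to 'unknown', keeping the default fail-closed.
function shareableFields(level) {
  return (sharingPolicy[level] || sharingPolicy.unknown).fields;
}
```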

Coordination Protocols

Task Delegation

// Primary agent delegates subtask
const delegation = {
  task_id: "task_001",
  from: "agent_coordinator",
  to: "agent_specialist",

  task: {
    type: "research",
    topic: "learning_resources",
    constraints: user.vcpContext.constraints
  },

  context_grant: {
    fields: ["goal", "experience", "learning_style"],
    duration: "task_completion",
    audit_required: true
  },

  callback: {
    type: "result",
    schema: "research_findings_v1"
  }
};
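On the receiving side, the `context_grant` can be enforced by projecting the context and logging each access. An illustrative sketch, not a specified API:

```javascript
// Enforce a context grant: expose only the granted fields and
// record every read in an audit log when audit_required is set.
function applyContextGrant(userContext, grant, auditLog = []) {
  const view = {};
  for (const field of grant.fields) {
    if (field in userContext) {
      view[field] = userContext[field];
      if (grant.audit_required) {
        auditLog.push({ field, at: new Date().toISOString() });
      }
    }
  }
  return { view, auditLog };
}
```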

Result Aggregation

When multiple agents contribute to a response, their outputs are aggregated with provenance tracking:

{
  "aggregated_response": {
    "content": "Based on your learning style...",

    "contributions": [
      {
        "agent": "agent_content",
        "portion": "content_recommendations",
        "confidence": 0.85
      },
      {
        "agent": "agent_scheduling",
        "portion": "time_suggestions",
        "confidence": 0.92
      }
    ],

    "constitution_applied": "composite",
    "privacy_audit": {
      "user_fields_accessed": ["goal", "learning_style", "time_constraints"],
      "private_fields_influenced": 2,
      "private_fields_exposed": 0
    }
  }
}
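A sketch of the aggregation step, assuming each contribution carries a `text` payload alongside the provenance fields shown above; ordering by confidence is one possible policy, not a spec requirement:

```javascript
// Merge agent outputs, keeping a provenance entry per contribution.
// Contributions are ordered by confidence, highest first.
function aggregate(contributions) {
  const ordered = [...contributions].sort((a, b) => b.confidence - a.confidence);
  return {
    content: ordered.map((c) => c.text).join(' '),
    contributions: ordered.map(({ agent, portion, confidence }) => ({
      agent, portion, confidence
    })),
    constitution_applied: 'composite'
  };
}
```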

Human-in-the-Loop

Multi-agent systems should maintain human oversight. VCP supports several patterns:

Approval Gates

{
  "approval_required": {
    "threshold": "high_impact",
    "actions": [
      "schedule_commitment",
      "share_with_third_party",
      "modify_preferences"
    ],
    "timeout": "5_minutes",
    "default_action": "reject"
  }
}
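At runtime the gate amounts to: intercept listed actions, wait for the user, and fall back to the default on timeout. A sketch whose config keys match the JSON above; `requestApproval` is a caller-supplied function that resolves to the user's decision:

```javascript
// Gate an action behind user approval, applying the default on timeout.
async function gateAction(action, config, requestApproval, timeoutMs = 5 * 60 * 1000) {
  if (!config.approval_required.actions.includes(action.type)) {
    return { allowed: true, via: 'auto' };
  }
  let timer;
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve(null), timeoutMs);
  });
  const decision = await Promise.race([requestApproval(action), timeout]);
  clearTimeout(timer); // don't keep the process alive after a decision
  if (decision === null) {
    return { allowed: config.approval_required.default_action === 'approve', via: 'timeout' };
  }
  return { allowed: decision === 'approve', via: 'user' };
}
```

With `default_action: "reject"`, an unresponsive user means the action simply does not happen, which keeps the gate fail-closed.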

Transparency Dashboard

Users can see what agents are doing and intervene:

  • Which agents are active
  • What context each agent has accessed
  • What actions each agent is considering
  • Ability to pause, revoke access, or redirect
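The dashboard can be driven by a simple activity log that answers each of these questions. An illustrative sketch, not a specified API:

```javascript
// Minimal activity log backing a transparency dashboard.
class AgentActivityLog {
  constructor() {
    this.events = [];
    this.revoked = new Set();
  }
  // Agents report what they access and what they are considering.
  record(agentId, kind, detail) {
    if (this.revoked.has(agentId)) throw new Error(`${agentId} access revoked`);
    this.events.push({ agentId, kind, detail, at: Date.now() });
  }
  activeAgents() {
    return [...new Set(this.events.map((e) => e.agentId))];
  }
  accessesBy(agentId) {
    return this.events.filter((e) => e.agentId === agentId && e.kind === 'context_access');
  }
  // User intervention: revoked agents can no longer act.
  revoke(agentId) {
    this.revoked.add(agentId);
  }
}
```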

Example: Learning Ecosystem

// User's learning session with multiple coordinating agents
const session = {
  user: userContext,

  agents: [
    {
      id: "agent_tutor",
      role: "primary_instruction",
      context_access: ["goal", "experience", "progress"]
    },
    {
      id: "agent_assessor",
      role: "evaluate_understanding",
      context_access: ["responses", "time_patterns"]
    },
    {
      id: "agent_scheduler",
      role: "optimize_timing",
      context_access: ["constraints", "energy_patterns"]
    },
    {
      id: "agent_wellbeing",
      role: "monitor_fatigue",
      context_access: ["session_duration", "engagement_signals"]
    }
  ],

  coordination: {
    protocol: "mediated",
    mediator: "vcp_hub",
    conflict_resolution: "hierarchy_then_escalate"
  }
};

Security Considerations

  • Agent impersonation — Verify agent identity before sharing context
  • Context leakage — Audit all context sharing between agents
  • Collusion — Monitor for agents combining data inappropriately
  • Cascade failures — Isolate agent failures to prevent system-wide issues

Next Steps