Belief Calibration
How AI systems track epistemic states and calibrate confidence against external feedback.
Knowledge States
[Interactive panel: select a belief to view its epistemic context.]
Why Belief Calibration Matters
Well-calibrated AI systems know both what they know and what they don't. For each belief, VCP tracks four properties (a minimal data model is sketched after this list):
- Confidence levels — How certain the system is about each claim
- Evidence sources — Where the belief comes from (training, inference, external lookup)
- Calibration history — How well past confidence matched reality
- Uncertainty type — Whether the uncertainty is resolvable or fundamental
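To make these fields concrete, here is a minimal sketch of what a single belief record might look like. The names below (Belief, EvidenceSource, UncertaintyType, outcomes) are illustrative assumptions, not the actual VCP schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class EvidenceSource(Enum):
    """Where a belief comes from (mirrors the list above)."""
    TRAINING = "training"
    INFERENCE = "inference"
    EXTERNAL_LOOKUP = "external_lookup"


class UncertaintyType(Enum):
    RESOLVABLE = "resolvable"    # more evidence could settle it
    FUNDAMENTAL = "fundamental"  # irreducible, e.g. introspective claims


@dataclass
class Belief:
    claim: str
    confidence: float            # stated probability in [0.0, 1.0]
    source: EvidenceSource
    uncertainty: UncertaintyType
    introspective: bool = False  # rendered with the "?" marker
    outcomes: list[bool] = field(default_factory=list)  # calibration history

    def marker(self) -> str:
        """Flag claims about internal states that cannot be externally verified."""
        return "?" if self.introspective else ""
```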
The ? marker indicates introspective uncertainty—claims about internal states
that cannot be fully verified from inside the system.
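Calibration history lends itself to a concrete check: compare each stated confidence with the outcome that was eventually observed. The sketch below uses the Brier score, a standard calibration metric, over the hypothetical Belief records defined above; introspective (?) beliefs are excluded because there is no external feedback to score them against.

```python
def brier_score(beliefs: list[Belief]) -> float:
    """Mean squared gap between stated confidence and observed outcome.

    0.0 means perfectly confident and correct; 1.0 means maximally wrong.
    Introspective beliefs are skipped: they have no external ground truth.
    """
    pairs = [
        (b.confidence, outcome)
        for b in beliefs
        if not b.introspective
        for outcome in b.outcomes
    ]
    if not pairs:
        raise ValueError("no externally verifiable outcomes to score")
    return sum((p - float(y)) ** 2 for p, y in pairs) / len(pairs)
```

A well-calibrated system drives this score down as feedback accumulates; a persistently high score signals systematic over- or under-confidence.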