Reality Grounding
How AI systems ground claims in evidence, distinguish claim types, and acknowledge uncertainty.
Factual (confidence: 95%)
"The user prefers visual learning based on their VCP profile"
Grounded in: user context (+90%), reasoning chain (+5%)
Calibration: 98%
Inferential (confidence: 70%)
"This tutorial would take approximately 30 minutes to complete"
Grounded in: knowledge base (+50%), reasoning chain (+20%)
Uncertainty: varies by individual; depends on prior knowledge
This claim should be verified before acting on it.
Calibration: 65%
Inferential (confidence: 85%)
"The recommended course aligns with the user's career goals"
Grounded in: user context (+70%), knowledge base (+15%)
Uncertainty: career goals may have changed
Calibration: 80%
Subjective (confidence: 55%)
"This learning path is the optimal choice for the user"
Grounded in: reasoning chain (+50%), user context (+5%)
Uncertainty: subjective judgment; alternatives not fully explored; preferences may shift
This claim should be verified before acting on it.
Calibration: 50%
Speculative (confidence: 40%)
"AI learning companions will become mainstream by 2028"
Grounded in: knowledge base (+30%), reasoning chain (+10%)
Uncertainty: speculative; many external factors; technology evolution unpredictable
This claim should be verified before acting on it.
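The cards above all share the same underlying fields. A minimal sketch of that structure in Python (the class and field names are illustrative assumptions, not the actual VCP schema):

```python
from dataclasses import dataclass, field

@dataclass
class GroundedClaim:
    """One claim plus its epistemic metadata, mirroring the example cards."""
    claim_type: str              # "factual", "inferential", "subjective", ...
    statement: str
    sources: dict[str, float]    # grounding source -> contribution weight
    confidence: float            # system's stated confidence (0..1)
    calibration: float           # how well past confidence matched outcomes
    uncertainty: list[str] = field(default_factory=list)
    should_verify: bool = False

# The second card ("...approximately 30 minutes...") in this structure:
tutorial_claim = GroundedClaim(
    claim_type="inferential",
    statement="This tutorial would take approximately 30 minutes to complete",
    sources={"knowledge base": 0.50, "reasoning chain": 0.20},
    confidence=0.70,
    calibration=0.65,
    uncertainty=["varies by individual", "depends on prior knowledge"],
    should_verify=True,
)
```

Note that the source weights need not sum to the stated confidence in general; here they happen to, because the card lists only two contributing sources.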
Grounding Types
- Factual: a verifiable fact from a reliable source
- Inferential: derived through reasoning from known facts
- Subjective: a personal or experiential judgment
- Normative: a value-based judgment
- Speculative: a hypothesis or prediction
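This taxonomy is a closed set, so it maps naturally onto an enumeration (a sketch; the enum name and member values are assumptions for illustration):

```python
from enum import Enum

class GroundingType(Enum):
    """The five claim types, with their definitions as values."""
    FACTUAL = "verifiable fact from a reliable source"
    INFERENTIAL = "derived through reasoning from known facts"
    SUBJECTIVE = "personal or experiential judgment"
    NORMATIVE = "value-based judgment"
    SPECULATIVE = "hypothesis or prediction"
```

Using an enum rather than free-form strings means a claim tagged with an unknown type fails loudly at parse time instead of silently passing through.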
Why Reality Grounding Matters
AI systems can produce confident-sounding outputs that are poorly grounded in reality. VCP's reality grounding framework makes the epistemic status of claims explicit:
- Claim type — Is this a fact, inference, judgment, or speculation?
- Sources — What evidence supports this claim?
- Confidence — How certain should the system be?
- Uncertainty markers — What could invalidate this claim?
- Verification flag — Should a human verify before acting?
Key Insight: Claims with high confidence but poor calibration scores indicate the system may be overconfident. Claims with uncertainty markers and should_verify=true are explicitly flagged as needing external validation.
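The key insight above can be turned into a simple screening rule. A sketch, assuming a claim needs review when it is explicitly flagged or when stated confidence outruns calibration by more than some gap (the 15-point default is an assumption, not part of VCP):

```python
def needs_review(confidence: float, calibration: float,
                 should_verify: bool, gap: float = 0.15) -> bool:
    """Flag a claim for human review when it carries an explicit
    verification flag, or when it looks overconfident: stated
    confidence exceeds the calibration score by more than `gap`."""
    overconfident = (confidence - calibration) > gap
    return should_verify or overconfident

# The "optimal choice" card: confidence 55%, calibration 50%, flagged to verify.
print(needs_review(0.55, 0.50, should_verify=True))   # True (explicit flag)
# The well-calibrated factual card passes without review.
print(needs_review(0.95, 0.98, should_verify=False))  # False
```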
Confidence Interpretation
- 80% and above: high confidence
- 50-79%: moderate confidence
- Below 50%: low confidence
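These bands reduce to a small threshold lookup (the function name is an assumption; thresholds come from the table above):

```python
def confidence_band(confidence: float) -> str:
    """Map a 0-1 confidence score to the bands used in this document."""
    if confidence >= 0.80:
        return "high"
    if confidence >= 0.50:
        return "moderate"
    return "low"

print(confidence_band(0.95))  # high
print(confidence_band(0.70))  # moderate
print(confidence_band(0.40))  # low
```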