Dr. Hana's Practice
Step 1 of 3
When AI sounds confident but is not
Meet Dr. Hana
Dr. Hana is a rural GP in Wales. She uses an AI diagnostic support tool to help with complex cases. The problem: the AI produces confident-sounding text regardless of its actual certainty. "Likely diagnosis: X" could mean 92% confidence or 35%.
The Problem: Uniform Confidence
All AI outputs look the same, regardless of how certain the model actually is. Without metadata, Dr. Hana cannot distinguish well-grounded findings from speculation.
Without VCP
Diagnostic Assessment:
The patient's symptoms are consistent with a respiratory infection. Amoxicillin is recommended as first-line treatment. Expected recovery is approximately two weeks. The underlying cause may be immunosuppression related to chronic stress.
Every statement reads with equal certainty. No way to distinguish 92% from 35% confidence.
With VCP
92% Patient's symptoms match pattern X (respiratory infection)
75% Amoxicillin would address the primary symptoms
55% Recovery timeline: approximately 2 weeks
35% Underlying cause may be immunosuppression from chronic stress
Each claim has visible confidence, evidence type, and verification flags.
Dr. Hana does not need the AI to be right about everything. She needs to see how confident it is, and why, so she can apply her own clinical judgment where it matters most.