The Architecture of Proof Glossary: Precise Definitions
AUC (Area Under the Curve)
- Key Distinction: AUC measures performance (what happened); Causal Traces measure logic (why it happened).
- Governance Action: Useful for lab validation but insufficient for Tier 4 Autonomy.
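As a quick illustration, here is a minimal sketch of computing AUC with scikit-learn's `roc_auc_score`; the labels and scores are toy values.

```python
# Minimal sketch: AUC for a binary classifier with scikit-learn (toy data).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]                 # observed outcomes
y_score = [0.1, 0.4, 0.8, 0.35, 0.9, 0.2]   # model scores

# AUC summarizes ranking quality across all thresholds: it tells you *that*
# the model discriminates, not *why* any single case was scored that way.
print(roc_auc_score(y_true, y_score))
```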
Calibration Curve
- Key Distinction: Accuracy is about being "right"; Calibration is about being "honest" with uncertainty.
- Governance Action: A prerequisite for Evidence-Based Promotion into Tier 2+ autonomy.
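A minimal sketch of reading a calibration curve with scikit-learn's `calibration_curve` (toy data; the bin count is illustrative):

```python
# Minimal sketch: comparing predicted probability to observed frequency.
from sklearn.calibration import calibration_curve

y_true = [0, 1, 1, 0, 1, 1, 0, 0, 1, 1]
y_prob = [0.2, 0.9, 0.7, 0.3, 0.8, 0.6, 0.1, 0.4, 0.95, 0.85]

# A calibrated model is "honest": its 0.8 predictions come true ~80% of
# the time. Large gaps between the columns below signal overconfidence.
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=5)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")
```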
Causal Drift
- Key Distinction: Unlike Covariate Drift (data changes), Causal Drift is a change in the rules of the world.
- Governance Action: Triggers an automated [Tier Downgrade](/control-tiers).
- Read More: The AI Maturity Model
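One illustrative heuristic, not the framework's prescribed detector: watch for the input-to-outcome relationship degrading even when the inputs themselves look stable. The function name and the 0.1 margin below are hypothetical.

```python
# Hypothetical causal-drift check: inputs look the same, but conditional
# error has risen, suggesting the rules of the world have changed.
import numpy as np

def causal_drift_score(y_true_recent, y_pred_recent, y_true_base, y_pred_base):
    """Gap between current and validation-time conditional error."""
    err_recent = np.mean(np.array(y_true_recent) != np.array(y_pred_recent))
    err_base = np.mean(np.array(y_true_base) != np.array(y_pred_base))
    return err_recent - err_base  # positive gap: the rules may have changed

# Illustrative margin; in the framework this would trigger a tier downgrade.
if causal_drift_score([1, 0, 1, 1], [0, 0, 0, 1], [1, 0, 1, 1], [1, 0, 1, 1]) > 0.1:
    print("Causal drift suspected: downgrade the control tier")
```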
Causal Trace
- Key Distinction: Standard logs are observational; Causal Traces are forensic.
- Governance Action: Provides the 4-minute root cause diagnosis required for [Stage 4 Maturity](/stage-4-maturity).
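A hypothetical sketch of what a causal trace record might capture; the `CausalTrace` fields are illustrative, not the framework's schema.

```python
# Hypothetical trace record: each decision carries the exact inputs, the
# ordered logic path that fired, and enough state to replay the outcome.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CausalTrace:
    decision_id: str
    model_version: str
    inputs: dict        # exact feature values at decision time
    logic_path: list    # ordered rules/branches that fired
    output: str
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

trace = CausalTrace(
    decision_id="txn-8841",
    model_version="risk-v3.2",
    inputs={"amount": 912.50, "country": "DE"},
    logic_path=["amount > 500", "country in EU", "route: manual_review"],
    output="manual_review",
)
```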
Control Tiers
- Tier 1 (Observe): AI monitors and logs only.
- Tier 2 (Advise): AI suggests an action; human must approve.
- Tier 3 (Act & Notify): AI acts autonomously but alerts human instantly.
- Tier 4 (Full Autonomy): AI acts within guardrails; human reviews via periodic audit.
- Governance Action: Requires Evidence-Based Promotion to move between tiers.
- Read More: Control Tiers for AI-Enabled Processes
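A minimal sketch of the four tiers as code; the enum and helper below are illustrative names, not a published API.

```python
# Illustrative encoding of the control tiers with a simple gate check.
from enum import IntEnum

class ControlTier(IntEnum):
    OBSERVE = 1         # monitor and log only
    ADVISE = 2          # suggest an action; human approves
    ACT_AND_NOTIFY = 3  # act autonomously, alert a human instantly
    FULL_AUTONOMY = 4   # act within guardrails; periodic audit

def may_act_without_approval(tier: ControlTier) -> bool:
    """Tier 3 and above may execute before a human signs off."""
    return tier >= ControlTier.ACT_AND_NOTIFY

print(may_act_without_approval(ControlTier.ADVISE))  # False
```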
Drift (Data/Model)
- Key Distinction: Standard Drift is a change in covariates (data patterns); Causal Drift is a change in the underlying logic or rules of the world.
- Governance Action: Triggers a mandatory performance review and potential [Tier Downgrade](/control-tiers).
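For contrast with causal drift, a minimal covariate-drift check using SciPy's two-sample Kolmogorov-Smirnov test (toy values; the significance threshold is illustrative):

```python
# Minimal sketch: flag a shift in a feature's distribution. This detects
# *data* changes only; it says nothing about whether the rules changed.
from scipy.stats import ks_2samp

baseline = [12.1, 13.4, 11.8, 12.9, 13.0, 12.5]  # feature at validation
live = [18.2, 17.9, 19.1, 18.5, 17.4, 18.8]      # same feature in production

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    print("Covariate drift detected: trigger performance review")
```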
Evidence-Based Promotion
In the Architecture of Proof framework, Evidence-Based Promotion is the formal governance bridge between a "lab-safe" pilot and "high-stakes" production autonomy. Rather than promoting an AI system based on static accuracy scores or arbitrary project deadlines, this protocol requires a system to "earn" its way into higher Control Tiers by demonstrating a dense, replayable audit trail.
A system is ready for promotion only when its Proof Infrastructure—including its Benford Perimeter for data integrity and its Causal Trace for logic transparency—is robust enough to handle the inevitable 8% of cases where the probabilistic model will fail. By anchoring promotion in defensibility rather than just prediction, organizations transform AI from a black-box liability into a high-ROI, auditable business asset.
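A hypothetical sketch of what such a promotion gate could look like; the function and field names are illustrative, not the framework's API.

```python
# Illustrative promotion gate: every proof-infrastructure check must pass.
# Accuracy alone never clears the bar.
from types import SimpleNamespace

def eligible_for_promotion(system) -> bool:
    return (
        system.calibration_verified          # honest uncertainty estimates
        and system.audit_trail_replayable    # every decision can be replayed
        and system.data_integrity_monitored  # e.g. a Benford-style perimeter
        and system.failure_handling_tested   # graceful handling of the misses
    )

pilot = SimpleNamespace(
    calibration_verified=True,
    audit_trail_replayable=True,
    data_integrity_monitored=True,
    failure_handling_tested=False,
)
print(eligible_for_promotion(pilot))  # False: autonomy is not yet earned
```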
LIME
- Key Distinction: LIME is a probabilistic, sampling-based approximation of local behavior; Causal Traces are deterministic records of the actual logic.
- Governance Action: Useful for basic transparency but insufficient for high-fidelity Evidence-Based Promotion.
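A minimal sketch using the `lime` package on a toy classifier; note the output is a sampled local approximation, which is exactly the limitation flagged above.

```python
# Minimal sketch: a LIME explanation of one prediction (toy data and model).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train, feature_names=["f0", "f1"], mode="classification")
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=2)
print(exp.as_list())  # local surrogate weights: an approximation, not a record
```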
Replayability
- Key Distinction: Explainability is an intuition; Replayability is evidence.
- Governance Action: Essential for the [AI Incident Golden Hour](/ai-incident-response-plan) and successful board-level audits.
- Read More: AI Audit Trails: Replayable AI
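A hypothetical replay check (the trace format and registry are illustrative): re-run recorded inputs through the pinned model version and confirm the output reproduces exactly.

```python
# Illustrative replay: turns "we think it did X" into reproducible evidence.
def replay(trace, model_registry) -> bool:
    model = model_registry[trace["model_version"]]  # exact pinned version
    return model(trace["inputs"]) == trace["output"]

registry = {
    "risk-v3.2": lambda x: "manual_review" if x["amount"] > 500 else "auto",
}
trace = {"model_version": "risk-v3.2",
         "inputs": {"amount": 912.5},
         "output": "manual_review"}
assert replay(trace, registry)  # the logged decision reproduces exactly
```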
SHAP
- Key Distinction: SHAP is a post-hoc calculation that often introduces "forensic drag" (significant compute delay); Causal Traces are in-situ captures available instantly.
- Governance Action: Better for aggregate auditing than individual incident Replayability.
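A minimal sketch using the `shap` package with a toy tree model; the attributions are still computed after the fact, which is the source of the compute delay noted above.

```python
# Minimal sketch: SHAP attributions for a tree ensemble (toy data).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
model = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)  # fast path for tree ensembles
shap_values = explainer.shap_values(X[:5])
print(np.shape(shap_values))  # per-feature attributions, good for aggregates
```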
Stable Reasoning
- Key Distinction: Probabilities are fluid and drifting; Stable Reasoning is a frozen logic path promoted from consistent model behavior.
- Governance Action: A prerequisite for Tier 4 Autonomy and the elimination of "Forensic Drag" in root cause analysis.
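An illustrative sketch (hypothetical rule and names) of what freezing a consistently observed logic path into deterministic code could look like:

```python
# Hypothetical frozen rule: promoted from consistent model behavior, then
# version-pinned so it no longer drifts with retraining.
def frozen_rule_high_value_review(txn: dict) -> str:
    """Deterministic logic path; identical inputs always yield this route."""
    if txn["amount"] > 500 and txn["country"] in {"DE", "FR"}:
        return "manual_review"
    return "auto_approve"

print(frozen_rule_high_value_review({"amount": 912.5, "country": "DE"}))
```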
Download the Architecture of Proof Checklist
Ready to implement? Get the definitive checklist for building verifiable AI systems.