The Architecture of Proof Glossary: Precise Definitions

Precise language is the foundation of high-fidelity governance. This glossary defines the core pillars of the Architecture of Proof, from Causal Drift to Control Tiers, ensuring that Product, Engineering, and Compliance teams share a single, deterministic vocabulary for AI safety.


AUC (Area Under the Curve)

Statistical Artifact Stage 1
/ˌeɪ.juːˈsiː/
A standard metric of a model's discriminative power: how well it ranks positive cases above negative ones. In the Architecture of Proof, AUC is considered a "Stage 1" metric because it provides a population-level aggregate that can mask individual-level logic failures.
  • Key Distinction: AUC measures performance (what happened); Causal Traces measure logic (why it happened).
  • Governance Action: Useful for lab validation but insufficient for Tier 4 Autonomy.
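
A minimal sketch of the limitation described above, using scikit-learn on toy data (the scores and the 0.5 threshold are illustrative assumptions): two models can share an identical AUC while failing on different individuals.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy data: 4 negatives followed by 4 positives.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Two hypothetical models with the same *distribution* of scores,
# but assigned to different individuals.
scores_a = np.array([0.10, 0.20, 0.30, 0.80, 0.40, 0.70, 0.90, 0.95])
scores_b = np.array([0.10, 0.20, 0.80, 0.30, 0.70, 0.40, 0.90, 0.95])

print(roc_auc_score(y_true, scores_a))  # 0.875
print(roc_auc_score(y_true, scores_b))  # 0.875 -- identical aggregate

# At a 0.5 decision threshold the models err on *different* people;
# the population-level metric hides exactly this.
print(np.where((scores_a >= 0.5) != y_true)[0])  # individuals A gets wrong
print(np.where((scores_b >= 0.5) != y_true)[0])  # individuals B gets wrong
```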

Calibration Curve

Model Integrity Stage 2
/ˌkæl.ɪˈbreɪ.ʃən kɜːv/
A visualization that maps predicted probabilities to actual frequencies. A well-calibrated model ensures that when it says there is an 80% chance of fraud, fraud occurs exactly 80% of the time in the long run.
  • Key Distinction: Accuracy is about being "right"; Calibration is about being "honest" with uncertainty.
  • Governance Action: A prerequisite for Evidence-Based Promotion into Tier 2+ autonomy.
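
A minimal reliability-diagram sketch using scikit-learn's calibration_curve; the synthetic scores below are constructed to be well calibrated by design, purely for illustration.

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)

# Synthetic, well-calibrated scores: each outcome fires with exactly
# the predicted probability, so observed frequency tracks prediction.
probs = rng.uniform(0, 1, 10_000)
outcomes = (rng.uniform(0, 1, 10_000) < probs).astype(int)

frac_positive, mean_predicted = calibration_curve(outcomes, probs, n_bins=10)

# A well-calibrated model sits on the diagonal of this curve:
# "when it says 80%, it happens ~80% of the time."
for p, f in zip(mean_predicted, frac_positive):
    print(f"predicted ~{p:.2f} -> observed {f:.2f}")
```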

Causal Drift

Model Integrity Stage 4
/ˈkɔː.zəl drɪft/
The "silent killer" of autonomous systems. Causal drift occurs when the logical relationship between an input and an output shifts, even if the data distributions (covariates) remain stable.
  • Key Distinction: Unlike Covariate Drift (data changes), Causal Drift is a change in the rules of the world.
  • Governance Action: Triggers an automated [Tier Downgrade](/control-tiers).
  • Read More: The AI Maturity Model
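
A toy sketch of the distinction, where the slope-based monitor is an illustrative assumption rather than the framework's detector: the covariates stay statistically identical while the input-to-output rule flips.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Covariates are drawn from the *same* distribution before and after,
# so a covariate-drift monitor sees nothing.
x_before = rng.normal(0, 1, 5_000)
x_after = rng.normal(0, 1, 5_000)

# But the "rule of the world" linking input to outcome has flipped.
y_before = 2.0 * x_before + rng.normal(0, 0.1, 5_000)
y_after = -2.0 * x_after + rng.normal(0, 0.1, 5_000)

print(ks_2samp(x_before, x_after).pvalue)  # typically large: "no drift"

# A causal-drift monitor tracks the relationship itself, e.g. a
# per-window regression slope (a real system would compare Causal Traces).
print(np.polyfit(x_before, y_before, 1)[0])  # ~ +2.0
print(np.polyfit(x_after, y_after, 1)[0])    # ~ -2.0: the logic shifted
```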

Causal Trace

Audit & Forensic Stage 4
/ˈkɔː.zəl treɪs/
A "black box flight recorder" for AI logic. Unlike a standard log file that shows what happened, a Causal Trace captures the step-by-step logical path from data ingestion to final output, including which specific business rules or model layers carried the most weight.
  • Key Distinction: Standard logs are observational; Causal Traces are forensic.
  • Governance Action: Provides the 4-minute root cause diagnosis required for [Stage 4 Maturity](/stage-4-maturity).
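
A minimal sketch of what a Causal Trace record could carry; every field name here (rule_id, weight_share, and so on) is an illustrative assumption, not the framework's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceStep:
    stage: str           # e.g. "ingestion", "feature", "rule", "model_layer"
    rule_id: str         # the specific business rule or layer that fired
    inputs: dict         # exact values the step consumed
    output: object       # what the step produced
    weight_share: float  # how much this step contributed to the decision

@dataclass
class CausalTrace:
    decision_id: str
    model_version: str
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    steps: list[TraceStep] = field(default_factory=list)

    def dominant_step(self) -> TraceStep:
        """Root-cause shortcut: which rule or layer carried the most weight?"""
        return max(self.steps, key=lambda s: s.weight_share)
```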

Control Tiers

System Autonomy Stages 1-4
/kənˈtrəʊl tɪərz/
A four-level hierarchy that dictates the "leash length" of an AI system, ranging from Tier 1 (Observe) to Tier 4 (Full Autonomy).
  • Tier 1 (Observe): AI monitors and logs only.
  • Tier 2 (Advise): AI suggests an action; human must approve.
  • Tier 3 (Act & Notify): AI acts autonomously but alerts human instantly.
  • Tier 4 (Full Autonomy): AI acts within guardrails; human reviews via periodic audit.
  • Governance Action: Requires Evidence-Based Promotion to move between tiers.
  • Read More: Control Tiers for AI-Enabled Processes
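
A small sketch of tier gating as code, assuming a simple ordered enum; the tier names mirror the list above, while the helper functions are illustrative.

```python
from enum import IntEnum

class ControlTier(IntEnum):
    OBSERVE = 1         # AI monitors and logs only
    ADVISE = 2          # AI suggests; a human must approve
    ACT_AND_NOTIFY = 3  # AI acts, then alerts a human instantly
    FULL_AUTONOMY = 4   # AI acts within guardrails; periodic audit

def may_act_autonomously(tier: ControlTier) -> bool:
    """Only Tier 3+ systems execute actions without pre-approval."""
    return tier >= ControlTier.ACT_AND_NOTIFY

def requires_instant_alert(tier: ControlTier) -> bool:
    """Tier 3 acts, but must notify a human immediately."""
    return tier == ControlTier.ACT_AND_NOTIFY
```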

Drift (Data/Model)

Model Monitoring Stage 2
/drɪft/
The general term for the decay of model performance over time as production data diverges from training distributions.
  • Key Distinction: Standard Drift is a change in covariates (data patterns); Causal Drift is a change in the underlying logic or rules of the world.
  • Governance Action: Triggers a mandatory performance review and potential [Tier Downgrade](/control-tiers).
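
One common way to quantify covariate drift is the Population Stability Index (PSI); the implementation and the 0.25 review threshold below are conventional rules of thumb, not mandates of the framework.

```python
import numpy as np

def population_stability_index(train, prod, bins=10):
    """PSI sketch: bin edges from the training distribution, then compare
    the production distribution bin-by-bin against it."""
    edges = np.quantile(train, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    expected = np.histogram(train, edges)[0] / len(train)
    actual = np.histogram(prod, edges)[0] / len(prod)
    expected = np.clip(expected, 1e-6, None)  # avoid log(0)
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(2)
train = rng.normal(0, 1, 10_000)
prod = rng.normal(0.4, 1.2, 10_000)  # production has diverged from training

print(population_stability_index(train, prod))
# Rule of thumb (assumption): PSI > 0.25 often triggers a review.
```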

Evidence-Based Promotion

AI Governance Gate
/ˈev.ɪ.dəns beɪst prəˈməʊ.ʃən/
In the Architecture of Proof framework, Evidence-Based Promotion is the formal governance bridge between a "lab-safe" pilot and "high-stakes" production autonomy. Rather than promoting an AI system based on static accuracy scores or arbitrary project deadlines, this protocol requires a system to "earn" its way into higher Control Tiers by demonstrating a dense, replayable audit trail.

A system is ready for promotion only when its Proof Infrastructure—including its Benford Perimeter for data integrity and its Causal Trace for logic transparency—is robust enough to handle the inevitable 8% of cases where the probabilistic model will fail. By anchoring promotion in defensibility rather than just prediction, organizations transform AI from a black-box liability into a high-ROI, auditable business asset.
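
A sketch of what such a gate might look like in code. The evidence fields and thresholds are assumptions chosen to mirror the definition above, not the framework's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class PromotionEvidence:
    replayable_decisions: int     # decisions with full Causal Traces
    total_decisions: int
    benford_perimeter_pass: bool  # data-integrity check held
    calibration_error: float      # e.g. expected calibration error
    causal_drift_alerts: int      # alerts in the observation window

def eligible_for_promotion(e: PromotionEvidence) -> bool:
    """Promote on demonstrated defensibility, not static accuracy."""
    trace_coverage = e.replayable_decisions / max(e.total_decisions, 1)
    return (
        trace_coverage >= 0.99          # dense, replayable audit trail
        and e.benford_perimeter_pass
        and e.calibration_error <= 0.05
        and e.causal_drift_alerts == 0
    )
```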

LIME

Post-Hoc Explainability Stages 1-2
/laɪm/
Local Interpretable Model-agnostic Explanations. A post-hoc technique that learns a "surrogate" linear model around a single prediction to provide local intuition.
  • Key Distinction: LIME is an approximation (probabilistic) of local behavior; Causal Traces are deterministic records of actual logic.
  • Governance Action: Useful for basic transparency but insufficient for high-fidelity Evidence-Based Promotion.
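
A brief sketch using the open-source lime package on toy data (the model, data, and feature names are placeholders), showing why the output is an approximation rather than a record:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=["f0", "f1", "f2", "f3"], mode="classification"
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)

# Local surrogate weights: an approximation of behavior *near* this one
# prediction, not a deterministic record of the model's actual logic.
print(explanation.as_list())
```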

Replayability

Audit & Forensic Stage 2+
/ˌriː.pleɪ.əˈbɪl.ə.ti/
The technical capability to reconstruct the exact state of an AI system at the millisecond a specific decision was made. This includes the model version, precise input data (feature vector), external API responses, and the specific rules or weights that fired.
  • Key Distinction: Explainability is an intuition; Replayability is evidence.
  • Governance Action: Essential for the [AI Incident Golden Hour](/ai-incident-response-plan) and successful board-level audits.
  • Read More: AI Audit Trails: Replayable AI
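
A sketch of a replayable decision record; the field names are assumptions, but the idea is that everything needed to reconstruct the decision state is captured and content-addressed at decision time.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DecisionSnapshot:
    decision_id: str
    model_version: str        # exact model artifact that scored this case
    feature_vector: dict      # precise inputs, post feature-engineering
    external_responses: dict  # third-party API payloads consumed
    rules_fired: list         # specific rules/weights that drove the output
    output: float
    timestamp_ms: int         # millisecond precision, per the definition

    def fingerprint(self) -> str:
        """Content hash: replaying the same state must reproduce this."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```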

SHAP

Post-Hoc Explainability Stage 2
/ʃæp/
Shapley Additive Explanations. A game-theoretic approach to explaining model outputs by assigning an "importance" value to each feature based on its contribution to the final prediction.
  • Key Distinction: SHAP is a post-hoc calculation that often introduces "forensic drag" (significant compute delay); Causal Traces are in-situ captures available instantly.
  • Governance Action: Better for aggregate auditing than individual incident Replayability.
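
A brief sketch using the open-source shap package on toy data; the model and data are illustrative.

```python
import shap  # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# The attribution is computed *after* the fact; on large models this
# recomputation is the "forensic drag" described above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # per-feature contributions

# Good for aggregate audits (e.g. mean |SHAP| across a population), but
# each value is a calculation about the model, not a captured trace of it.
print(shap_values)
```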

Stable Reasoning

Model Integrity Stages 3-4
/ˈsteɪ.bəl ˈriː.zən.ɪŋ/
The convergence of probabilistic model patterns into deterministic-like consistency. Stable Reasoning occurs when an AI system produces non-divergent logical paths (Causal Traces) for similar inputs, eliminating the "stochastic variance" that typically characterizes unmanaged LLMs.
  • Key Distinction: Probabilities are fluid and drifting; Stable Reasoning is a frozen logic path promoted from consistent model behavior.
  • Governance Action: A prerequisite for Tier 4 Autonomy and the elimination of "Forensic Drag" in root cause analysis.
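
A sketch of a stability check, reusing the hypothetical CausalTrace shape from the Causal Trace entry above (each trace reduces to an ordered sequence of fired rules); the 99% agreement threshold is an illustrative assumption.

```python
def logic_path(trace) -> tuple:
    """Reduce a trace to its ordered sequence of fired rules/layers."""
    return tuple(step.rule_id for step in trace.steps)

def is_stable(traces, min_agreement=0.99) -> bool:
    """Stable Reasoning: near-identical logic paths across repeated runs
    on similar inputs, not merely similar output scores."""
    paths = [logic_path(t) for t in traces]
    modal = max(set(paths), key=paths.count)  # most common logic path
    return paths.count(modal) / len(paths) >= min_agreement
```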

Download the Architecture of Proof Checklist

Ready to implement? Get the definitive checklist for building verifiable AI systems.
