Explore the four tiers of AI autonomy, the proof required for tier advancement, and the necessity of automated tier downgrade paths.

Control Tiers for AI‑enabled Processes: Controlling When AI Acts, Asks, or Stops

This post is part of the Autonomy and Escalation pillar.

This article explains how to design AI control levels (tiers) so you can decide when AI acts, asks, or stops, matching autonomy to risk.

AI operates at different levels of autonomy: sometimes it suggests, sometimes it acts within guardrails, and sometimes it steps back and hands full control to a human.

AI Control Tiers Overview: A spectrum diagram showing the four levels of autonomy from Tier 0 (Observe) to Tier 3 (Human Only)

If you don’t design these levels deliberately, you will likely end up with one of two failure modes:
- AI that does too much with too little oversight.
- AI that does almost nothing, because every decision is routed to a human who can’t keep up.

In Composite Intelligence, we talked about systems where rules, models, and humans share the work. In Composite Accountability, we made each part prove it did its job. The next logical step is to decide: how far are we willing to let the system go on its own?

That’s what Control Tiers are: predefined levels of autonomy, each with:
- Clear permissions (what the system may do).
- Clear preconditions (what must be true to operate at this tier).
- Clear escape routes (when to escalate or shut itself off).
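To make that concrete, here is a minimal sketch of a tier as an explicit data structure. The ControlTier class, its field names, and the example values are illustrative, not an established API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ControlTier:
    # 0 = Observe, 1 = Sandbox, 2 = Circuit Breakers, 3 = Human Only
    level: int
    name: str
    permissions: list[str]                       # what the system may do
    preconditions: list[Callable[[], bool]] = field(default_factory=list)
    escape_routes: list[str] = field(default_factory=list)  # escalate / shut off

    def may_operate(self) -> bool:
        """A tier stays active only while every precondition still holds."""
        return all(check() for check in self.preconditions)

sandbox = ControlTier(
    level=1,
    name="Act in the Sandbox",
    permissions=["internal_tagging", "non_critical_routing"],
    preconditions=[lambda: True],                # e.g. "error rate below threshold"
    escape_routes=["sensitive_content_detected", "error_spike"],
)
assert sandbox.may_operate()
```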

What are the four levels of AI autonomy?

The 4 Control Tiers

| Tier | Name | Permissions | Example |
|------|------|-------------|---------|
| 0 | Observe | Analysis and suggestions only | Draft an email, suggest a queue |
| 1 | Act in the Sandbox | Low-risk, reversible actions | Internal tagging, non-critical routing |
| 2 | Act with Circuit Breakers | High-leverage, bounded actions | Auto-approve low-risk cases, auto-block abuse |
| 3 | Human Only | No automated actions permitted | Clinical diagnosis, large financial decisions |

You should be able to say: "This workflow runs at Tier N, and here’s why."
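One lightweight way to make that sentence auditable is a declaration that lives next to the workflow. The record below is hypothetical; the point is that the tier and its rationale are written down, not implied by configuration:

```python
# Hypothetical per-workflow tier declaration: the tier and its rationale
# sit where reviewers can see them.
SUPPORT_TRIAGE_TIER = {
    "workflow": "support_ticket_triage",
    "tier": 1,  # Act in the Sandbox
    "why": "Actions are reversible and sensitive queues are rule-blocked.",
    "owner": "support-platform-team",
}
```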


How Tiers Connect to Rules, Models, and Humans

Each Control Tier corresponds to a different mix of your composite actors.

Tier 0 – Observe

⚡ TIER 0 // OBSERVE PROTOCOLS
- Customer Support Reply: models draft; rules strip unsafe language; humans edit and send.
- Sales Next-Best-Action: models rank leads; humans decide whether to follow up.
- Radiology Report Draft: models summarize imaging; radiologists review and sign.
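A minimal sketch of that Tier 0 split, assuming a placeholder draft_reply stands in for the model call and a simple phrase filter stands in for the rules layer:

```python
BANNED_PHRASES = ("guarantee", "legally")  # illustrative rule list

def draft_reply(ticket: str) -> str:
    # Stand-in for the model call; a real system would invoke your LLM here.
    return f"Thanks for reaching out about '{ticket}'. I can guarantee a fix today."

def strip_unsafe_language(draft: str) -> str:
    # Rules layer: a deterministic filter over the model's suggestion.
    for phrase in BANNED_PHRASES:
        draft = draft.replace(phrase, "[removed]")
    return draft

def tier0_support_suggestion(ticket: str) -> str:
    """Tier 0: the model drafts, rules filter, a human edits and sends.
    This function only returns a suggestion; it never sends anything."""
    return strip_unsafe_language(draft_reply(ticket))

print(tier0_support_suggestion("billing error"))
```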

Tier 1 – Low-risk Autonomy

Tier 1 Autonomy Examples: Visualizing ticket triage, UI personalization, and micro-discount automation workflows

⚡ TIER 1 // SANDBOX PROTOCOLS
- Email Routing / Triage: models auto-assign; rules prevent sensitive routing.
- UI Personalization: models reorder widgets; rules ensure billing info stays visible.
- Micro-Discounts: models apply promos; rules cap the total discount.
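The micro-discount pattern fits in a few lines. This is a sketch; the cap value and function name are assumptions, not a real pricing API:

```python
MAX_DISCOUNT_PCT = 5.0  # rule: hard cap, whatever the model proposes

def apply_micro_discount(model_pct: float, price: float) -> float:
    """Tier 1: the model proposes a promo, a rule bounds it, and the
    action is cheap to reverse if it turns out to be wrong."""
    capped = min(max(model_pct, 0.0), MAX_DISCOUNT_PCT)  # rules cap the discount
    return round(price * (1 - capped / 100), 2)

# The model suggests 12%, but the sandbox rule clamps it to 5%.
assert apply_micro_discount(12.0, 100.0) == 95.0
```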

Tier 2 – High-leverage Autonomy

⚡ TIER 2 // CIRCUIT BREAKER PROTOCOLS
- Credit Approvals: auto-approve small credit lines; break on default volume.
- Fraud Blocking: auto-block high-confidence fraud; downgrade if the false-positive rate spikes.
- Resource Allocation: auto-scale services; revert to manual on an error spike.
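Here is one way a Tier 2 circuit breaker might look, using fraud blocking's false-positive rate as the watched metric. The class and thresholds are illustrative:

```python
class CircuitBreaker:
    """Sketch of a Tier 2 breaker: autonomy holds only while a watched
    metric stays inside its bound; crossing the bound trips the breaker."""

    def __init__(self, max_false_positive_rate: float):
        self.max_fp_rate = max_false_positive_rate
        self.tripped = False

    def record(self, false_positives: int, total_blocks: int) -> None:
        if total_blocks and false_positives / total_blocks > self.max_fp_rate:
            self.tripped = True  # stop acting autonomously; hand back to humans

breaker = CircuitBreaker(max_false_positive_rate=0.02)
breaker.record(false_positives=9, total_blocks=300)  # 3% FP rate > 2% bound
if breaker.tripped:
    print("Fraud blocking downgraded: Tier 2 -> Tier 0")
```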
🧠 Business Case: The Autonomy Governor

Use evidence-based promotion to earn the right to move from Tier 0 (Observe) to Tier 2 (Act). Scale your impact without scaling your risk.

Tier 3 – Human Only

Tier 3 Governance Examples: Critical human-in-the-loop domains including medical treatments and industrial safety

At Tier 3, the system may inform and document, but it never acts: clinical diagnoses, large financial commitments, and industrial safety decisions stay entirely with humans.


Advancing Tiers Should Be Governed by Proof

Tier upgrades should be earned with evidence.

AI Tier Transition Evidence: A checklist of proof requirements for advancing from Tier 0 to Tier 1 and Tier 2

To move from Tier 0 → Tier 1, you might require:

✓ EVIDENCE FOR TIER 0 ➜ 1
- Stable model performance on held-out data.
- No critical rule violations in a pilot.
- Override patterns are rare and explainable.

To move from Tier 1 → Tier 2, the bar goes up:

✓ EVIDENCE FOR TIER 1 ➜ 2
- Production calibration and low error rates.
- Tested circuit breakers with documented behavior.
- Signed-off risk appetite and incident playbooks.

The principle is simple: no evidence, no tier upgrade. Each level up is a governance event, not a configuration tweak.
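A promotion gate can be as blunt as a subset check over named evidence items. The labels below are illustrative; in practice each would point to a dashboard, an eval report, or a signed document:

```python
# Illustrative evidence labels for the two checklists above.
TIER_1_BAR = {"stable_heldout_performance", "no_critical_rule_violations",
              "overrides_rare_and_explained"}
TIER_2_BAR = TIER_1_BAR | {"production_calibration", "breakers_tested",
                           "risk_appetite_signed_off", "incident_playbooks"}

def may_promote(current_tier: int, evidence: set[str]) -> bool:
    """No evidence, no tier upgrade. There is no automated path for Tier 3
    domains, and Tier 2 is the ceiling for autonomy."""
    bars = {0: TIER_1_BAR, 1: TIER_2_BAR}
    required = bars.get(current_tier)
    return required is not None and required <= evidence

assert may_promote(0, {"stable_heldout_performance"}) is False  # evidence missing
```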


How do you design automatic downgrades when AI fails?

The other side of Control Tiers is how you move down when things go wrong. That movement is governed by Escalation Protocols and controlled degradation paths:

⚡ AUTOMATED DOWNGRADE PROTOCOLS
- Performance Drift: Tier 2 ➜ Tier 0
- Rule Violation Spike: Tier 1/2 ➜ Tier 3
- Human Backlog Overflow: restrict auto-escalations

The system should self-govern by saying:

“Given the current data and metrics, I no longer meet the bar for this tier. I’m downgrading myself.”

AI Self-Governance Loop: An architectural diagram showing the logic for automatic control tier downgrades based on performance triggers
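A sketch of that self-governing loop, mirroring the triggers above (the backlog-overflow trigger, which throttles escalations rather than changing tier, is omitted for brevity). Thresholds are illustrative:

```python
def self_govern(current_tier: int, metrics: dict) -> int:
    """Return the tier the system still qualifies for, given live metrics."""
    if metrics.get("rule_violation_spike"):
        return 3                                  # stop all automated actions
    if current_tier == 2 and metrics.get("performance_drift", 0.0) > 0.10:
        return 0                                  # fall back to observe/suggest
    return current_tier                           # bar still met; hold the tier

# "I no longer meet the bar for this tier. I'm downgrading myself."
assert self_govern(current_tier=2, metrics={"performance_drift": 0.15}) == 0
```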

Example: Control Tiers in an Operations Workflow

Take a logistics control tower.

Logistics Control Tower Workflow: A real-world example of AI control tiers applied to shipment rerouting

Rerouting a low-value domestic shipment might run at Tier 1, while rerouting a high-value, time-critical shipment might drop to Tier 0, where the system can only suggest. Same underlying system, different tiers of control depending on risk, context, and the current health of the components.
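A hypothetical sketch of how one reroute proposal meets four levels of control; the function and thresholds are made up for illustration:

```python
def handle_reroute(shipment_value: float, tier: int, plan: str) -> str:
    """Illustrative only: one reroute proposal, four levels of control."""
    if tier == 0:
        return f"SUGGEST to dispatcher: {plan}"             # observe only
    if tier == 1 and shipment_value < 1_000:
        return f"EXECUTE (low-value, reversible): {plan}"   # sandbox
    if tier == 2:
        return f"EXECUTE with breaker armed: {plan}"        # bounded autonomy
    return "ESCALATE: a human dispatcher decides"           # Tier 3 / out of bounds

print(handle_reroute(250.0, 1, "shift to carrier B"))
```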

Designing Control Tiers in practice

For each major AI-influenced workflow, ask:
1. What is the default Control Tier today?
2. What would Tiers 0, 1, 2, and 3 actually look like here?
3. What proof would we need to move up a tier?
4. What conditions should trigger an automatic downgrade?
5. Who owns the tier decisions?
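A tier decision record can capture the answers to all five questions in one reviewable place. Every name and value below is hypothetical:

```python
# One possible shape for a tier decision record.
TIER_DECISION = {
    "workflow": "shipment_rerouting",
    "current_tier": 1,                                      # Q1: default tier today
    "tier_definitions": {                                   # Q2: what each tier means here
        0: "suggest reroutes to dispatchers",
        1: "auto-reroute low-value shipments",
        2: "auto-reroute with circuit breakers",
        3: "dispatchers only",
    },
    "promotion_evidence": [                                 # Q3: proof to move up
        "90 days at Tier 1 with zero critical rule violations",
    ],
    "downgrade_triggers": [                                 # Q4: automatic downgrade
        "reroute error rate > 2% over 24h",
    ],
    "owner": "logistics-platform-team",                     # Q5: who owns the decision
}
```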

Control Tiers turn “keep a human in the loop” from a slogan into a design: they define when AI acts, when it asks, and when it stops itself.


Download the Architecture of Proof Checklist

Ready to implement? Get the definitive checklist for building verifiable AI systems.
