Decision-Grade AI Taxonomy

A three-level classification system for AI authority in public services. The level determines how much disclosure and oversight a system requires.

Why Classification Matters

Every AI system in public services exercises a specific level of authority. Some systems highlight information for a human decision-maker. Others execute routine decisions independently. A few issue binding determinations with legal consequences.

Each level carries different obligations. Stronger authority demands stronger disclosure, more rigorous oversight, and clearer appeal paths. Classification answers a direct question: how much power does this system hold over people?

The Three Levels

| Level | Name | Description | Examples |
|---|---|---|---|
| 1 | Advisory | AI suggests or highlights information. A human decides everything. | Risk scores, checklists, flagging |
| 2 | Assistive | AI completes routine decisions when all criteria are met. A human handles complexity and exceptions. | Auto-approvals, renewals |
| 3 | Determinative | AI makes binding decisions. A human reviews only on appeal. High risk. | Mandatory investigations, automated denials |
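If an agency tracks its classifications in code, the levels map naturally onto a small enumeration. The sketch below is illustrative only; the name `AuthorityLevel` and the registry entries are assumptions, not part of the taxonomy.

```python
from enum import IntEnum

class AuthorityLevel(IntEnum):
    """The three authority levels; higher values carry stronger obligations."""
    ADVISORY = 1       # AI suggests; a human decides everything
    ASSISTIVE = 2      # AI decides routine cases; a human handles exceptions
    DETERMINATIVE = 3  # AI issues binding decisions; human review only on appeal

# Illustrative register entries for hypothetical systems
SYSTEM_REGISTER = {
    "eligibility-risk-flagger": AuthorityLevel.ADVISORY,
    "benefit-renewal-autoprocessor": AuthorityLevel.ASSISTIVE,
}
```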

Level 1: Advisory

The AI provides information, suggestions, or risk scores. A human decision-maker reviews this input alongside other evidence and makes the final call.

Authority: The system informs. It holds zero decision-making power.

Disclosure requirement: A public statement explaining what the system flags, what data it uses, and how staff incorporate its output into decisions.

Oversight requirement: Regular accuracy reviews, false-positive monitoring, and bias audits on a published schedule.
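As one illustration of false-positive monitoring, the sketch below computes the share of advisory flags that human reviewers judged unfounded. The record format and field names are assumptions made for the example, not part of the requirement.

```python
def false_positive_rate(reviewed_flags: list[dict]) -> float:
    """Share of advisory flags that a human reviewer judged unfounded.

    Each record is assumed to look like {"flagged": True, "confirmed": False}.
    """
    flagged = [f for f in reviewed_flags if f.get("flagged")]
    if not flagged:
        return 0.0
    false_positives = sum(1 for f in flagged if not f.get("confirmed"))
    return false_positives / len(flagged)
```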

Level 2: Assistive

The AI executes straightforward decisions when all criteria are clearly met. Complex cases, edge cases, and exceptions go to a human reviewer.

Authority: The system decides routine cases within defined boundaries. A human decides everything else.

Disclosure requirement: A public statement covering what the system decides, the criteria for automatic action, how cases get routed to human review, and how to request manual review of any automated outcome.

Oversight requirement: Sampling of automated decisions for accuracy, tracking of override rates, demographic analysis of outcomes, and published performance metrics.
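A minimal sketch of two of these oversight activities, decision sampling and override-rate tracking, appears below. The sampling rate, record format, and field names are assumptions, not values set by the taxonomy.

```python
import random

def sample_for_audit(decisions: list[dict], rate: float = 0.05, seed: int = 0) -> list[dict]:
    """Draw a reproducible random sample of automated decisions for human accuracy review."""
    if not decisions:
        return []
    k = max(1, int(len(decisions) * rate))
    return random.Random(seed).sample(decisions, k)

def override_rate(decisions: list[dict]) -> float:
    """Share of automated outcomes later overridden by a human reviewer.

    Assumes each record carries a boolean "overridden" field.
    """
    if not decisions:
        return 0.0
    return sum(1 for d in decisions if d.get("overridden")) / len(decisions)
```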

Level 3: Determinative

The AI issues binding decisions with legal or material consequences. Human review happens only when someone appeals.

Authority: The system holds direct power over outcomes. This is the highest-risk category.

Disclosure requirement: Full public documentation of decision logic, data sources, accuracy rates, and error remediation procedures. Every affected person receives plain-language notice of the AI's role and their right to appeal.

Oversight requirement: Continuous monitoring, independent audits, published accuracy thresholds with automatic remedies when exceeded, and accessible appeal processes with guaranteed human review.
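The sketch below illustrates one way a published accuracy threshold with automatic remedies could be wired up. The floor value and the remedy hooks are placeholders, not numbers or procedures drawn from the taxonomy.

```python
ACCURACY_FLOOR = 0.98  # illustrative published threshold; the taxonomy does not set a number

def breaches_accuracy_floor(correct: int, total: int, floor: float = ACCURACY_FLOOR) -> bool:
    """True when monitored accuracy has fallen below the published floor."""
    return total > 0 and (correct / total) < floor

def monitoring_cycle(correct: int, total: int) -> None:
    """One monitoring pass: a breach triggers the automatic remedies."""
    if breaches_accuracy_floor(correct, total):
        # Hypothetical remedy hooks: pause automated determinations, open a
        # remediation case, and notify the independent auditor.
        print("Accuracy below published floor: suspending automated determinations.")
        print("Opening remediation case and notifying the independent auditor.")
```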

Applying the Taxonomy

To classify a system, answer these questions (a code sketch of the same decision logic follows the list):

  1. Who makes the final decision? If always a human: Level 1. If the system decides routine cases: Level 2. If the system decides and humans review only on appeal: Level 3.
  2. What happens when the system acts? If the output is a suggestion: Level 1. If the output triggers automatic action within defined criteria: Level 2. If the output carries binding legal effect: Level 3.
  3. How does someone get human review? If human review is the default: Level 1. If human review is available on request: Level 2. If human review requires a formal appeal: Level 3.
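The three questions can be expressed as a small classification function. Everything in the sketch below, including the answer labels and the tie-break toward the higher level when answers disagree, is an illustrative assumption layered on the questions above rather than a rule the taxonomy states.

```python
def classify(final_decider: str, system_action: str, review_path: str) -> int:
    """Map answers to the three classification questions onto an authority level (1-3).

    The answer labels are hypothetical. When answers point to different levels,
    this sketch cautiously takes the highest one.
    """
    level_by_answer = {
        # Q1: who makes the final decision?
        "human_always": 1, "system_for_routine_cases": 2, "system_with_appeal_only": 3,
        # Q2: what happens when the system acts?
        "suggestion": 1, "automatic_action": 2, "binding_effect": 3,
        # Q3: how does someone get human review?
        "review_by_default": 1, "review_on_request": 2, "review_on_appeal": 3,
    }
    answers = (final_decider, system_action, review_path)
    return max(level_by_answer[answer] for answer in answers)

# Example: a renewal auto-processor that routes exceptions to staff
classify("system_for_routine_cases", "automatic_action", "review_on_request")  # -> 2
```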

Disclosure Scales with Authority

The taxonomy establishes a direct relationship: the stronger the authority, the stronger the disclosure.

| Requirement | Advisory | Assistive | Determinative |
|---|---|---|---|
| Public disclosure statement | Required | Required | Required |
| Individual notice to affected people | Recommended | Required | Required |
| Plain-language explanation of AI role | Required | Required | Required |
| Published accuracy metrics | Recommended | Required | Required |
| Independent audit | Recommended | Recommended | Required |
| Automatic remedies for threshold breaches | | Recommended | Required |
| Guaranteed human appeal | | Required | Required |
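For teams that keep a machine-readable register, the matrix translates directly into a lookup table. The sketch below mirrors the table above; the key names are assumptions, and a missing entry corresponds to a blank cell.

```python
# The disclosure matrix as a lookup table; an absent entry means the table
# leaves that cell blank for the level in question.
DISCLOSURE_MATRIX = {
    "public_disclosure_statement":  {1: "required", 2: "required", 3: "required"},
    "individual_notice":            {1: "recommended", 2: "required", 3: "required"},
    "plain_language_explanation":   {1: "required", 2: "required", 3: "required"},
    "published_accuracy_metrics":   {1: "recommended", 2: "required", 3: "required"},
    "independent_audit":            {1: "recommended", 2: "recommended", 3: "required"},
    "automatic_remedies":           {2: "recommended", 3: "required"},
    "guaranteed_human_appeal":      {2: "required", 3: "required"},
}

def obligations(level: int) -> dict[str, str]:
    """Return each requirement's status at a given authority level (1, 2, or 3)."""
    return {name: by_level[level]
            for name, by_level in DISCLOSURE_MATRIX.items()
            if level in by_level}
```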

Classify every AI system in your agency. Publish the classification. Match oversight to authority. When a system crosses from assistive to determinative, strengthen disclosure, add audit mechanisms, and ensure accessible appeal paths.