AI Enablement Statements
A public-facing disclosure template that makes algorithmic authority visible in government services. One page. Ten fields. Publish it where people use your service.
The Disclosure Gap
The algorithm sees you. You should see the algorithm.
When banks use algorithms to deny credit, federal law requires disclosure. The Fair Credit Reporting Act and the Equal Credit Opportunity Act together mandate specific reasons for denial, a copy of the report used, contact information to dispute errors, and notice of your rights in plain language. This applies even when AI makes the decision.
When government uses algorithms to make decisions about people's lives, no equivalent requirement exists.
AI is already operating across public services with no disclosure that AI is involved, no explanation of what data is used, no clear path to challenge errors, and no accountability when the system is wrong.
Where AI Operates Without Disclosure
Public Benefits
Medicaid eligibility, SNAP approvals, unemployment fraud detection, housing assistance screening.
Public Safety
Predictive policing, bail and parole risk scores, 911 call prioritization, emergency response routing.
Licensing, Education, Healthcare
Business license approvals, building permits, inspection prioritization, school discipline, special education placement, hospital bed allocation, prior authorization.
Why This Matters Now
Compliance pressure is forcing rapid AI deployment. H.R. 1 requires states to cut Medicaid error rates or face federal funding clawbacks. States are turning to AI to meet impossible timelines.
Agencies deploy untested vendor systems. Caseworkers defend decisions they do not understand. People lose benefits due to unidentified errors. Private-sector AI tools are imported into public services with no public accountability infrastructure.
The AI Enablement Statement fills this gap: a lightweight standard agencies can implement under pressure. Fill out the template, publish it, and make the system's authority visible.
What Is an Enablement Statement?
A one-page public notice—like a nutrition label for algorithmic authority. It tells people what the AI does, where humans remain in the loop, how to challenge a decision, and how the system is monitored for errors.
It gives people a clear view of how decisions are made. It protects workers by stating boundaries. It forces teams to plan oversight before launch.
Template Fields
Every AI Enablement Statement includes these ten fields.
| Field | Description |
|---|---|
| Service | Name of the service |
| Agency | Responsible agency or department |
| Purpose and context | Why this system exists now |
| AI role | What the system does and when |
| Human role | Who decides and why |
| Contestability | How a person asks for review |
| Oversight | Audits, cadence, triggers |
| Limitations | What the system cannot do |
| Data sources | What feeds the system |
| Contact | Where to report errors |
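For agency digital teams that want a machine-readable version alongside the published page, the ten fields map onto a simple schema. The sketch below is illustrative, not part of the standard; every name and value in it is a hypothetical example.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EnablementStatement:
    """Illustrative schema mirroring the ten template fields."""
    service: str               # Name of the service
    agency: str                # Responsible agency or department
    purpose_and_context: str   # Why this system exists now
    ai_role: str               # What the system does and when
    human_role: str            # Who decides and why
    contestability: str        # How a person asks for review
    oversight: str             # Audits, cadence, triggers
    limitations: str           # What the system cannot do
    data_sources: list[str]    # What feeds the system
    contact: str               # Where to report errors

# Hypothetical example, serialized for publication next to the web page.
statement = EnablementStatement(
    service="Unemployment claim fraud screening",
    agency="Example State Department of Labor",
    purpose_and_context="Flags claims for manual review to meet audit deadlines.",
    ai_role="Scores incoming claims; never denies a claim on its own.",
    human_role="A caseworker reviews every flagged claim before any action.",
    contestability="Request review by phone or through the claimant portal.",
    oversight="Quarterly accuracy audits; documented retraining triggers.",
    limitations="Cannot verify identity documents; error rates still being measured.",
    data_sources=["Claim history", "Employer wage records"],
    contact="ai-feedback@labor.example.gov",
)
print(json.dumps(asdict(statement), indent=2))
```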
How to Write One
- Classify your system. Determine the decision-grade level: Advisory (AI suggests, human decides), Assistive (AI handles routine cases, human handles exceptions), or Determinative (AI makes binding decisions, human reviews on appeal).
- Draft the statement using the ten template fields above.
- Be specific. Use real numbers, real contact info, real processes.
- Be honest. If accuracy is unknown, say so. If humans rubber-stamp outputs, do not claim meaningful review.
- Publish it where people use your service, on your /ai or /transparency page.
Choose the worked example closest to your use case. Keep the section structure. Change the content to match your system.
The stronger the authority, the stronger the disclosure. Systems that cross from Assistive to Determinative require additional public disclosure, audit mechanisms, and human appeal paths.
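One way to make that escalation concrete is to encode the three decision-grade levels from the classification step and let disclosure obligations accumulate with authority. This is a minimal sketch of the idea: the Determinative requirements come from the paragraph above, while the Assistive requirement is an assumption for illustration.

```python
from enum import Enum

class DecisionGrade(Enum):
    ADVISORY = 1       # AI suggests, human decides
    ASSISTIVE = 2      # AI handles routine cases, human handles exceptions
    DETERMINATIVE = 3  # AI makes binding decisions, human reviews on appeal

# Disclosure obligations accumulate as authority increases.
BASE_REQUIREMENTS = ["Published Enablement Statement"]
EXTRA_REQUIREMENTS = {
    DecisionGrade.ASSISTIVE: ["Documented exception-handling process"],  # assumed
    DecisionGrade.DETERMINATIVE: [
        "Additional public disclosure",
        "Audit mechanism",
        "Human appeal path",
    ],
}

def required_disclosures(grade: DecisionGrade) -> list[str]:
    """Return cumulative disclosure requirements for a decision grade."""
    reqs = list(BASE_REQUIREMENTS)
    for level in DecisionGrade:
        if level.value <= grade.value and level in EXTRA_REQUIREMENTS:
            reqs.extend(EXTRA_REQUIREMENTS[level])
    return reqs

print(required_disclosures(DecisionGrade.DETERMINATIVE))
```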
Who This Is For
| Role | How to Use |
|---|---|
| State/Local CIOs and CDOs | Adopt as lightweight policy standard |
| Agency Digital Teams | Use templates for transparency |
| Procurement Staff | Require Enablement Statements in contracts |
| Program Leadership | Classify systems and set disclosure requirements |
| Advocates and Researchers | Evaluate government AI; push for adoption |
| Vendors | Complete Enablement Statements; demonstrate compliance |
About This Guide
These disclosure standards are designed for real-world deployment under real-world constraints. They are released into the public domain (CC0) so any agency, vendor, or advocate can use, adapt, and build on them.
Every real-world implementation makes these templates more useful. Every improvement makes them more actionable.