AI Practice Standards

Operational guardrails for teams and vendors deploying AI in public services. Five standards for agencies. Seven requirements for vendors. Ready for adoption in policy, RFPs, and contracts.

Five Standards for Agency Teams

These standards apply to every AI system that touches a public service decision.

1. Human Accountability and Appeal

A named person is responsible for every automated outcome. People can request human review of any decision. Accountability belongs to a specific role, with documented authority to intervene, override, or escalate.
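
A minimal sketch of what an accountability registry entry could look like, in Python; the field names and the example system are illustrative, not mandated by this standard.

    from dataclasses import dataclass

    @dataclass
    class AccountabilityRecord:
        """Hypothetical registry entry tying an AI system to a named, accountable role."""
        system_name: str
        accountable_role: str       # a specific role, not a team or committee
        accountable_person: str     # the named individual currently in that role
        can_override: bool = True   # documented authority to reverse outcomes
        can_escalate: bool = True   # documented authority to escalate
        review_contact: str = ""    # where people send human-review requests

    registry = [
        AccountabilityRecord(
            system_name="benefits-eligibility-screener",
            accountable_role="Director, Benefits Operations",
            accountable_person="(named individual)",
            review_contact="reviews@agency.example.gov",
        ),
    ]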

2. Plain-Language Disclosure

Every system that touches a decision has a public statement explaining what it does, in language the affected person can understand. The statement covers what the AI decides, what data it uses, where humans remain in the loop, and how to challenge the result.
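
One way to structure the statement as data behind the published page, sketched in Python; the system name, phone number, and wording are placeholders.

    # Hypothetical disclosure record; agencies publish this as a web page or notice.
    DISCLOSURE = {
        "system": "benefits-eligibility-screener",
        "what_it_decides": "Flags applications for manual review; it does not deny benefits.",
        "data_used": ["application form fields", "prior case history"],
        "human_in_the_loop": "A caseworker reviews every flagged application before any action.",
        "how_to_challenge": "Call 555-0100, visit any office, or reply to your notice letter.",
    }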

3. Proportional Oversight

Higher stakes require more human oversight, more transparency, and stronger appeal processes. Match oversight intensity to the system's decision-grade level. Advisory systems get regular monitoring. Determinative systems get independent audits, published accuracy thresholds, and guaranteed appeal paths.
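
In practice, proportionality reduces to a lookup from decision-grade level to an oversight package. A sketch assuming the two grades named above; any middle tiers of the Decision-Grade Taxonomy would slot in between.

    # Illustrative mapping; actual requirements come from the adopted policy.
    OVERSIGHT_BY_GRADE = {
        "advisory": {
            "monitoring": "regular",
            "independent_audit": False,
            "published_thresholds": False,
            "guaranteed_appeal": False,
        },
        "determinative": {
            "monitoring": "continuous",
            "independent_audit": True,
            "published_thresholds": True,
            "guaranteed_appeal": True,
        },
    }

    def required_oversight(grade: str) -> dict:
        """Look up the oversight package for a system's decision-grade level."""
        return OVERSIGHT_BY_GRADE[grade]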

4. Continuous Audit and Public Reporting

Publish accuracy rates, false-positive rates, bias metrics, and override rates on a regular schedule. Audits run continuously. Results are public. When metrics breach defined thresholds, pre-established remedies activate automatically.
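
A sketch of the automatic trigger, assuming thresholds and metric names set by policy; the numbers here are placeholders, not recommended values.

    # Placeholder thresholds; real limits come from the agency's published policy.
    THRESHOLDS = {"false_positive_rate": 0.05, "bias_disparity": 0.10}

    def check_metrics(observed: dict) -> list[str]:
        """Return a breach report for any metric over its threshold.

        Each breach activates the pre-established remedy for that metric.
        """
        breaches = []
        for metric, limit in THRESHOLDS.items():
            if observed.get(metric, 0.0) > limit:
                breaches.append(f"{metric}: {observed[metric]:.3f} exceeds {limit:.3f}")
        return breaches

    # Example monthly audit run feeding the public report.
    print(check_metrics({"false_positive_rate": 0.08, "bias_disparity": 0.04}))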

5. Easy Contestation

Challenging an algorithmic decision is simple, free, and carries zero adverse consequences. The process is documented in plain language, accessible through multiple channels, and resolved within published timelines.
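
A sketch of an intake record with the resolution deadline computed from the published timeline; the 30-day figure is an assumption, not a requirement of this standard.

    from dataclasses import dataclass
    from datetime import date, timedelta

    RESOLUTION_DAYS = 30  # assumed published timeline; each agency sets its own

    @dataclass
    class Contestation:
        """Hypothetical intake record for a challenge to an algorithmic decision."""
        decision_id: str
        channel: str      # e.g. "phone", "web", "in_person", "mail"
        received: date
        fee: float = 0.0  # challenges are free by policy

        @property
        def due(self) -> date:
            """The date by which the agency must resolve the challenge."""
            return self.received + timedelta(days=RESOLUTION_DAYS)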


Seven Vendor Requirements

These clauses apply to all AI systems supporting public service decisions. Copy them directly into RFPs, contracts, or interagency agreements.

1. Explainable Outputs

Provide plain-language reasons for every automated output. Each decision must trace back to identifiable inputs and logic that a caseworker or affected person can understand.
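
What "trace back" can mean in practice, as a Python sketch; the rule name and field names are illustrative, and the contract defines the actual schema.

    def explain(decision_id: str, inputs: dict, logic: str, reason: str) -> dict:
        """Package an automated output with the inputs and logic that produced it."""
        return {
            "decision_id": decision_id,
            "inputs": inputs,                 # identifiable inputs behind the output
            "logic": logic,                   # the rule or model version applied
            "plain_language_reason": reason,  # what the affected person is told
        }

    explain(
        "D-1042",
        {"reported_income": 41200, "household_size": 3},
        "income-limit-rule-v7",
        "Reported income is above the limit for a household of three.",
    )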

2. Human Override Capability

Agency staff can reverse or amend any system decision. Every override is documented with the reason, the staff member, and the resulting action.
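
A sketch of the override record, assuming append-only JSON logging; the exact schema is set in the contract.

    import json
    from datetime import datetime, timezone

    def log_override(decision_id: str, staff_id: str, reason: str, action: str) -> str:
        """Record a staff override as one append-only log line: who, why, and the result."""
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision_id": decision_id,
            "staff_member": staff_id,
            "reason": reason,
            "resulting_action": action,
        })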

3. Full Audit Trail

Every automated action logs the input data, output, and decision logic used. Audit trails are retained for the duration required by the agency's records policy and are available for independent review.
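
The minimum contents of one audit-trail entry, sketched in the same append-only style; the retention period itself lives in the records policy, not in code.

    import json

    def append_audit_entry(path: str, input_data: dict, output: str, logic_version: str) -> None:
        """Append one line per automated action: input data, output, and the logic used."""
        entry = {"input_data": input_data, "output": output, "decision_logic": logic_version}
        with open(path, "a") as log:  # retained per the agency's records policy
            log.write(json.dumps(entry) + "\n")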

4. Independent Testing Access

Provide APIs or data exports that enable third-party accuracy and bias review. Testing access is a contractual obligation, available on request to authorized auditors.
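
A minimal file export that an authorized auditor could test against, standing in for a contractual API; the field names are assumptions.

    import csv

    def export_for_audit(records: list[dict], path: str) -> None:
        """Export decision records so a third party can measure accuracy and bias."""
        fields = ["decision_id", "output", "logic_version", "demographic_bucket"]
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
            writer.writeheader()
            writer.writerows(records)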

5. Change Notice Before Deployment

Disclose all model updates, retraining events, and logic changes before they take effect. The agency receives written notice with sufficient lead time to assess impact and adjust oversight.
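
The lead-time requirement as a one-line check; the 14-day figure is an assumed contractual value.

    from datetime import date, timedelta

    LEAD_TIME_DAYS = 14  # assumed contractual lead time; set per agreement

    def notice_is_timely(notice_sent: date, effective: date) -> bool:
        """True if written notice reaches the agency at least LEAD_TIME_DAYS early."""
        return notice_sent + timedelta(days=LEAD_TIME_DAYS) <= effective

    notice_is_timely(date(2025, 3, 1), date(2025, 3, 20))  # True: 19 days of notice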

6. Published Accuracy Thresholds with Remedies

Publish false-positive and false-negative rates. Define contractual thresholds. When a system exceeds those limits, pre-agreed remedies activate: retraining, suspension, or penalty, depending on severity.
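
One way to express the severity ladder in code; the cutoffs are placeholders for negotiated contract terms.

    def remedy_for_breach(observed: float, threshold: float) -> str:
        """Select the pre-agreed remedy by how far a rate exceeds its threshold."""
        if observed <= threshold:
            return "none"
        overshoot = (observed - threshold) / threshold
        if overshoot < 0.25:
            return "retraining"  # modest breach: vendor retrains and re-tests
        if overshoot < 0.75:
            return "suspension"  # serious breach: system paused pending review
        return "penalty"         # severe breach: contractual penalty applies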

7. Clear Recourse Pathways

End users have a documented way to request review of any automated outcome. The pathway is published in plain language, available through multiple channels, and resolved within defined timelines.


Applying These Standards

Role                    Action
Procurement staff       Include vendor requirements in RFPs and contracts
Agency digital teams    Adopt the five standards as operational policy
Program leadership      Assign accountability for each AI system
Vendors                 Build systems that meet all seven requirements
Oversight bodies        Use published metrics for compliance review

Start with classification. Use the Decision-Grade Taxonomy to determine each system's authority level. Then apply these standards proportionally: every system meets the baseline, and higher-authority systems meet additional requirements.