Research lab

AI is changing how institutions make decisions. We work on keeping those decisions legible, governable, and reversible.

We build tools, standards, and investigative frameworks for the decision layer of modern institutions — where policy, money, and automation meet.

What we work on

Decision architecture

Standards and models for how automated systems make decisions inside institutions.

When a system approves a permit, denies a benefit, or allocates funding — what governs that action, who can review it, and how does it get appealed?

We write the specs for decision-making that remains accountable even when machines are involved.
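As a minimal sketch of what such a spec might require a system to record, here is a hypothetical decision record — every field name is illustrative, not part of any published standard:

```python
# Sketch of a reviewable decision record. All field names are assumptions
# for illustration; a real spec would be defined per institution.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One automated decision, kept legible and appealable."""
    subject_id: str      # who the decision affects
    action: str          # e.g. "permit_denied", "benefit_approved"
    governing_rule: str  # statute, policy, or model version that authorized it
    decided_by: str      # system or official responsible
    reviewable_by: str   # role with authority to review the decision
    appeal_path: str     # how the affected person can contest it
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    subject_id="applicant-8812",
    action="permit_denied",
    governing_rule="zoning-rule-4.2 / model-v3.1",
    decided_by="permits-triage-service",
    reviewable_by="zoning-board",
    appeal_path="form ZB-7 within 30 days",
)
```

The point of the shape is that the answers to "what governs this, who can review it, how is it appealed" travel with the decision itself rather than living in someone's head.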

Transparency infrastructure

Tools and patterns that make institutional behavior visible to the people inside and outside the system.

  • Disclosure and provenance tools
  • Procurement benchmarks
  • Audit-ready decision logs
  • Frameworks for tracing how money and authority move

If a system affects your life, you should be able to see how it works — and if it spends public money, where it went.

Legibility is a precondition for trust.
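One way to make a decision log audit-ready is to chain entries together so that any after-the-fact edit is detectable. The sketch below assumes a simple hash chain; the function names and record layout are illustrative, not a published format:

```python
# Sketch of an append-only decision log: each entry's hash commits to the
# previous entry, so editing any past entry breaks verification.
# All names here are illustrative assumptions.
import hashlib
import json

def append_entry(log, entry):
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev_hash": prev_hash, "hash": digest})
    return log

def verify(log):
    """Recompute every hash; a tampered entry breaks the chain."""
    prev_hash = "0" * 64
    for row in log:
        payload = json.dumps(row["entry"], sort_keys=True)
        if row["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != row["hash"]:
            return False
        prev_hash = row["hash"]
    return True

log = []
append_entry(log, {"action": "benefit_denied", "case": "A-102"})
append_entry(log, {"action": "benefit_approved", "case": "A-103"})
assert verify(log)

log[0]["entry"]["action"] = "benefit_approved"  # tamper with history
assert not verify(log)
```

An auditor who holds only the latest hash can detect rewrites anywhere earlier in the log, which is what makes the log usable as evidence rather than just as a record.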

Protection frameworks

Structural safeguards that prevent systems built for public good from being repurposed for extraction, abuse, or opacity.

  • Licensing and governance models
  • Anti-capture design patterns
  • Audit and oversight structures
  • Research into failure modes, evasion, and institutional blind spots

Open systems need protection. So do the people operating inside them.

Field work

We study where public systems break down — and sometimes step in to make those problems legible enough to fix.

Current areas:

  • AI and administrative complexity
  • Tax and funding opacity
  • Decision routing in automated systems
  • Procurement and vendor risk
  • Institutional fraud and evasion patterns
  • Interoperability across jurisdictions

Why this work exists

Decisions in public systems are happening faster than institutions can understand them. The risk isn’t only bad outcomes — it’s losing the ability to see where decisions come from at all.

State Capacity AI builds the trust plumbing for that shift — so institutions stay governable and the public can still contest what affects them.

About the lab →