A formal specification for measuring and governing the structural integrity of AI reasoning systems. Independent of model, vendor, or provider.
Most AI evaluation frameworks measure correctness: whether the model gives the right answer.
RIS measures something more fundamental: whether the reasoning process itself is stable,
predictable, and coherent.
A model can give correct answers while its reasoning is structurally unstable.
That instability becomes a liability in production environments where reasoning must be
consistent, bounded, and auditable. RIS closes that gap with five measurable dimensions.
RIS levels classify systems based on measurable reasoning behavior, not model size, architecture, vendor, or training methodology. Level assignment requires both a composite score above the level's threshold and demonstrated compliance with that level's mandatory controls.
Every model evaluated through the RIS pipeline appears here, ranked by composite score (CII). Submit your model to appear.
| Rank | Model | RIS Level | CII Score | Chain Stability | Drift | Variance | Source |
|---|---|---|---|---|---|---|---|
| #1 | alpha-test | RIS-2 | 0.7479 | 0.7500 | 0.0000 | 1.0000 | LCAC |
| #2 | alpha-test-model | RIS-2 | 0.7479 | 0.7500 | 0.0000 | 1.0000 | Portal |
| #3 | alpha-test-model | RIS-1 | 0.4755 | 0.0750 | 0.6047 | 0.8889 | Portal |
Showing 3 of 11 total runs · 3 unique models · Levels observed: RIS-1, RIS-2 · View full leaderboard →
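The level-assignment rule described above (composite score threshold plus mandatory controls) can be sketched as follows. This is an illustrative sketch only: the weights, level thresholds, and metric orientations below are hypothetical placeholders, not the published RIS/CII definitions.

```python
from dataclasses import dataclass

@dataclass
class RunMetrics:
    # Metric orientations are assumptions: stability and variance treated
    # as higher-is-better, drift as lower-is-better.
    chain_stability: float
    drift: float
    variance: float

# Hypothetical weights for the composite score (CII); not the real formula.
WEIGHTS = {"chain_stability": 0.5, "drift": 0.25, "variance": 0.25}

# Hypothetical score floors per level; real thresholds are defined by RIS.
LEVEL_THRESHOLDS = [(0.9, "RIS-4"), (0.8, "RIS-3"), (0.6, "RIS-2"), (0.4, "RIS-1")]

def composite_score(m: RunMetrics) -> float:
    """Weighted aggregate of the dimensions; drift is inverted so that
    lower drift raises the score."""
    return (WEIGHTS["chain_stability"] * m.chain_stability
            + WEIGHTS["drift"] * (1.0 - m.drift)
            + WEIGHTS["variance"] * m.variance)

def assign_level(m: RunMetrics, controls_met: bool) -> str:
    """A level requires BOTH the score threshold and the mandatory
    controls; failing the controls caps the system at RIS-0."""
    score = composite_score(m)
    if controls_met:
        for floor, level in LEVEL_THRESHOLDS:
            if score >= floor:
                return level
    return "RIS-0"
```

The key structural point, independent of the placeholder numbers, is that a high composite score alone never grants a level: `assign_level` returns RIS-0 whenever the mandatory controls are not met.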
Once certified, embed your RIS level badge in documentation, model cards, or README files. Available in rectangular and round formats. Both are SVG and resolution-independent.
All 5 level badges (RIS-0 through RIS-4) available at ris.atomlabs.app/badges/
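A badge embed in a README might look like the following. The exact badge filename scheme (`ris-2.svg`) is an assumption for illustration; only the base path `ris.atomlabs.app/badges/` is given above.

```markdown
<!-- Hypothetical embed: badge filename scheme is assumed, not documented here -->
[![RIS-2 certified](https://ris.atomlabs.app/badges/ris-2.svg)](https://ris.atomlabs.app)
```

Because the badges are SVG, they scale cleanly in both documentation sites and GitHub-rendered READMEs without a separate high-resolution asset.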
RIS certification is available at no cost for general evaluation. Enterprise certification with full audit trail is available through ATOM.
RIS is one of three formal standards developed by Atom Labs that together form the foundation of governed machine reasoning inside ATOM OS.