Trust is not a policy statement. It is an architectural property.
Most AI governance platforms analyze your prompts to make governance decisions. That means they store your content, your customer data, your proprietary queries.
ATOM takes a different approach. Governance happens at the structural and behavioral level: reasoning integrity, semantic coherence, drift detection, policy evaluation. These signals do not require storing your content.
The result: your prompts pass through ATOM’s governance engine and are immediately discarded. Only the verdict is persisted. The complete `governance_events` audit record contains:
| Column | What it contains |
|---|---|
| event_id | Unique identifier (UUID) |
| tenant_id | Your account identifier |
| provider | Which AI provider was called |
| model | Which model was called |
| decision | allow, block, or warn |
| reason | Why that decision was made |
| ris_level | RIS-0 through RIS-4 integrity score |
| cii | Cognitive Integrity Index (0.0–1.0) |
| drift_score | Provider behavioral drift |
| timestamp | When the decision was made |
No input column. No prompt column. No content column. No output column. No response column.

Every governance decision is recorded in a SHA-256 hash-chained ledger.
Each record’s hash is computed from the previous record’s hash together with the current record’s contents: `hash[n] = SHA-256(hash[n-1] || record[n])`.
This creates a mathematical chain. Modifying any historical record breaks every hash that follows it. The entire chain can be verified at any time by anyone with access to the ledger file.
This is the same tamper-evidence mechanism used in certificate transparency logs and blockchain systems. It does not require trusting the platform operator. The math is the proof.
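The chain property can be sketched in a few lines of Python. This is an illustrative model, not the actual ledger format: the record serialization, the `hash` field name, and the all-zeros genesis value are assumptions here; the authoritative check is `lcac_ledger_verify.py`.

```python
import hashlib
import json

def record_hash(prev_hash: str, record: dict) -> str:
    # Hash the previous record's hash concatenated with a canonical
    # (key-sorted) serialization of the current record. The exact
    # serialization used by the real ledger is an assumption here.
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    # Each stored record carries its own "hash" field; recompute every
    # hash from the genesis value forward and compare. Any tampering
    # with any historical record breaks every hash after it.
    prev = "0" * 64  # assumed genesis value
    for rec in records:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if record_hash(prev, body) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Because each hash folds in its predecessor, verification is a single forward pass: no trusted third party, no operator attestation, just recomputation.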
Tenant isolation is enforced at three independent layers:
Every API key is cryptographically bound to one tenant. The gateway rejects any request where the key’s tenant does not match the request’s tenant_id. This check happens before any processing.
Every database query includes a tenant_id bound parameter. No query returns data across tenant boundaries. This is enforced in code, not by convention.
Every governance event, usage record, and rate-limit counter is namespaced by tenant_id in both PostgreSQL and Redis. Namespaces do not overlap.
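The first two layers can be sketched as follows. This is a minimal model, not the production gateway: the `ApiKey` type, `authorize`, and `fetch_events` names are hypothetical, and SQLite stands in for PostgreSQL to show the bound-parameter pattern.

```python
import sqlite3
from dataclasses import dataclass

@dataclass(frozen=True)
class ApiKey:
    token: str
    tenant_id: str  # the tenant this key is bound to at issuance

def authorize(key: ApiKey, request_tenant_id: str) -> bool:
    # Layer 1: reject any request whose tenant_id does not match the
    # tenant bound to the presented API key, before any processing.
    return key.tenant_id == request_tenant_id

def fetch_events(conn: sqlite3.Connection, tenant_id: str) -> list:
    # Layer 2: every query carries tenant_id as a bound parameter,
    # so no result set can cross a tenant boundary.
    return conn.execute(
        "SELECT event_id, decision FROM governance_events WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()
```

The point of the bound parameter is that the tenant filter is part of the query shape itself, enforced in code rather than left to caller discipline.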
We document operator access explicitly because transparency about it is a requirement for enterprise trust, not an optional disclosure.
When you register your own AI provider keys:
Your keys are encrypted using AES-128-CBC with HMAC-SHA256 before storage. The encryption key lives only in the platform environment, not in the database.
This means: if someone obtained a copy of the database, they would find only ciphertext they cannot decrypt without the environment secret. The database alone is not sufficient to recover your provider keys.
Your full key is never returned via API. Only the last 4 characters are shown as a confirmation that the correct key is registered.
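The last-4 display behavior can be sketched with a small helper (`mask_key` is a hypothetical name, not ATOM's API):

```python
def mask_key(provider_key: str) -> str:
    # Only the last 4 characters are ever returned via the API;
    # everything before them is replaced so the full key never
    # leaves storage in readable form.
    return "*" * max(len(provider_key) - 4, 0) + provider_key[-4:]
```

This is enough for a user to confirm which key is registered without the platform ever echoing the secret back.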
These are not promises. They are testable properties.
Verify: Inspect the governance_events table schema. No content columns exist. The schema is the proof.
Verify: Run lcac_ledger_verify.py against the ledger file. Any modification to any record produces a hash mismatch and exits with code 1.
Verify: Attempt to access another tenant’s data using your API key. The gateway returns HTTP 403 before any processing occurs.
Verify: Inspect the lcac_tenant_provider_keys table. Only ciphertext is stored. No plaintext values exist in any row.
When you register a webhook endpoint, ATOM delivers governance events to your systems via HMAC-SHA256 signed payloads. You can independently verify every delivery.
Every webhook payload is signed with a per-tenant secret. The signature is included in the X-ATOM-Signature header. Compute HMAC-SHA256(secret, payload_body) and compare to verify authenticity.
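A receiver-side check is a few lines of standard-library Python. One assumption here: the signature header is taken to be hex-encoded, which the source does not specify.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload_body: bytes, signature_header: str) -> bool:
    # Recompute HMAC-SHA256 over the raw payload body with the
    # per-tenant secret, then compare against the X-ATOM-Signature
    # header value in constant time to avoid timing side channels.
    expected = hmac.new(secret, payload_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Always compare with `hmac.compare_digest` rather than `==`; a naive string comparison leaks timing information about how many leading characters match.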
Webhook payloads contain governance metadata only: decision, RIS level, CII score, timestamp, and tenant ID. Your prompt content and model outputs are never included. The payload structure mirrors the audit record: verdict without content.
Webhook delivery is asynchronous. A failed delivery does not affect the governance decision or the governed response returned to your application. Delivery is best-effort with a 5-second timeout per attempt.
Send questions to our security team or read the full technical documentation.
The AI Governance Advisor embedded in the ATOM console routes every query through the same pre-execution governance pipeline as your own AI calls. It cannot be used to bypass governance; it is subject to the same enforcement rules.
Prompt injection protection: 25 regex patterns block injection attempts before they reach Claude (sub-millisecond, zero API cost for blocked queries). Blocked patterns include system prompt extraction, role-switching (DAN, developer mode), cross-tenant data requests, and delimiter injection.
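The pattern layer works along these lines. The three patterns below are illustrative examples only, not the production list of 25, and `is_blocked` is a hypothetical name:

```python
import re

# Illustrative patterns in the spirit of the production set: system
# prompt extraction, role-switching, and instruction-override attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"\bDAN\b|developer mode", re.IGNORECASE),
    re.compile(r"(reveal|print|show).{0,20}system prompt", re.IGNORECASE),
]

def is_blocked(query: str) -> bool:
    # Pattern matching runs before any model call, so blocked queries
    # incur zero API cost and return in well under a millisecond.
    return any(p.search(query) for p in INJECTION_PATTERNS)
```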
Tenant isolation: Governance context is fetched server-side on every query; client input cannot influence what data the advisor sees. No other tenant's data appears in any advisor context. Cross-tenant queries are blocked at the pattern layer.
Audit trail: Every advisor query is logged with tenant ID, block status, matched injection pattern (if blocked), and latency. Response filtering redacts API key patterns from Claude's output before delivery.