Trust & Security

Why You Can Trust ATOM

Trust is not a policy statement. It is an architectural property.

The Core Claim
Your prompts are never stored.
Not by policy. By architecture.
Verifiable · Not theoretical

Most AI governance platforms analyze your prompts to make governance decisions. That means they store your content, your customer data, your proprietary queries.

ATOM takes a different approach. Governance happens at the structural and behavioral level: reasoning integrity, semantic coherence, drift detection, policy evaluation. These signals do not require storing your content.

The result: your prompts pass through ATOM’s governance engine and are immediately discarded. Only the verdict is persisted.

What the Database Contains

The complete governance_events schema

Column What it contains
event_id Unique identifier (UUID)
tenant_id Your account identifier
provider Which AI provider was called
model Which model was called
decision allow, block, or warn
reason Why that decision was made
ris_level RIS-0 through RIS-4 integrity score
cii Cognitive Integrity Index (0.0–1.0)
drift_score Provider behavioral drift
timestamp When the decision was made
These are the only columns in the governance events table. There is no input column. No prompt column. No content column. No output column. No response column.

A regulator, auditor, or customer can independently verify this by inspecting the database schema. The absence of content is not a promise. It is a verifiable fact.
Audit Trail Integrity

How the audit trail works

Every governance decision is recorded in a SHA-256 hash-chained ledger.

Each record’s hash is computed as:

SHA256(previous_hash + canonical_JSON(record))

This creates a mathematical chain. Modifying any historical record breaks every hash that follows it. The entire chain can be verified at any time by anyone with access to the ledger file.

This is the same tamper-evidence mechanism used in certificate transparency logs and blockchain systems. It does not require trusting the platform operator. The math is the proof.

Tenant Isolation

Your data cannot be accessed by other tenants

This is enforced at three independent layers:

Layer 1

Key binding

Every API key is cryptographically bound to one tenant. The gateway rejects any request where the key’s tenant does not match the request’s tenant_id. This check happens before any processing.

Layer 2

Query parameterization

Every database query includes a tenant_id bound parameter. No query returns data across tenant boundaries. This is enforced in code, not by convention.

Layer 3

Event namespacing

Every governance event, usage record, and rate-limit counter is namespaced by tenant_id in both PostgreSQL and Redis. Namespaces do not overlap.

Cross-tenant data access is not a policy violation. It is architecturally impossible through the API.
Operator Visibility

What we can and cannot see

Platform operators CAN see
  • Governance decisions (allow / block / warn)
  • Integrity scores (RIS, CII, drift)
  • Usage metrics (call counts, latency)
  • Account information (company, plan, email)
  • Audit events for platform operations
Platform operators CANNOT see
  • Your prompt content
  • Your model outputs
  • Your users’ data
  • Your business logic or queries
  • Full plaintext of your provider keys
  • Webhook payload content delivered to your endpoints
  • Local model weights and inference data

We document this explicitly because transparency about operator access is a requirement for enterprise trust, not an optional disclosure.

Provider Key Protection

How your API keys are protected

When you register your own AI provider keys:

Your keys are encrypted using AES-128-CBC with HMAC-SHA256 before storage. The encryption key lives only in the platform environment, not in the database.

This means: if someone obtained a copy of the database, they would find only ciphertext they cannot decrypt without the environment secret. The database alone is not sufficient to recover your provider keys.

Your full key is never returned via API. Only the last 4 characters are shown as a confirmation that the correct key is registered.

Verifiable Claims

Every claim on this page can be verified

These are not promises. They are testable properties.

Claim

Prompts are not stored

Verify: Inspect the governance_events table schema. No content columns exist. The schema is the proof.

Claim

Audit trail is tamper-evident

Verify: Run lcac_ledger_verify.py against the ledger file. Any modification to any record produces a hash mismatch and exits with code 1.

Claim

Tenant isolation is enforced

Verify: Attempt to access another tenant’s data using your API key. The gateway returns HTTP 403 before any processing occurs.

Claim

Provider keys are encrypted at rest

Verify: Inspect the lcac_tenant_provider_keys table. Only ciphertext is stored. No plaintext values exist in any row.

Enterprise customers can request a technical security review. We will walk through the architecture, the code, and the database schema with your security team.
Webhook Security

How webhook delivery is secured

When you register a webhook endpoint, ATOM delivers governance events to your systems via HMAC-SHA256 signed payloads. You can independently verify every delivery.

Signing

HMAC-SHA256 signature

Every webhook payload is signed with a per-tenant secret. The signature is included in the X-ATOM-Signature header. Compute HMAC-SHA256(secret, payload_body) and compare to verify authenticity.

Privacy

No prompt content in payloads

Webhook payloads contain governance metadata only · decision, RIS level, CII score, timestamp, and tenant ID. Your prompt content and model outputs are never included. The payload structure mirrors the audit record: verdict without content.

Delivery

Fire-and-forget, non-blocking

Webhook delivery is asynchronous. A failed delivery does not affect the governance decision or the governed response returned to your application. Delivery is best-effort with a 5-second timeout per attempt.

Webhook secrets are generated per-tenant and never shared with the platform operator. Only you can verify webhook signatures from your tenant.

Questions about our security architecture?

Send questions to our security team or read the full technical documentation.

THE GOVERNANCE ADVISOR IS ITSELF GOVERNED

The AI Governance Advisor embedded in the ATOM console routes every query through the same pre-execution governance pipeline as your own AI calls. It cannot be used to bypass governance · it is subject to the same enforcement rules.

Prompt injection protection: 25 regex patterns block injection attempts before they reach Claude (sub-millisecond, zero API cost for blocked queries). Blocked patterns include system prompt extraction, role-switching (DAN, developer mode), cross-tenant data requests, and delimiter injection.

Tenant isolation: Governance context is fetched server-side on every query · client input cannot influence what data the advisor sees. No other tenant's data appears in any advisor context. Cross-tenant queries are blocked at the pattern layer.

Audit trail: Every advisor query is logged with tenant ID, block status, matched injection pattern (if blocked), and latency. Response filtering redacts API key patterns from Claude's output before delivery.