Groq
Ultra-fast inference via the Groq LPU. Governed access to llama-3.3-70b, llama-3.1-8b, and mixtral-8x7b. Default provider on the ATOM Platform.
provider: groq
model: llama-3.3-70b-versatile
Anthropic
Claude Haiku, Sonnet, and Opus via Anthropic API. Full governance stack including reasoning chain analysis.
provider: anthropic
model: claude-haiku-4-5-20251001
Google Gemini
Gemini Pro and Flash via Google AI Studio. LCAC context evaluation before each call.
provider: gemini
model: gemini-1.5-flash
Mistral
Mistral Large and Medium via the Mistral AI API. European AI provider with the full ATOM governance stack.
provider: mistral
model: mistral-large-latest
OpenAI
GPT-4o and GPT-4 Turbo via the OpenAI API. Governance wrapper in development; join the waitlist.
Local Models (GGUF)
On-premise llama.cpp models via GGUF format. Zero data egress. Full governance stack applies locally.
provider: local_llama
model: llama-3-8b-instruct.gguf
Python SDK
Full-featured Python client. Sync and async support. Typed interfaces. Streaming support.
pip install atomlabs
from atomlabs import ATOMClient
client = ATOMClient(api_key="...")
JavaScript / TypeScript
npm package with full TypeScript types. Works in Node.js, Deno, and edge runtimes.
npm install atomlabs
import { ATOMClient } from 'atomlabs'
LangChain
Drop-in governed LLM replacement for LangChain. Swap one line and get full ATOM governance on every chain call.
from atomlabs.langchain import ATOMLlm
llm = ATOMLlm(provider="groq")
# Use anywhere a BaseLLM is expected
REST API
Language-agnostic HTTP API that works from any framework. OpenAPI spec at api-docs.atomlabs.app.
POST https://api.atomlabs.app/v1/gateway
Authorization: Bearer {api_key}
X-Provider: groq
LangChain Integration
Replace your existing LLM with ATOM's governed wrapper. One line change · full governance on every call.
# Before ATOM · unmonitored, unaudited
from langchain_groq import ChatGroq
llm = ChatGroq(api_key="gsk_...", model="llama-3.3-70b-versatile")
response = llm.invoke("Analyze this contract...")
# No governance, no audit trail, no enforcement
# After ATOM · every call governed, audited, enforced
from atomlabs.langchain import ATOMLlm
llm = ATOMLlm(
    api_key="atom_...",
    provider="groq",
    model="llama-3.3-70b-versatile"
)
response = llm.invoke("Analyze this contract...")
# Governed · Audited · RIS-scored · LCAC-enforced
# 28ms overhead · Blocks on policy violation
# Any language · plain HTTP
curl -X POST https://api.atomlabs.app/v1/gateway \
-H "Authorization: Bearer atom_..." \
-H "Content-Type: application/json" \
-d '{
"provider": "groq",
"model": "llama-3.3-70b-versatile",
"messages": [
{"role": "user", "content": "Analyze this contract..."}
]
}'
# Returns: response + governance_event + ris_score
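The same gateway call can be made from Python's standard library with no SDK. The endpoint, headers, and payload mirror the curl example above; the response field names in the comment are taken from that example and the exact response shape is an assumption.

```python
import json
import urllib.request

# Build the governed gateway request shown in the curl example.
# The api_key value is a placeholder; replace it with your own.
payload = {
    "provider": "groq",
    "model": "llama-3.3-70b-versatile",
    "messages": [
        {"role": "user", "content": "Analyze this contract..."}
    ],
}
req = urllib.request.Request(
    "https://api.atomlabs.app/v1/gateway",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer atom_...",
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to send
# The JSON body is expected to contain: response + governance_event + ris_score
```

Keeping the send line commented out lets you inspect the request object first; any HTTP client (requests, httpx, aiohttp) works the same way.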
Slack
Enforcement approval workflows in Slack. Blocked governance events route to your channel for human review.
Microsoft Copilot
Endpoint agent for Microsoft Copilot Studio. Governance policy enforcement on all Copilot actions.
AWS Bedrock
Drop-in governance layer for AWS Bedrock model invocations. VPC-compatible deployment.
Azure OpenAI
ATOM governance wrapper for Azure OpenAI Service. Enterprise security and compliance.
CSV Export
Export all governance events, enforcement decisions, and audit trails as CSV from the Enterprise Console.
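Exported events can be processed with any CSV tooling. A minimal sketch with Python's csv module, assuming a hypothetical column set (timestamp, event_type, provider, ris_score, decision) — check the Enterprise Console export for the actual schema:

```python
import csv
import io

# Hypothetical sample of an exported governance-event CSV.
# Column names here are assumptions for illustration only.
sample = """timestamp,event_type,provider,ris_score,decision
2025-06-01T12:00:00Z,governance.allow,groq,0.12,allow
2025-06-01T12:00:03Z,governance.block,groq,0.91,block
"""

# Filter the export down to blocked events for review.
blocked = [
    row for row in csv.DictReader(io.StringIO(sample))
    if row["decision"] == "block"
]
print(len(blocked))  # 1
```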
Webhooks
Real-time push delivery of governance events to your own systems. HMAC-SHA256 signed payloads.
POST your-endpoint.com/atom
X-ATOM-Event: governance.block
X-ATOM-Signature: sha256=...
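A receiver should verify the signature before trusting a payload. A minimal sketch using Python's stdlib hmac, assuming the header format shown above (`sha256=<hexdigest>`) and that the signature is an HMAC-SHA256 of the raw request body with a shared secret — confirm the exact signing scheme in your console's webhook settings:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check an X-ATOM-Signature header against the raw request body.

    Uses compare_digest to avoid timing attacks. The header format and
    signing scheme here are assumptions based on the example above.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature_header, f"sha256={expected}")

# Example: a well-formed signature verifies; a tampered body does not.
secret = b"whsec_example"  # placeholder shared secret
body = b'{"event": "governance.block"}'
header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, header))        # True
print(verify_signature(secret, body + b" ", header)) # False
```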
Splunk
ATOM governance events as Splunk sourcetype. Pre-built dashboards for AI risk monitoring.
Datadog
Ship ATOM metrics and governance events to Datadog. Latency histograms, block rate monitors, drift alerts.
Elastic / SIEM
Index all ATOM governance events in Elasticsearch. Kibana dashboards for AI compliance monitoring.