Quickstart
Get Arcax running locally in under 5 minutes using Docker Compose.
1. Authenticate & Start
# License key required — request at www.arcax.ai/#demo
docker login registry.arcax.ai --username $ARCAX_LICENSE_KEY
curl -sO https://install.arcax.ai/docker-compose.yml
cp .env.example .env
docker compose up -d
This starts: gateway (port 8000), postgres, redis, dashboard (port 3000).
2. Register your first source
curl -X POST http://localhost:8000/api/v1/sources \
-H "Content-Type: application/json" \
-d '{
"name": "petstore",
"type": "openapi",
"config": {
"spec_url": "https://petstore3.swagger.io/api/v3/openapi.json"
}
}'
3. List generated MCP tools
curl http://localhost:8000/api/v1/tools | jq '.tools[].name'
4. Call a tool
curl -X POST http://localhost:8000/mcp/call \
-H "Content-Type: application/json" \
-d '{"tool": "petstore.getPetById", "params": {"petId": 1}}'
Installation
Arcax is distributed as container images. Production deployments use Kubernetes with Helm.
Docker Compose (Dev / PoC)
docker compose -f docker-compose.yml up -d
Kubernetes (Production)
helm repo add arcax https://charts.arcax.ai
helm install arcax arcax/arcax \
--namespace arcax --create-namespace \
-f values-prod.yaml
Air-Gap Installation
# Export images for offline transfer
docker save arcax/gateway:latest | gzip > arcax-gateway.tar.gz
docker save arcax/dashboard:latest | gzip > arcax-dashboard.tar.gz
# On air-gapped machine
docker load < arcax-gateway.tar.gz
docker load < arcax-dashboard.tar.gz
Configuration
Key environment variables in .env:
| Variable | Default | Description |
|---|---|---|
| DATABASE_URL | required | PostgreSQL connection string |
| REDIS_URL | redis://redis:6379 | Redis for L2 JIT cache |
| PKI_CA_KEY_PATH | certs/ca/ca.key | CA private key (Ed25519) |
| HSM_LIB_PATH | optional | Path to PKCS#11 shared lib |
| OPA_URL | http://opa:8181 | Open Policy Agent endpoint |
| OTEL_EXPORTER_OTLP_ENDPOINT | optional | OTLP trace collector URL |
| CACHE_TTL_L1 | 3600 | L1 in-memory cache TTL (seconds) |
| CACHE_TTL_L2 | 86400 | L2 Redis cache TTL (seconds) |
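A minimal .env combining these might look as follows; the credentials and host names are placeholders, not defaults shipped with the product:

```env
# Illustrative values only — adapt to your deployment
DATABASE_URL=postgresql://arcax:changeme@postgres:5432/arcax
REDIS_URL=redis://redis:6379
PKI_CA_KEY_PATH=certs/ca/ca.key
OPA_URL=http://opa:8181
CACHE_TTL_L1=3600
CACHE_TTL_L2=86400
```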
Architecture
Arcax implements a 5-stage deterministic mediation pipeline that converts any enterprise data source into a certified MCP tool set.
Pipeline Stages
- Probe — introspects the source (OpenAPI spec fetch, SQL INFORMATION_SCHEMA query, SAP CDS introspection, or COBOL copybook parse).
- Universal Capability Schema (UCS) — a source-agnostic JSON pivot format capturing endpoints, parameters, data types, and auth requirements.
- Contextualise — applies field-level transformations (type coercion, date normalisation, EBCDIC decoding) and injects OPA policy stubs per tool.
- MCP Generate — emits Python code: Pydantic v2 models + FastMCP @mcp.tool() decorators — all generated at runtime, no codegen step.
- JIT Cache — writes the compiled tool to L1 (in-process dict) and L2 (Redis SETEX). Subsequent calls bypass all stages. The cache is invalidated via pub/sub on source update.
The /api/v1/sources/{id}/probe endpoint runs stages 1–2 in isolation for debugging.
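The shape of a stage-4 artefact can be sketched as below. This is an illustration only: a plain dict and decorator stand in for the FastMCP registry, and a dataclass stands in for the Pydantic v2 model the real pipeline emits; the tool name and parameters mirror the petstore example above.

```python
from dataclasses import dataclass

TOOL_REGISTRY = {}  # stand-in for the FastMCP tool registry

def tool(name):
    """Minimal registration decorator mimicking @mcp.tool()."""
    def register(fn):
        TOOL_REGISTRY[name] = fn
        return fn
    return register

# The real pipeline emits a Pydantic v2 model here; a dataclass
# keeps this sketch dependency-free.
@dataclass
class GetPetByIdParams:
    petId: int

@tool("petstore.getPetById")
def get_pet_by_id(params: GetPetByIdParams) -> dict:
    # Generated code would perform the upstream API call; stubbed here.
    return {"id": params.petId, "status": "available"}
```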
Zero-Trust Security
Auto-PKI Bootstrap
make certs-init # generates ca.key, ca.crt + node certs
On first start, AutoEnrollment issues 1-hour ephemeral leaf certificates JIT. Private keys are kept in memory only (or in HSM if HSM_LIB_PATH is set).
OPA Policy Example
package arcax.tool
import future.keywords

default allow := false

allow if {
    input.tenant_id == data.tenants[_].id
    input.tool_name in data.products[input.product_id].allowed_tools
}
Payload Signing Verification
import base64
import json

import nacl.signing

pub_key_b64 = "..."  # from GET /api/v1/keys/public
verify_key = nacl.signing.VerifyKey(base64.b64decode(pub_key_b64))

response = json.loads(tool_response_body)
sig = base64.b64decode(response["__sig__"])
# Raises nacl.exceptions.BadSignatureError if the payload was tampered with
verify_key.verify(json.dumps(response["data"]).encode(), sig)
FinOps & ROI Tracking
Every tool call emits a UsageTick event:
{
"tenant_id": "acme",
"source_id": "erp-sap",
"tool_name": "get_sales_order",
"bytes_in": 128,
"bytes_out": 4096,
"duration_ms": 45,
"timestamp": "2026-04-12T08:30:00Z",
"cost_unit": 0.0012
}
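The aggregation behind the FinOps endpoint can be approximated in a few lines. This is a hypothetical reimplementation for illustration, not the gateway's actual code; field names match the UsageTick event above.

```python
from collections import defaultdict

def aggregate_usage(ticks, group_by="source_id"):
    """Roll up UsageTick events by a grouping key (sketch)."""
    totals = defaultdict(lambda: {"calls": 0, "bytes_out": 0, "cost": 0.0})
    for t in ticks:
        bucket = totals[t[group_by]]
        bucket["calls"] += 1
        bucket["bytes_out"] += t["bytes_out"]
        bucket["cost"] += t["cost_unit"]
    return dict(totals)

ticks = [
    {"source_id": "erp-sap", "bytes_out": 4096, "cost_unit": 0.0012},
    {"source_id": "erp-sap", "bytes_out": 2048, "cost_unit": 0.0008},
    {"source_id": "warehouse-db", "bytes_out": 512, "cost_unit": 0.0002},
]
summary = aggregate_usage(ticks)
```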
Query aggregated usage:
GET /api/v1/finops/usage?tenant=acme&period=30d&group_by=source
UAM Pipeline
The Universal Agent Mediation pipeline converts any enterprise source into certified MCP tools via 5 deterministic stages.
| Stage | Input | Output | Typical duration |
|---|---|---|---|
| 1. Probe | Source config | Raw schema (OpenAPI / SQL / SAP CDS / COBOL) | ~200 ms |
| 2. UCS | Raw schema | Universal Capability Schema (JSON) | ~50 ms |
| 3. Contextualise | UCS | Enriched UCS — type transforms + OPA stubs | ~30 ms |
| 4. MCP Generate | Enriched UCS | Pydantic v2 models + FastMCP @tool code | ~80 ms |
| 5. JIT Cache | Generated code | Compiled tool written to L1 + L2 | ~10 ms |
Trigger / inspect via API
# Re-run pipeline for a source
POST /api/v1/sources/{id}/refresh
# Response
{
"pipeline_run_id": "run_01j...",
"stages": {
"probe": {"status": "ok", "duration_ms": 187},
"ucs": {"status": "ok", "duration_ms": 44},
"ctx": {"status": "ok", "duration_ms": 31},
"gen": {"status": "ok", "duration_ms": 76},
"cache": {"status": "ok", "duration_ms": 9}
},
"tools_generated": 12
}
# Run stages 1-2 only (debug)
GET /api/v1/sources/{id}/probe
JIT Cache
A two-tier cache serves compiled MCP tools in <1 ms after the first pipeline run. No reprocessing on repeated calls.
| Tier | Backend | Latency | Scope |
|---|---|---|---|
| L1 | In-process dict (LRU 512) | < 1 µs | Per gateway pod |
| L2 | Redis SETEX (msgpack) | 0.5–2 ms | Cluster-wide |
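The L1-then-L2 lookup order can be sketched as follows. A plain dict stands in for Redis, and the class name and key format are assumptions, not the gateway's internals; it does show the LRU-512 eviction and L1 promotion described above.

```python
from collections import OrderedDict

class TwoTierCache:
    """Sketch of the L1 (in-process LRU) / L2 lookup order."""

    def __init__(self, l2, l1_max=512):
        self.l1 = OrderedDict()   # L1: per-pod, sub-microsecond
        self.l1_max = l1_max
        self.l2 = l2              # L2: real deployments use Redis GET/SETEX

    def get(self, key):
        if key in self.l1:
            self.l1.move_to_end(key)       # refresh LRU position
            return self.l1[key]
        value = self.l2.get(key)           # cluster-wide tier, 0.5–2 ms
        if value is not None:
            self._put_l1(key, value)       # promote to L1 for next call
        return value

    def _put_l1(self, key, value):
        self.l1[key] = value
        self.l1.move_to_end(key)
        if len(self.l1) > self.l1_max:
            self.l1.popitem(last=False)    # evict least-recently-used
```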
Cache invalidation
# Automatic: pub/sub on source update
# Redis channel: arcax:cache:invalidate
# Payload: {"source_id": "erp-sap", "version": 4}
# Manual
DELETE /api/v1/sources/{id}/cache
# {"invalidated": true, "keys_removed": 2}
Warm-up on startup
Set CACHE_WARMUP=true to pre-populate L1 from L2 on gateway boot — eliminates cold-start latency.
Source Registry
PostgreSQL-backed catalogue of all registered sources, their versioned tool snapshots, and lifecycle states.
Lifecycle
draft → probing → active → archived
# List active sources
GET /api/v1/sources?status=active
[{
"id": "erp-sap", "name": "SAP S/4HANA", "type": "sap",
"status": "active", "tools_count": 23, "version": 4,
"last_synced": "2026-04-12T08:00:00Z"
}]
Versioning & rollback
# Rollback to a previous version
POST /api/v1/sources/{id}/rollback
{"version": 3}
# Export OpenAPI 3.1 spec from any version
GET /api/v1/sources/{id}/openapi?version=3
Tool filtering
PATCH /api/v1/sources/{id}
{
"tool_filter": ["get_sales_order", "list_materials"],
"tool_prefix": "sap."
}
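The combined effect of tool_filter and tool_prefix can be illustrated with a small helper; the function name and exact semantics (filter first, then prefix) are assumptions for illustration.

```python
def apply_tool_settings(tools, tool_filter=None, tool_prefix=""):
    """Shape a source's published tool list the way the PATCH
    settings above would (illustrative sketch)."""
    if tool_filter is not None:
        # Keep only explicitly allowed tool names
        tools = [t for t in tools if t in tool_filter]
    # Namespace the surviving tools
    return [tool_prefix + t for t in tools]
```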
REST / OpenAPI Connector
Introspects any OpenAPI 3.0/3.1 spec and generates one MCP tool per operation.
Registration
POST /api/v1/sources
{
"name": "petstore",
"type": "openapi",
"config": {
"spec_url": "https://petstore3.swagger.io/api/v3/openapi.json",
"base_url": "https://petstore3.swagger.io",
"auth": {"type": "bearer", "token_env": "PETSTORE_API_KEY"}
}
}
Auth methods
| Type | Required config keys |
|---|---|
| none | — |
| bearer | token_env |
| basic | username_env, password_env |
| api_key | header, key_env |
| oauth2_client | token_url, client_id_env, client_secret_env |
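The *_env convention means secrets are looked up from the environment at call time, never stored in the source config. A sketch of how a connector might turn an auth block into request headers (function name and behaviour are assumptions; oauth2_client is omitted since it needs a token round-trip):

```python
import base64
import os

def build_auth_headers(auth: dict) -> dict:
    """Translate an auth config block into HTTP headers (sketch)."""
    kind = auth.get("type", "none")
    if kind == "none":
        return {}
    if kind == "bearer":
        # Secret is resolved from the environment, per the *_env convention
        return {"Authorization": f"Bearer {os.environ[auth['token_env']]}"}
    if kind == "api_key":
        return {auth["header"]: os.environ[auth["key_env"]]}
    if kind == "basic":
        creds = f"{os.environ[auth['username_env']]}:{os.environ[auth['password_env']]}"
        return {"Authorization": "Basic " + base64.b64encode(creds.encode()).decode()}
    raise ValueError(f"unsupported auth type: {kind}")
```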
Use spec_inline instead of spec_url to provide the OpenAPI object directly — no outbound network call required.
Database Connectors
Arcax introspects INFORMATION_SCHEMA to generate MCP tools for queries on tables and views.
| Engine | Type | Connection string |
|---|---|---|
| PostgreSQL / TimescaleDB | postgres | postgresql://user:pass@host:5432/db |
| MySQL / MariaDB | mysql | mysql+pymysql://user:pass@host/db |
| SQL Server | mssql | mssql+pyodbc://user:pass@host/db |
| Oracle | oracle | oracle+cx_oracle://user:pass@host/sid |
| SQLite | sqlite | sqlite:///path/to/file.db |
POST /api/v1/sources
{
"name": "warehouse-db",
"type": "postgres",
"config": {
"connection_url_env": "WAREHOUSE_DB_URL",
"schema": "public",
"table_filter": ["orders", "products", "inventory"],
"read_only": true
}
}
"read_only": true to generate SELECT-only tools. All queries use parameterised statements — no raw SQL construction.
SAP / ERP Connector
Native RFC/BAPI connectivity via PyRFC, plus CDS / OData for S/4HANA. Kyriba is supported via its OpenAPI spec.
SAP RFC / BAPI
POST /api/v1/sources
{
"name": "sap-s4",
"type": "sap",
"config": {
"ashost_env": "SAP_ASHOST",
"sysnr": "00",
"client": "100",
"user_env": "SAP_USER",
"passwd_env": "SAP_PASSWD",
"bapi_filter": ["BAPI_SALESORDER_*", "BAPI_MATERIAL_*"]
}
}
S/4HANA OData
{
"type": "sap_odata",
"config": {
"base_url": "https://s4hana.corp.local/sap/opu/odata/sap/",
"service": "API_SALES_ORDER_SRV",
"auth": {
"type": "basic",
"username_env": "SAP_USER",
"password_env": "SAP_PASSWD"
}
}
}
Kyriba (Treasury)
{
"type": "openapi",
"config": {
"spec_url": "https://api.kyriba.com/openapi/v1/openapi.json",
"auth": {
"type": "oauth2_client",
"token_url": "https://api.kyriba.com/oauth/token",
"client_id_env": "KYRIBA_CLIENT_ID",
"client_secret_env": "KYRIBA_CLIENT_SECRET"
}
}
}
Mainframe Connector
Connects to IBM z/OS via TN3270 terminal emulation. COBOL copybooks are parsed to generate fully-typed Pydantic models.
POST /api/v1/sources
{
"name": "mainframe-cics",
"type": "mainframe",
"config": {
"host_env": "MF_HOST",
"port": 23,
"lu_name": "LUA00001",
"copybook_dir": "/etc/arcax/copybooks/",
"encoding": "EBCDIC",
"transaction_map": {
"get_account": "ACCT",
"list_transactions": "TXNL"
}
}
}
COBOL copybook → Pydantic
01 ACCOUNT-RECORD.
05 ACCT-ID PIC X(10).
05 ACCT-BALANCE PIC S9(13)V99 COMP-3.
05 ACCT-DATE PIC 9(8).
# Generated model
class AccountRecord(BaseModel):
acct_id: str # max_length=10
acct_balance: Decimal # COMP-3 packed decimal
acct_date: date # YYYYMMDD auto-parsed
"encoding": "IBM-273" (German) or any codepage supported by Python's codecs.
OPA Policies
Every tool call is evaluated by Open Policy Agent before execution. Policies are hot-reloaded — no gateway restart needed.
Input schema
{
"tenant_id": "acme",
"agent_id": "claude-3-7",
"product_id": "prod_enterprise",
"tool_name": "sap.get_sales_order",
"source_id": "sap-s4",
"params": {"order_id": "0000012345"},
"timestamp": "2026-04-12T08:30:00Z",
"ip": "10.0.1.5"
}
Example policy
package arcax.tool
import future.keywords
default allow := false
allow if {
product := data.products[input.product_id]
input.tool_name in product.allowed_tools
input.tenant_id == product.tenant_id
}
# Block salary field access for non-HR agents
deny if {
input.tool_name == "workday.get_employee"
not input.agent_id in data.hr_agents
}
Push a policy update
PUT /api/v1/opa/policies/arcax.tool
Content-Type: text/plain
# Rego content here...
# Response
{"updated": true, "reloaded_at": "2026-04-12T09:00:00Z"}
Arcax ships with an embedded OPA sidecar. For external clusters, set OPA_URL in .env.
HSM Integration
PKCS#11 HSM support for production key management. Private keys never leave the hardware boundary.
Configuration
# .env
HSM_LIB_PATH=/usr/lib/softhsm/libsofthsm2.so # or vendor lib
HSM_TOKEN_LABEL=arcax-pki
HSM_PIN_ENV=HSM_PIN
Test with SoftHSM2
apt install softhsm2
softhsm2-util --init-token --slot 0 --label arcax-pki \
--so-pin 0000 --pin 1234
# Verify objects
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so \
--login --pin 1234 --list-objects
Key operations with HSM active
- The CA key is generated inside the HSM at make certs-init — never exported.
- Leaf certificate signing uses PKCS#11 C_Sign.
- Ed25519 payload signatures use the HSM-backed key handle.
- The dashboard shows a 🔒 HSM Active badge in the settings panel.
Other PKCS#11 vendor libraries work the same way, e.g. Entrust nShield libcknfast.so, Thales Luna libCryptoki2_64.so, AWS CloudHSM libcloudhsm_pkcs11.so.
OpenTelemetry
Arcax emits OTLP traces, Prometheus metrics, and structured JSON logs for every operation.
Configuration
# .env
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
OTEL_SERVICE_NAME=arcax-gateway
OTEL_TRACES_SAMPLER=parentbased_traceidratio
OTEL_TRACES_SAMPLER_ARG=1.0
Trace spans
| Span | Key attributes |
|---|---|
| mcp.tool.call | tool_name, tenant_id, cache_hit, duration_ms |
| pipeline.run | source_id, tools_generated, stages_ok |
| opa.eval | policy, result (allow/deny) |
| cache.get | tier (L1/L2), hit |
Prometheus metrics
GET /metrics
arcax_tool_calls_total{tenant,source,tool,status}
arcax_tool_duration_seconds{quantile="0.99"}
arcax_cache_hit_ratio{tier="L1|L2"}
arcax_pipeline_duration_seconds
arcax_budget_used_ratio{tenant,source}
Full observability stack (Docker Compose)
docker compose \
-f docker-compose.yml \
-f docker-compose.otel.yml \
up -d
# Adds: otel-collector, Prometheus, Grafana (port 3001)
Kubernetes
The Helm chart deploys gateway, dashboard, OPA, Redis, PostgreSQL, and optional TimescaleDB with a single command.
Install
helm repo add arcax https://charts.arcax.ai
helm repo update
helm install arcax arcax/arcax \
--namespace arcax --create-namespace \
--set license.key=$ARCAX_LICENSE_KEY \
-f values-prod.yaml
Key values.yaml options
gateway:
replicas: 3
resources:
requests: { cpu: "500m", memory: "512Mi" }
limits: { cpu: "2", memory: "2Gi" }
redis:
enabled: true
architecture: replication
opa:
enabled: true
replicas: 2
pki:
hsm:
enabled: false
libPath: ""
otel:
enabled: true
endpoint: "http://otel-collector:4317"
Probes
livenessProbe:
httpGet: { path: /health, port: 8000 }
initialDelaySeconds: 10
periodSeconds: 30
readinessProbe:
httpGet: { path: /ready, port: 8000 }
initialDelaySeconds: 5
periodSeconds: 10
Air-gap
# Mirror images to private registry
crane copy registry.arcax.ai/gateway:latest \
registry.corp.local/arcax/gateway:latest
# Override in values.yaml
global:
imageRegistry: registry.corp.local
API Reference
Full OpenAPI 3.1 spec available at runtime:
curl http://localhost:8000/openapi.json | jq .
Interactive Swagger UI at: http://localhost:8000/docs
Core endpoints
| Method | Path | Description |
|---|---|---|
| POST | /api/v1/sources | Register a new data source |
| GET | /api/v1/sources | List all registered sources |
| GET | /api/v1/tools | List all generated MCP tools |
| POST | /mcp/call | Invoke a tool (MCP standard) |
| GET | /api/v1/finops/usage | Query usage + cost metrics |
| GET | /api/v1/audit | Immutable audit log |
| GET | /health | Liveness probe |
| GET | /ready | Readiness probe |