Quickstart

Get Arcax running locally in under 5 minutes using Docker Compose.

Prerequisites: Docker 24+, Docker Compose v2+, 4 GB RAM. No cloud account or external internet access required after image pull.
Licensed software: Arcax is distributed via a private container registry and requires a valid license key. Request access at www.arcax.ai/#demo.

1. Authenticate & Start

# License key required — request at www.arcax.ai/#demo
docker login registry.arcax.ai --username $ARCAX_LICENSE_KEY
curl -sO https://install.arcax.ai/docker-compose.yml
cp .env.example .env
docker compose up -d

This starts: gateway (port 8000), postgres, redis, dashboard (port 3000).

2. Register your first source

curl -X POST http://localhost:8000/api/v1/sources \
  -H "Content-Type: application/json" \
  -d '{
    "name": "petstore",
    "type": "openapi",
    "config": {
      "spec_url": "https://petstore3.swagger.io/api/v3/openapi.json"
    }
  }'

3. List generated MCP tools

curl http://localhost:8000/api/v1/tools | jq '.tools[].name'

4. Call a tool

curl -X POST http://localhost:8000/mcp/call \
  -H "Content-Type: application/json" \
  -d '{"tool": "petstore.getPetById", "params": {"petId": 1}}'

Open http://localhost:3000 for the Arcax dashboard — live nerve map, audit log, and ROI metrics.

Installation

Arcax is distributed as container images. Production deployments use Kubernetes with Helm.

Docker Compose (Dev / PoC)

docker compose -f docker-compose.yml up -d

Kubernetes (Production)

helm repo add arcax https://charts.arcax.ai
helm install arcax arcax/arcax \
  --namespace arcax --create-namespace \
  -f values-prod.yaml

Air-Gap Installation

# Export images for offline transfer
docker save arcax/gateway:latest | gzip > arcax-gateway.tar.gz
docker save arcax/dashboard:latest | gzip > arcax-dashboard.tar.gz

# On air-gapped machine
docker load < arcax-gateway.tar.gz
docker load < arcax-dashboard.tar.gz

Configuration

Key environment variables in .env:

Variable                    | Default            | Description
DATABASE_URL                | required           | PostgreSQL connection string
REDIS_URL                   | redis://redis:6379 | Redis for L2 JIT cache
PKI_CA_KEY_PATH             | certs/ca/ca.key    | CA private key (Ed25519)
HSM_LIB_PATH                | optional           | Path to PKCS#11 shared lib
OPA_URL                     | http://opa:8181    | Open Policy Agent endpoint
OTEL_EXPORTER_OTLP_ENDPOINT | optional           | OTLP trace collector URL
CACHE_TTL_L1                | 3600               | L1 in-memory cache TTL (seconds)
CACHE_TTL_L2                | 86400              | L2 Redis cache TTL (seconds)

Architecture

Arcax implements a 5-stage deterministic mediation pipeline that converts any enterprise data source into a certified MCP tool set.

Pipeline Stages

  1. Probe — introspects the source (OpenAPI spec fetch, SQL INFORMATION_SCHEMA query, SAP CDS introspection, or COBOL copybook parse).
  2. Universal Capability Schema (UCS) — a source-agnostic JSON pivot format capturing endpoints, parameters, data types, and auth requirements.
  3. Contextualise — applies field-level transformations (type coercion, date normalisation, EBCDIC decoding) and injects OPA policy stubs per tool.
  4. MCP Generate — emits Python code: Pydantic v2 models + FastMCP @mcp.tool() decorators — all generated at runtime, no codegen step.
  5. JIT Cache — writes compiled tool to L1 (in-process dict) and L2 (Redis SETEX). Subsequent calls bypass all stages. Cache invalidated via pub/sub on source update.

All 5 stages are deterministic and testable. The /api/v1/sources/{id}/probe endpoint runs stages 1–2 in isolation for debugging.
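
The stage flow can be sketched as pure functions composed in order, which is what makes the pipeline deterministic and testable stage by stage. Everything below is an illustrative stand-in (the function names and fake schema are hypothetical), not Arcax's internal API:

```python
def probe(source_config: dict) -> dict:
    # Stage 1: introspect the source; here we fake a tiny raw OpenAPI schema.
    return {"paths": {"/pets/{petId}": {"get": {"operationId": "getPetById"}}}}

def to_ucs(raw_schema: dict) -> list:
    # Stage 2: normalise into the source-agnostic UCS pivot format.
    return [
        {"name": op["operationId"], "path": path, "method": method}
        for path, ops in raw_schema["paths"].items()
        for method, op in ops.items()
    ]

def contextualise(ucs: list) -> list:
    # Stage 3: attach transforms and an OPA policy stub per tool.
    return [{**tool, "policy": "arcax.tool"} for tool in ucs]

def generate(enriched: list) -> dict:
    # Stage 4: in Arcax this emits Pydantic models + FastMCP tools;
    # here we just key the tool definitions by name.
    return {tool["name"]: tool for tool in enriched}

def run_pipeline(source_config: dict) -> dict:
    compiled = generate(contextualise(to_ucs(probe(source_config))))
    # Stage 5 would write `compiled` to the L1/L2 JIT cache.
    return compiled

tools = run_pipeline({"type": "openapi"})
print(sorted(tools))  # ['getPetById']
```

Because each stage depends only on the previous stage's output, any stage can be replayed in isolation, which is exactly what the probe endpoint exploits.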

Zero-Trust Security

Auto-PKI Bootstrap

make certs-init  # generates ca.key, ca.crt + node certs

On first start, AutoEnrollment issues one-hour ephemeral leaf certificates just in time (JIT). Private keys are kept in memory only (or in the HSM if HSM_LIB_PATH is set).

OPA Policy Example

package arcax.tool
import future.keywords.in

default allow = false

allow {
  input.tenant_id == data.tenants[_].id
  input.tool_name in data.products[input.product_id].allowed_tools
}

Payload Signing Verification

import base64
import json

import nacl.signing  # PyNaCl

pub_key_b64 = "..."  # from GET /api/v1/keys/public
verify_key = nacl.signing.VerifyKey(base64.b64decode(pub_key_b64))

response = json.loads(tool_response_body)
sig = base64.b64decode(response["__sig__"])
# The serialisation must be byte-identical to what the gateway signed;
# verify() raises nacl.exceptions.BadSignatureError on any mismatch.
verify_key.verify(json.dumps(response["data"]).encode(), sig)
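
Verification only succeeds if the verifier serialises the payload to the exact bytes the gateway signed, and plain `json.dumps` is sensitive to key order. The sketch below illustrates the pitfall with a stdlib-only canonical serialisation helper; the exact canonicalisation Arcax uses on the signing side is an assumption here, not confirmed by this document:

```python
import json

def canonical_json(obj) -> bytes:
    # Deterministic byte form: sorted keys, no whitespace. The signer and
    # verifier must agree on this (assumed convention, not Arcax-specified).
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

# Same logical payload, different insertion order:
a = {"order_id": "0000012345", "amount": 42}
b = {"amount": 42, "order_id": "0000012345"}

print(json.dumps(a) == json.dumps(b))              # False: key order leaks through
print(canonical_json(a) == canonical_json(b))      # True: canonical form is stable
```

If signatures fail to verify despite a valid key, a serialisation mismatch like this is the first thing to rule out.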

FinOps & ROI Tracking

Every tool call emits a UsageTick event:

{
  "tenant_id": "acme",
  "source_id": "erp-sap",
  "tool_name": "get_sales_order",
  "bytes_in": 128,
  "bytes_out": 4096,
  "duration_ms": 45,
  "timestamp": "2026-04-12T08:30:00Z",
  "cost_unit": 0.0012
}
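
Client-side, these events can be aggregated the same way the FinOps endpoint groups them server-side. A minimal sketch (the tick fields come from the event above; the helper function is hypothetical):

```python
from collections import defaultdict

ticks = [
    {"source_id": "erp-sap", "tool_name": "get_sales_order", "cost_unit": 0.0012},
    {"source_id": "erp-sap", "tool_name": "list_materials", "cost_unit": 0.0008},
    {"source_id": "warehouse-db", "tool_name": "orders_select", "cost_unit": 0.0005},
]

def usage_by(ticks, group_by="source_id"):
    # Sum call counts and cost units per group key, mirroring
    # GET /api/v1/finops/usage?group_by=source.
    totals = defaultdict(lambda: {"calls": 0, "cost": 0.0})
    for t in ticks:
        bucket = totals[t[group_by]]
        bucket["calls"] += 1
        bucket["cost"] += t["cost_unit"]
    return dict(totals)

summary = usage_by(ticks)
# summary["erp-sap"]["calls"] == 2; cost_unit values are summed per source
```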

Query aggregated usage:

GET /api/v1/finops/usage?tenant=acme&period=30d&group_by=source

UAM Pipeline

The Universal Agent Mediation pipeline converts any enterprise source into certified MCP tools via 5 deterministic stages.

Stage            | Input          | Output                                       | Typical duration
1. Probe         | Source config  | Raw schema (OpenAPI / SQL / SAP CDS / COBOL) | ~200 ms
2. UCS           | Raw schema     | Universal Capability Schema (JSON)           | ~50 ms
3. Contextualise | UCS            | Enriched UCS — type transforms + OPA stubs   | ~30 ms
4. MCP Generate  | Enriched UCS   | Pydantic v2 models + FastMCP @tool code      | ~80 ms
5. JIT Cache     | Generated code | Compiled tool written to L1 + L2             | ~10 ms

Trigger / inspect via API

# Re-run pipeline for a source
POST /api/v1/sources/{id}/refresh

# Response
{
  "pipeline_run_id": "run_01j...",
  "stages": {
    "probe": {"status": "ok", "duration_ms": 187},
    "ucs":   {"status": "ok", "duration_ms": 44},
    "ctx":   {"status": "ok", "duration_ms": 31},
    "gen":   {"status": "ok", "duration_ms": 76},
    "cache": {"status": "ok", "duration_ms": 9}
  },
  "tools_generated": 12
}

# Run stages 1-2 only (debug)
GET /api/v1/sources/{id}/probe

JIT Cache

A two-tier cache serves compiled MCP tools in <1 ms after the first pipeline run. No reprocessing on repeated calls.

Tier | Backend                   | Latency  | Scope
L1   | In-process dict (LRU 512) | < 1 µs   | Per gateway pod
L2   | Redis SETEX (msgpack)     | 0.5–2 ms | Cluster-wide
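
The L1 tier's bounded in-process dict can be sketched with an OrderedDict. This is a hypothetical illustration of LRU-512 eviction, not Arcax's actual implementation:

```python
from collections import OrderedDict

class L1Cache:
    """Bounded in-process LRU, sketching the L1 tier (capacity 512 in Arcax)."""

    def __init__(self, capacity: int = 512):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key: str):
        if key not in self._store:
            return None  # miss: caller falls back to L2 (Redis), then the pipeline
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key: str, tool) -> None:
        self._store[key] = tool
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = L1Cache(capacity=2)
cache.put("petstore.getPetById", "<compiled tool>")
cache.put("sap.get_sales_order", "<compiled tool>")
cache.get("petstore.getPetById")                     # touch: now most recently used
cache.put("sap.list_materials", "<compiled tool>")   # evicts sap.get_sales_order
```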

Cache invalidation

# Automatic: pub/sub on source update
# Redis channel: arcax:cache:invalidate
# Payload: {"source_id": "erp-sap", "version": 4}

# Manual
DELETE /api/v1/sources/{id}/cache
# {"invalidated": true, "keys_removed": 2}

Warm-up on startup

Set CACHE_WARMUP=true to pre-populate L1 from L2 on gateway boot — eliminates cold-start latency.

Source Registry

PostgreSQL-backed catalogue of all registered sources, their versioned tool snapshots, and lifecycle states.

Lifecycle

draft → probing → active → archived
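
The lifecycle implies a fixed set of forward transitions. A hypothetical guard sketch — the forward-only chain is taken from the states above, and any other transition rule would be an assumption:

```python
# Hypothetical state machine for source lifecycle; only the forward chain
# draft -> probing -> active -> archived is documented.
TRANSITIONS = {
    "draft": {"probing"},
    "probing": {"active"},
    "active": {"archived"},
    "archived": set(),  # terminal state
}

def can_transition(current: str, target: str) -> bool:
    return target in TRANSITIONS.get(current, set())

print(can_transition("draft", "probing"))    # True
print(can_transition("archived", "active"))  # False
```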

# List active sources
GET /api/v1/sources?status=active

[{
  "id": "erp-sap", "name": "SAP S/4HANA", "type": "sap",
  "status": "active", "tools_count": 23, "version": 4,
  "last_synced": "2026-04-12T08:00:00Z"
}]

Versioning & rollback

# Rollback to a previous version
POST /api/v1/sources/{id}/rollback
{"version": 3}

# Export OpenAPI 3.1 spec from any version
GET /api/v1/sources/{id}/openapi?version=3

Tool filtering

PATCH /api/v1/sources/{id}
{
  "tool_filter": ["get_sales_order", "list_materials"],
  "tool_prefix": "sap."
}
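
The combined effect of tool_filter and tool_prefix can be sketched as an allow-list step followed by a rename step. The helper below is hypothetical; only the field semantics come from the PATCH body above:

```python
def apply_tool_settings(tool_names, tool_filter=None, tool_prefix=""):
    # Keep only allow-listed tools (None means keep all), then apply the prefix.
    kept = [t for t in tool_names if tool_filter is None or t in tool_filter]
    return [f"{tool_prefix}{t}" for t in kept]

names = ["get_sales_order", "list_materials", "delete_order"]
exposed = apply_tool_settings(names, ["get_sales_order", "list_materials"], "sap.")
print(exposed)  # ['sap.get_sales_order', 'sap.list_materials']
```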

REST / OpenAPI Connector

Introspects any OpenAPI 3.0/3.1 spec and generates one MCP tool per operation.

Registration

POST /api/v1/sources
{
  "name": "petstore",
  "type": "openapi",
  "config": {
    "spec_url": "https://petstore3.swagger.io/api/v3/openapi.json",
    "base_url": "https://petstore3.swagger.io",
    "auth": {"type": "bearer", "token_env": "PETSTORE_API_KEY"}
  }
}

Auth methods

Type          | Required config keys
none          | (none)
bearer        | token_env
basic         | username_env, password_env
api_key       | header, key_env
oauth2_client | token_url, client_id_env, client_secret_env

For private APIs or air-gap environments, use spec_inline instead of spec_url to provide the OpenAPI object directly — no outbound network call required.
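
A hypothetical spec_inline registration posted to POST /api/v1/sources might look like this; the inline document is a minimal standard OpenAPI 3.1 object, and only the spec_inline key itself comes from the note above:

```json
{
  "name": "internal-api",
  "type": "openapi",
  "config": {
    "spec_inline": {
      "openapi": "3.1.0",
      "info": {"title": "Internal API", "version": "1.0.0"},
      "paths": {"/status": {"get": {"operationId": "getStatus"}}}
    },
    "base_url": "https://api.corp.local"
  }
}
```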

Database Connectors

Arcax introspects INFORMATION_SCHEMA to generate MCP tools for queries on tables and views.

Engine                   | Type     | Connection string
PostgreSQL / TimescaleDB | postgres | postgresql://user:pass@host:5432/db
MySQL / MariaDB          | mysql    | mysql+pymysql://user:pass@host/db
SQL Server               | mssql    | mssql+pyodbc://user:pass@host/db
Oracle                   | oracle   | oracle+cx_oracle://user:pass@host/sid
SQLite                   | sqlite   | sqlite:///path/to/file.db

POST /api/v1/sources
{
  "name": "warehouse-db",
  "type": "postgres",
  "config": {
    "connection_url_env": "WAREHOUSE_DB_URL",
    "schema": "public",
    "table_filter": ["orders", "products", "inventory"],
    "read_only": true
  }
}

Security: Set "read_only": true to generate SELECT-only tools. All queries use parameterised statements — no raw SQL construction.

SAP / ERP Connector

Native RFC/BAPI connectivity via PyRFC, plus CDS / OData for S/4HANA. Kyriba supported via OpenAPI.

SAP RFC / BAPI

POST /api/v1/sources
{
  "name": "sap-s4",
  "type": "sap",
  "config": {
    "ashost_env": "SAP_ASHOST",
    "sysnr": "00",
    "client": "100",
    "user_env": "SAP_USER",
    "passwd_env": "SAP_PASSWD",
    "bapi_filter": ["BAPI_SALESORDER_*", "BAPI_MATERIAL_*"]
  }
}

S/4HANA OData

{
  "type": "sap_odata",
  "config": {
    "base_url": "https://s4hana.corp.local/sap/opu/odata/sap/",
    "service": "API_SALES_ORDER_SRV",
    "auth": {
      "type": "basic",
      "username_env": "SAP_USER",
      "password_env": "SAP_PASSWD"
    }
  }
}

Kyriba (Treasury)

{
  "type": "openapi",
  "config": {
    "spec_url": "https://api.kyriba.com/openapi/v1/openapi.json",
    "auth": {
      "type": "oauth2_client",
      "token_url": "https://api.kyriba.com/oauth/token",
      "client_id_env": "KYRIBA_CLIENT_ID",
      "client_secret_env": "KYRIBA_CLIENT_SECRET"
    }
  }
}

Mainframe Connector

Connects to IBM z/OS via TN3270 terminal emulation. COBOL copybooks are parsed to generate fully-typed Pydantic models.

POST /api/v1/sources
{
  "name": "mainframe-cics",
  "type": "mainframe",
  "config": {
    "host_env": "MF_HOST",
    "port": 23,
    "lu_name": "LUA00001",
    "copybook_dir": "/etc/arcax/copybooks/",
    "encoding": "EBCDIC",
    "transaction_map": {
      "get_account":       "ACCT",
      "list_transactions": "TXNL"
    }
  }
}

COBOL copybook → Pydantic

       01 ACCOUNT-RECORD.
          05 ACCT-ID      PIC X(10).
          05 ACCT-BALANCE PIC S9(13)V99 COMP-3.
          05 ACCT-DATE    PIC 9(8).

# Generated model
from datetime import date
from decimal import Decimal

from pydantic import BaseModel

class AccountRecord(BaseModel):
    acct_id:      str     # max_length=10
    acct_balance: Decimal # COMP-3 packed decimal
    acct_date:    date    # YYYYMMDD auto-parsed

EBCDIC → UTF-8 conversion uses IBM-1047 by default. Override via "encoding": "IBM-273" (German) or any codepage supported by Python's codecs.
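
Packed-decimal (COMP-3) fields like ACCT-BALANCE store two BCD digits per byte, with the sign carried in the low nibble of the final byte. A minimal stdlib sketch of that decoding — Arcax's actual decoder is not shown in this document, so treat this as illustrative:

```python
from decimal import Decimal

def unpack_comp3(raw: bytes, scale: int = 0) -> Decimal:
    """Decode an IBM packed-decimal field: two BCD digits per byte,
    sign in the low nibble of the last byte (0xB/0xD = negative)."""
    digits = []
    sign = 1
    for i, byte in enumerate(raw):
        hi, lo = byte >> 4, byte & 0x0F
        if i == len(raw) - 1:
            digits.append(hi)  # last byte: one digit + sign nibble
            sign = -1 if lo in (0x0B, 0x0D) else 1
        else:
            digits.extend((hi, lo))
    value = 0
    for d in digits:
        value = value * 10 + d
    return sign * Decimal(value).scaleb(-scale)

# PIC S9(3)V99 COMP-3 example: bytes 0x12 0x34 0x5C -> +123.45
print(unpack_comp3(bytes([0x12, 0x34, 0x5C]), scale=2))  # 123.45
```

The `scale` argument mirrors the implied decimal point of the V99 clause; a PIC S9(13)V99 field like ACCT-BALANCE would use `scale=2` over an 8-byte payload.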

OPA Policies

Every tool call is evaluated by Open Policy Agent before execution. Policies are hot-reloaded — no gateway restart needed.

Input schema

{
  "tenant_id":  "acme",
  "agent_id":   "claude-3-7",
  "product_id": "prod_enterprise",
  "tool_name":  "sap.get_sales_order",
  "source_id":  "sap-s4",
  "params":     {"order_id": "0000012345"},
  "timestamp":  "2026-04-12T08:30:00Z",
  "ip":         "10.0.1.5"
}

Example policy

package arcax.tool
import future.keywords

default allow := false

allow if {
  product := data.products[input.product_id]
  input.tool_name in product.allowed_tools
  input.tenant_id == product.tenant_id
}

# Block salary field access for non-HR agents
deny if {
  input.tool_name == "workday.get_employee"
  not input.agent_id in data.hr_agents
}

Push a policy update

PUT /api/v1/opa/policies/arcax.tool
Content-Type: text/plain

# Rego content here...

# Response
{"updated": true, "reloaded_at": "2026-04-12T09:00:00Z"}

Arcax ships with an embedded OPA sidecar. For external clusters, set OPA_URL in .env.

HSM Integration

PKCS#11 HSM support for production key management. Private keys never leave the hardware boundary.

Configuration

# .env
HSM_LIB_PATH=/usr/lib/softhsm/libsofthsm2.so   # or vendor lib
HSM_TOKEN_LABEL=arcax-pki
HSM_PIN_ENV=HSM_PIN

Test with SoftHSM2

apt install softhsm2
softhsm2-util --init-token --slot 0 --label arcax-pki \
  --so-pin 0000 --pin 1234

# Verify objects
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so \
  --login --pin 1234 --list-objects

Key operations with HSM active

  • CA key is generated inside the HSM at make certs-init — never exported
  • Leaf certificate signing uses PKCS#11 C_Sign
  • Ed25519 payload signatures use the HSM-backed key handle
  • Dashboard shows 🔒 HSM Active badge in the settings panel

Tested vendor libs: nCipher libcknfast.so, Thales Luna libCryptoki2_64.so, AWS CloudHSM libcloudhsm_pkcs11.so.

OpenTelemetry

Arcax emits OTLP traces, Prometheus metrics, and structured JSON logs for every operation.

Configuration

# .env
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
OTEL_SERVICE_NAME=arcax-gateway
OTEL_TRACES_SAMPLER=parentbased_traceidratio
OTEL_TRACES_SAMPLER_ARG=1.0

Trace spans

Span          | Key attributes
mcp.tool.call | tool_name, tenant_id, cache_hit, duration_ms
pipeline.run  | source_id, tools_generated, stages_ok
opa.eval      | policy, result (allow/deny)
cache.get     | tier (L1/L2), hit

Prometheus metrics

GET /metrics

arcax_tool_calls_total{tenant,source,tool,status}
arcax_tool_duration_seconds{quantile="0.99"}
arcax_cache_hit_ratio{tier="L1|L2"}
arcax_pipeline_duration_seconds
arcax_budget_used_ratio{tenant,source}

Full observability stack (Docker Compose)

docker compose \
  -f docker-compose.yml \
  -f docker-compose.otel.yml \
  up -d
# Adds: otel-collector, Prometheus, Grafana (port 3001)

Kubernetes

The Helm chart deploys gateway, dashboard, OPA, Redis, PostgreSQL, and optional TimescaleDB with a single command.

Install

helm repo add arcax https://charts.arcax.ai
helm repo update

helm install arcax arcax/arcax \
  --namespace arcax --create-namespace \
  --set license.key=$ARCAX_LICENSE_KEY \
  -f values-prod.yaml

Key values.yaml options

gateway:
  replicas: 3
  resources:
    requests: { cpu: "500m", memory: "512Mi" }
    limits:   { cpu: "2",    memory: "2Gi"   }

redis:
  enabled: true
  architecture: replication

opa:
  enabled: true
  replicas: 2

pki:
  hsm:
    enabled: false
    libPath: ""

otel:
  enabled: true
  endpoint: "http://otel-collector:4317"

Probes

livenessProbe:
  httpGet: { path: /health, port: 8000 }
  initialDelaySeconds: 10
  periodSeconds: 30

readinessProbe:
  httpGet: { path: /ready, port: 8000 }
  initialDelaySeconds: 5
  periodSeconds: 10

Air-gap

# Mirror images to private registry
crane copy registry.arcax.ai/gateway:latest \
  registry.corp.local/arcax/gateway:latest

# Override in values.yaml
global:
  imageRegistry: registry.corp.local

API Reference

Full OpenAPI 3.1 spec available at runtime:

curl http://localhost:8000/openapi.json | jq .

Interactive Swagger UI at: http://localhost:8000/docs

Core endpoints

Method | Path                 | Description
POST   | /api/v1/sources      | Register a new data source
GET    | /api/v1/sources      | List all registered sources
GET    | /api/v1/tools        | List all generated MCP tools
POST   | /mcp/call            | Invoke a tool (MCP standard)
GET    | /api/v1/finops/usage | Query usage + cost metrics
GET    | /api/v1/audit        | Immutable audit log
GET    | /health              | Liveness probe
GET    | /ready               | Readiness probe
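
A minimal client wrapper over these endpoints might look like the following. The class and method names are hypothetical; only the paths come from the table above, and the sketch builds requests without sending them:

```python
import json
import urllib.request

class ArcaxClient:
    """Tiny hypothetical wrapper over the core Arcax endpoints."""

    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url.rstrip("/")

    def _request(self, method, path, body=None):
        # Build (but do not send) a urllib Request for the given endpoint.
        data = json.dumps(body).encode() if body is not None else None
        return urllib.request.Request(
            f"{self.base_url}{path}",
            data=data,
            method=method,
            headers={"Content-Type": "application/json"},
        )

    def call_tool(self, tool, params):
        return self._request("POST", "/mcp/call", {"tool": tool, "params": params})

    def list_sources(self, status=None):
        query = f"?status={status}" if status else ""
        return self._request("GET", f"/api/v1/sources{query}")

req = ArcaxClient().call_tool("petstore.getPetById", {"petId": 1})
print(req.full_url, req.get_method())  # http://localhost:8000/mcp/call POST
```

To actually execute a request, pass it to `urllib.request.urlopen(req)` against a running gateway.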