Docker Deployment Guide

Launch, verify, and operate the CTMS platform using Docker Compose.

Prerequisites

Before proceeding, complete the following:

  1. System Requirements — Ensure your server meets the minimum specs
  2. Installation — Clone the repo, configure .env.production, and build the API gateway
  3. Initial Setup & Configuration — Provision Frappe with DocTypes, RBAC, and seed data

1. Docker Compose Profiles

Available Profiles

| Profile | Services |
|---|---|
| (default/core) | Caddy, KrakenD, Zynexa, Sublink, ODM API |
| analytics | Cube.js, Cubestore, MCP Server |
| lakehouse | Lakehouse PostgreSQL, Ingester, dbt |
| observability | OpenObserve, OTEL Collector |
| init | ctms-init (one-shot) |
| all | Everything except linux-logs |

Start Commands

# Core only
docker compose --env-file .env.production up -d

# Core + Analytics + AI
docker compose --env-file .env.production --profile analytics up -d

# Full stack
docker compose --env-file .env.production \
--profile analytics --profile lakehouse --profile observability up -d
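
Because `--profile` flags simply accumulate, a small wrapper can assemble the command for whichever stacks you need. A sketch in plain POSIX shell; the profile names come from the table above, everything else is illustrative:

```shell
# Sketch: compose an `up` command from a space-separated profile list.
profiles="analytics lakehouse"                 # edit to the stacks you want
cmd="docker compose --env-file .env.production"
for p in $profiles; do
  cmd="$cmd --profile $p"
done
cmd="$cmd up -d"
echo "$cmd"                                    # review, then run with: eval "$cmd"
```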

Production Overrides (EC2 / IP-Based Access)

The docker-compose.prod.yml override exposes all service ports directly on the host and sets RUNTIME_* URL overrides pointing to http://<EC2_PUBLIC_IP>:<port>:

docker compose -f docker-compose.yml -f docker-compose.prod.yml \
--env-file .env.production --profile analytics --profile lakehouse up -d

2. EC2 Deploy Script

scripts/deploy-ec2.sh automates SSH-based deployment to an AWS EC2 instance:

./scripts/deploy-ec2.sh test-ssh   # Test connection
./scripts/deploy-ec2.sh setup      # Install Docker on the instance
./scripts/deploy-ec2.sh deploy     # Deploy the full stack
./scripts/deploy-ec2.sh update     # Pull latest images + restart
./scripts/deploy-ec2.sh status     # Container status
./scripts/deploy-ec2.sh logs       # View logs
./scripts/deploy-ec2.sh ssh        # SSH into the instance

SSH key: scripts/ssh-keys/your-ec2-key.pem | Deploy path: /opt/ctms-deployment

For the full EC2 setup guide (security groups, DNS, HTTPS), see AWS EC2 Deployment.


3. Verification

Health Endpoints

| Service | URL | OK |
|---|---|---|
| Caddy | http://localhost:8888/health | 200 |
| API Gateway | http://localhost:9080/__health | 200 |
| Zynexa | http://localhost:3000/api/health | 200 |
| Sublink | http://localhost:3001/health | 200 |
| Cube.js | http://localhost:4000/readyz | 200 |
| MCP Server | http://localhost:8006/health | 200 |
| ODM API | http://localhost:8001/health | 200 |
| OpenObserve | http://localhost:5080/healthz | 200 |

Cube Query Smoke Test

curl -s http://localhost:4000/cubejs-api/v1/load \
-H "Content-Type: application/json" \
-d '{"query":{"measures":["Studies.count"]}}' | jq .

Database Table Counts

docker compose --env-file .env.production exec lakehouse-db \
psql -U ctms_user -d ctms_dlh -c \
"SELECT schemaname, count(*) FROM pg_tables WHERE schemaname IN ('bronze','silver','gold') GROUP BY 1 ORDER BY 1;"

4. Production App URLs

Replace <EC2_PUBLIC_IP> with the value of EC2_PUBLIC_IP from your .env.production.

| Service | Port | URL |
|---|---|---|
| Zynexa (CTMS App) | 3000 | http://<EC2_PUBLIC_IP>:3000 |
| Sublink | 3001 | http://<EC2_PUBLIC_IP>:3001 |
| KrakenD API Gateway | 9080 | http://<EC2_PUBLIC_IP>:9080 |
| Cube.js (Semantic Layer) | 4000 | http://<EC2_PUBLIC_IP>:4000 |
| MCP Server (AI Agent) | 8006 | http://<EC2_PUBLIC_IP>:8006 |
| ODM API | 8001 | http://<EC2_PUBLIC_IP>:8001 |
| OpenObserve | 5080 | http://<EC2_PUBLIC_IP>:5080 |
| Lakehouse DB | 5433 | <EC2_PUBLIC_IP>:5433 |

Credentials for each service come from the corresponding env vars in .env.production (CUBEJS_API_SECRET, OPENOBSERVE_ROOT_EMAIL / PASSWORD, etc.).
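
The URL list above can be generated from your env value directly. A sketch; 203.0.113.10 is a placeholder documentation IP, and the port map simply mirrors the table:

```shell
# Sketch: print each service's public URL from EC2_PUBLIC_IP.
EC2_PUBLIC_IP=${EC2_PUBLIC_IP:-203.0.113.10}   # placeholder; export the real value
for svc in zynexa:3000 sublink:3001 api-gateway:9080 cube:4000 mcp:8006 odm:8001 openobserve:5080; do
  printf '%-14s http://%s:%s\n' "${svc%%:*}" "$EC2_PUBLIC_IP" "${svc##*:}"
done
printf '%-14s %s:5433\n' "lakehouse-db" "$EC2_PUBLIC_IP"
```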


5. Data Pipeline: Ingester and dbt

Sample Data Required

The data pipeline ingests data from Frappe Cloud into the analytics database. Before running it, ensure your Frappe instance has been populated with sample clinical data — e.g. Studies, Sites, Subjects, Practitioners, Vital Signs, Drug Prescriptions, etc. Without this data, the pipeline will produce empty tables and the dashboards/analytics will have nothing to display.

After verifying the platform is running, populate the analytics database by running the data pipeline.

Frappe Cloud --> Ingester (DLT) --> Lakehouse DB (PostgreSQL) <-- dbt
                 (Bronze layer)     (Bronze, Silver, Gold)        (197 tests)

For detailed documentation on each stage, see Data Pipeline — Ingester and Data Pipeline — dbt.

5.1 Start Lakehouse DB and Run Pipeline

# Start DB and wait for healthy
docker compose --env-file .env.production --profile lakehouse up -d lakehouse-db

# Run Ingester: Bronze (~46 tables)
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester

# Run dbt: Silver + Gold (bronze ~63, silver ~7, gold ~28 tables)
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt daily

The daily command runs: dbt deps, dbt build, dbt run --select elementary. Expected dbt output: PASS=197 WARN=5 ERROR=0
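
In automation it helps to fail fast when that summary reports errors. A sketch that parses a captured summary line; only the PASS/WARN/ERROR values come from the expected output above, the SKIP field is illustrative:

```shell
# Sketch: extract ERROR= from a dbt summary line and fail if it's non-zero.
summary="Done. PASS=197 WARN=5 ERROR=0 SKIP=0"   # e.g. the last summary line of the dbt log
errors=$(printf '%s\n' "$summary" | sed -n 's/.*ERROR=\([0-9][0-9]*\).*/\1/p')
if [ "${errors:-1}" -gt 0 ]; then                # default to 1 if parsing failed
  echo "dbt reported ${errors:-unknown} error(s)" >&2
  exit 1
fi
echo "dbt summary clean: $summary"
```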

5.2 Remote Database Alternative

To run the pipeline against a managed PostgreSQL instance instead of the bundled lakehouse-db container, override these variables in .env.production:

TARGET_DB_HOST=your-db-host.example.com
TARGET_DB_PORT=5432
TARGET_DB_SSLMODE=require

5.3 Schedule with Cron

0 2 * * * cd /opt/ctms-deployment && \
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester && \
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt daily \
>> /var/log/ctms-pipeline.log 2>&1
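
For anything beyond a one-liner, a small wrapper script keeps the crontab readable. A sketch; the path and commands mirror the cron entry above, and `sh -n` only syntax-checks the generated script without invoking Docker:

```shell
# Sketch: generate a nightly pipeline wrapper, then syntax-check it.
cat > /tmp/run-pipeline.sh <<'EOF'
#!/bin/sh
set -eu   # abort on the first failing step
cd /opt/ctms-deployment
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt daily
EOF
chmod +x /tmp/run-pipeline.sh
sh -n /tmp/run-pipeline.sh && echo "syntax OK"
```

The crontab line then shrinks to `0 2 * * * /opt/ctms-deployment/run-pipeline.sh >> /var/log/ctms-pipeline.log 2>&1` once the script is placed on the instance.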

6. External / Vendor Service Dashboards

Supabase (Auth Provider)

Supabase can run as a cloud instance or self-hosted via the supabase/ folder in ctms.devops:

| Deployment | Dashboard URL | SUPABASE_URL |
|---|---|---|
| Cloud | https://supabase.com/dashboard/project/<ref> | https://<ref>.supabase.co |
| Self-hosted | http://localhost:8000 (Studio) | http://localhost:8000 |

| Field | Env Var |
|---|---|
| Project URL | SUPABASE_URL |
| Anon Key | SUPABASE_ANON_KEY |
| Service Key | SUPABASE_KEY |
| DB Credentials | ctms_user / ctms_pwd (self-hosted) |

Self-hosted Supabase ships with CTMS-specific init scripts: profiles table, handle_new_user() trigger, devices, medication_consumption_logs, notification_logs, and get_medication_status(). These are applied by the ctms-supabase-seed container — see Self-Hosted Vendor Stacks for the full setup guide.

Frappe (Clinical Data Backend)

Frappe can run as Frappe Cloud or self-hosted via the frappe-marley-health/ folder in ctms.devops. For self-hosted Frappe, the setup service automates wizard completion, admin user creation, and API token generation — see Self-Hosted Vendor Stacks.

| Deployment | Dashboard URL | FRAPPE_URL |
|---|---|---|
| Cloud | Frappe Cloud dashboard | https://<site>.frappe.cloud |
| Self-hosted | http://localhost:8080/app | http://localhost:8080 |

| Field | Env Var |
|---|---|
| Site URL | FRAPPE_URL |
| API Token | FRAPPE_API_TOKEN |
| Dashboard | <FRAPPE_URL>/app |

To regenerate the API token on a self-hosted instance:

docker exec -w /home/frappe/frappe-bench frappe-marley-health-backend-1 \
bash -c 'source env/bin/activate && python3 /setup/frappe-generate-token.py'

OpenAI (AI / MCP Server)

| Field | Env Var |
|---|---|
| API Key | OPENAI_API_KEY |
| Model | OPENAI_MODEL |

All values are configured in .env.production. See Installation for the full list.


7. Docker Network Architecture

All three stacks — Supabase, Frappe, and CTMS core — share a single Docker bridge network called ctms-network. This allows every container to discover and communicate with any other container by name, without port mapping.

Shared Network: ctms-network

┌─────────────────────────── ctms-network (bridge) ───────────────────────────┐
│                                                                             │
│  ┌─ Supabase ──────────┐  ┌─ Frappe ──────────┐  ┌─ CTMS Core ───────────┐  │
│  │ supabase-kong :8000 │  │ frontend :8080    │  │ caddy, api-gateway    │  │
│  │ supabase-db :5432   │  │ backend :8000     │  │ zynexa, sublink       │  │
│  │ supabase-auth :9999 │  │ db (MariaDB)      │  │ cube, mcp-server      │  │
│  │ supabase-rest :3000 │  │ redis-cache/queue │  │ lakehouse-db, odm-api │  │
│  │ supabase-realtime   │  │ websocket         │  │ otel, openobserve     │  │
│  │ supabase-storage    │  │ scheduler         │  │                       │  │
│  │ supabase-studio     │  │ queue-short/long  │  │                       │  │
│  │ ...13 services      │  │ ...11 services    │  │ ...16+ services       │  │
│  └─────────────────────┘  └───────────────────┘  └───────────────────────┘  │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

How It Works

The network is defined as name: ctms-network with driver: bridge in all three docker-compose.yml files. Whichever stack starts first creates the network; the remaining stacks reuse it.
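
The relevant fragment of each compose file looks roughly like this. A sketch: the docs only state the `name:` and `driver:` values, and whether the key under `networks:` is `default` or a named network may differ per stack:

```yaml
# Sketch: shared bridge network declaration (repeated in all three compose files).
networks:
  default:
    name: ctms-network   # fixed runtime name, so every stack joins the same network
    driver: bridge
```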

Recommended startup order:

# 1. Supabase (auth provider – needed by CTMS apps)
cd supabase && make up

# 2. Frappe (clinical data backend – needed by CTMS apps)
cd frappe-marley-health && make up

# 3. CTMS core services
docker compose --env-file .env.production --profile all up -d

Cross-Stack Container Discovery

Because all containers share one network, they can reach each other directly:

| From | To | Internal URL |
|---|---|---|
| Zynexa / KrakenD | Supabase API | http://supabase-kong:8000 |
| Zynexa / KrakenD | Frappe | http://frontend:8080 |
| Supabase Edge Functions | Frappe API | http://frontend:8080 |
| Ingester | Frappe (via Caddy) | https://api.localhost |
| Any container | Supabase DB | postgresql://ctms_user:ctms_pwd@supabase-db:5432/postgres |
| Any container | Lakehouse DB | postgresql://ctms_user:ctms_pwd@lakehouse-db:5432/ctms_dlh |

Network Commands

# Inspect the shared network
docker network inspect ctms-network

# List containers on the network
docker network inspect ctms-network --format '{{range .Containers}}{{.Name}} {{end}}'

# Create the network manually (not normally needed)
docker network create --driver bridge ctms-network

# Check if the network exists
docker network ls | grep ctms-network

Stopping Stacks

When stopping, note that the last stack to stop removes the shared network. If another stack is still running, docker compose down will skip network removal automatically.

# Stop in reverse order (CTMS → Frappe → Supabase)
docker compose --env-file .env.production down
cd frappe-marley-health && make down
cd supabase && make down

8. Database Connection (Lakehouse)

| Field | Env Var | Default |
|---|---|---|
| Host (external) |  | EC2_PUBLIC_IP or localhost |
| Host (Docker internal) |  | lakehouse-db |
| Port (external) | LAKEHOUSE_DB_PORT | 5433 |
| Port (internal) |  | 5432 |
| Database | TARGET_DB_NAME | ctms_dlh |
| User | TARGET_DB_USER | ctms_user |
| Password | TARGET_DB_PASSWORD | (set in .env.production) |

Connection string pattern:

postgresql://<TARGET_DB_USER>:<TARGET_DB_PASSWORD>@<HOST>:<PORT>/<TARGET_DB_NAME>
# From your machine (EC2)
psql -h <EC2_PUBLIC_IP> -p 5433 -U <TARGET_DB_USER> -d <TARGET_DB_NAME>

# From your machine (local Docker)
psql -h localhost -p 5433 -U <TARGET_DB_USER> -d <TARGET_DB_NAME>

# From inside Docker network
psql -h lakehouse-db -p 5432 -U <TARGET_DB_USER> -d <TARGET_DB_NAME>
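
The pattern above can be assembled from the env vars directly. A sketch; the default values come from the table, `changeme` and the `DB_HOST`/`DB_PORT` variable names are placeholders for illustration:

```shell
# Sketch: build the lakehouse DSN from the TARGET_DB_* variables.
TARGET_DB_USER=${TARGET_DB_USER:-ctms_user}
TARGET_DB_PASSWORD=${TARGET_DB_PASSWORD:-changeme}   # placeholder
TARGET_DB_NAME=${TARGET_DB_NAME:-ctms_dlh}
DB_HOST=${DB_HOST:-localhost}    # or EC2_PUBLIC_IP / lakehouse-db
DB_PORT=${DB_PORT:-5433}         # 5432 from inside the Docker network
DSN="postgresql://${TARGET_DB_USER}:${TARGET_DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${TARGET_DB_NAME}"
echo "$DSN"
```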

9. Local Development URLs

Add to /etc/hosts:

127.0.0.1 api.localhost zynexa.localhost sublink.localhost cube.localhost mcp.localhost odm.localhost observe.localhost

| Service | URL |
|---|---|
| Zynexa | https://zynexa.localhost |
| Sublink | https://sublink.localhost |
| API Gateway | https://api.localhost |
| Cube.js | https://cube.localhost |
| MCP Server | https://mcp.localhost |
| ODM API | https://odm.localhost |
| OpenObserve | https://observe.localhost |
| Lakehouse DB | localhost:5433 |

10. Operations

# Logs
docker compose --env-file .env.production logs -f cube
docker compose --env-file .env.production logs --tail 100 zynexa

# Restart a service
docker compose --env-file .env.production restart cube

# Stop all
docker compose --env-file .env.production --profile analytics --profile lakehouse down

# Update (pull + recreate)
docker compose --env-file .env.production --profile analytics --profile lakehouse pull
docker compose --env-file .env.production --profile analytics --profile lakehouse up -d

# Pull latest KrakenD image + restart
docker compose --env-file .env.production pull api-gateway && \
docker compose --env-file .env.production up -d api-gateway

# Clean restart (WARNING: destroys data)
docker compose --env-file .env.production --profile analytics --profile lakehouse down -v

11. Quick Start Checklist

  1. Complete System Requirements, Installation, and Initial Setup
  2. Start platform: docker compose --env-file .env.production --profile analytics up -d
  3. Verify health endpoints (Section 3)
  4. Start lakehouse: docker compose --env-file .env.production --profile lakehouse up -d lakehouse-db
  5. Run ingester: docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester
  6. Run dbt: docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt daily
  7. Log in to Zynexa

12. Troubleshooting

| Problem | Cause | Fix |
|---|---|---|
| Cube unhealthy | Gold tables missing | Run ingester + dbt (section 5) |
| MCP can't reach Cube | Wrong CUBE_API_URL | Must be http://cube:4000/cubejs-api/v1 |
| Cubestore crash (ARM) | amd64-only image | Set CUBEJS_DEV_MODE=true, unset CUBEJS_CUBESTORE_HOST |
| Duplicate CORS headers | Caddy and backend both add the header | Caddy's handle @options block handles preflight; backends should not add their own CORS headers |
| Ingester can't reach Frappe | FRAPPE_BASE_URL unresolvable | Needs Caddy + KrakenD running; uses extra_hosts: api.localhost:host-gateway |
| Continue Wait on queries | Cubestore unreachable | Start cubestore or comment out CUBEJS_CUBESTORE_HOST |

Architecture

                    +--------------+
                    | Frappe Cloud |
                    +------+-------+
                           |
          +----------------+----------------+
          v                v                v
    +-----------+   +------------+   +------------+
    |   Caddy   |   |  Ingester  |   | ctms-init  |
    |(Rev Proxy)|   |(DLT>Bronze)|   |(Provision) |
    +-----+-----+   +-----+------+   +------------+
          |               |
    +-----v-----+   +-----v------+
    |  KrakenD  |   |Lakehouse DB|<-- dbt (Bronze>Silver>Gold)
    | (API GW)  |   |(PostgreSQL)|
    +-----+-----+   +-----+------+
          |               |
   +------+------+  +-----v------+
   v      v      v  |  Cube.js   |-- Cubestore
 Zynexa Sublink ODM | (Semantic) |
                    +-----+------+
                          |
                          v
                    +------------+   +----------+
                    | MCP Server |   | Supabase |
                    | (AI Agent) |   |  (Auth)  |
                    +------------+   +----------+