Docker Deployment Guide
Launch, verify, and operate the CTMS platform using Docker Compose.
Before proceeding, complete the following:
- System Requirements — Ensure your server meets the minimum specs
- Installation — Clone the repo, configure .env.production, and build the API gateway
- Initial Setup & Configuration — Provision Frappe with DocTypes, RBAC, and seed data
1. Docker Compose Profiles
Available Profiles
| Profile | Services |
|---|---|
| (default/core) | Caddy, KrakenD, Zynexa, Sublink, ODM API |
| analytics | Cube.js, Cubestore, MCP Server |
| lakehouse | Lakehouse PostgreSQL, Ingester, dbt |
| observability | OpenObserve, OTEL Collector |
| init | ctms-init (one-shot) |
| all | Everything except linux-logs |
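Profiles are attached per service in the compose file via the standard `profiles:` key. A hypothetical excerpt (image tags and exact service definitions are illustrative, not copied from the real file):

```yaml
# Illustrative sketch: services with a `profiles:` entry only start when that
# profile is requested; services without one belong to the default/core set.
services:
  cube:
    image: cubejs/cube:latest
    profiles: ["analytics"]
  lakehouse-db:
    image: postgres:16
    profiles: ["lakehouse"]
```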
Start Commands
```shell
# Core only
docker compose --env-file .env.production up -d

# Core + Analytics + AI
docker compose --env-file .env.production --profile analytics up -d

# Full stack
docker compose --env-file .env.production \
  --profile analytics --profile lakehouse --profile observability up -d
```
Production Overrides (EC2 / IP-Based Access)
Exposes all service ports directly on the host and sets RUNTIME_* URL overrides pointing to http://<EC2_PUBLIC_IP>:<port>:

```shell
docker compose -f docker-compose.yml -f docker-compose.prod.yml \
  --env-file .env.production --profile analytics --profile lakehouse up -d
```
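As a rough sketch of what such an override layer contains (the service name, port, and `RUNTIME_API_URL` variable below are illustrative, not the actual contents of docker-compose.prod.yml):

```yaml
# Illustrative override sketch: publish a service port on the host and
# rewrite its public URL to the EC2 address via compose interpolation.
services:
  zynexa:
    ports:
      - "3000:3000"
    environment:
      RUNTIME_API_URL: "http://${EC2_PUBLIC_IP}:9080"
```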
2. EC2 Deploy Script
scripts/deploy-ec2.sh automates SSH-based deployment to an AWS EC2 instance:
```shell
./scripts/deploy-ec2.sh test-ssh   # Test connection
./scripts/deploy-ec2.sh setup      # Install Docker on the instance
./scripts/deploy-ec2.sh deploy     # Deploy the full stack
./scripts/deploy-ec2.sh update     # Pull latest images + restart
./scripts/deploy-ec2.sh status     # Container status
./scripts/deploy-ec2.sh logs       # View logs
./scripts/deploy-ec2.sh ssh        # SSH into the instance
```
SSH key: `scripts/ssh-keys/your-ec2-key.pem` | Deploy path: `/opt/ctms-deployment`
For the full EC2 setup guide (security groups, DNS, HTTPS), see AWS EC2 Deployment.
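A wrapper like this is typically a small case dispatch over the subcommand. A minimal sketch of that pattern (not the real script — each branch just prints what it would run, and the `<user>@<host>` placeholder stands in for your SSH target):

```shell
#!/usr/bin/env sh
# Sketch of the subcommand-dispatch pattern behind deploy-ec2.sh.
# The real script wires each branch to ssh/docker commands.
SSH_KEY="scripts/ssh-keys/your-ec2-key.pem"
DEPLOY_PATH="/opt/ctms-deployment"

dispatch() {
  case "$1" in
    test-ssh) echo "ssh -i $SSH_KEY <user>@<host> true" ;;
    status)   echo "ssh: cd $DEPLOY_PATH && docker compose ps" ;;
    logs)     echo "ssh: cd $DEPLOY_PATH && docker compose logs -f" ;;
    *)        echo "usage: deploy-ec2.sh {test-ssh|setup|deploy|update|status|logs|ssh}" >&2
              return 2 ;;
  esac
}
```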
3. Verification
Health Endpoints
| Service | URL | OK |
|---|---|---|
| Caddy | http://localhost:8888/health | 200 |
| API Gateway | http://localhost:9080/__health | 200 |
| Zynexa | http://localhost:3000/api/health | 200 |
| Sublink | http://localhost:3001/health | 200 |
| Cube.js | http://localhost:4000/readyz | 200 |
| MCP Server | http://localhost:8006/health | 200 |
| ODM API | http://localhost:8001/health | 200 |
| OpenObserve | http://localhost:5080/healthz | 200 |
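The whole table can be probed in one pass with a small loop (endpoint list copied from the table above; assumes curl is installed locally):

```shell
#!/usr/bin/env sh
# Probe every health endpoint from the table and print the HTTP status.
# A down service shows 000 (connection refused/timeout) instead of 200.
command -v curl >/dev/null 2>&1 || exit 0   # skip quietly if curl is absent

endpoints="8888/health 9080/__health 3000/api/health 3001/health \
4000/readyz 8006/health 8001/health 5080/healthz"

for ep in $endpoints; do
  code=$(curl -s -o /dev/null -m 3 -w '%{http_code}' "http://localhost:$ep" || true)
  printf '%-20s %s\n' "$ep" "$code"
done
```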
Cube Query Smoke Test
```shell
curl -s http://localhost:4000/cubejs-api/v1/load \
  -H "Content-Type: application/json" \
  -d '{"query":{"measures":["Studies.count"]}}' | jq .
```
Database Table Counts
```shell
docker compose --env-file .env.production exec lakehouse-db \
  psql -U ctms_user -d ctms_dlh -c \
  "SELECT schemaname, count(*) FROM pg_tables WHERE schemaname IN ('bronze','silver','gold') GROUP BY 1 ORDER BY 1;"
```
4. Production App URLs
Replace <EC2_PUBLIC_IP> with the value of EC2_PUBLIC_IP from your .env.production.
| Service | Port | URL |
|---|---|---|
| Zynexa (CTMS App) | 3000 | http://<EC2_PUBLIC_IP>:3000 |
| Sublink | 3001 | http://<EC2_PUBLIC_IP>:3001 |
| KrakenD API Gateway | 9080 | http://<EC2_PUBLIC_IP>:9080 |
| Cube.js (Semantic Layer) | 4000 | http://<EC2_PUBLIC_IP>:4000 |
| MCP Server (AI Agent) | 8006 | http://<EC2_PUBLIC_IP>:8006 |
| ODM API | 8001 | http://<EC2_PUBLIC_IP>:8001 |
| OpenObserve | 5080 | http://<EC2_PUBLIC_IP>:5080 |
| Lakehouse DB | 5433 | <EC2_PUBLIC_IP>:5433 |
Credentials for each service come from the corresponding env vars in .env.production (CUBEJS_API_SECRET, OPENOBSERVE_ROOT_EMAIL / PASSWORD, etc.).
5. Data Pipeline: Ingester and dbt
The data pipeline ingests data from Frappe Cloud into the analytics database. Before running it, ensure your Frappe instance has been populated with sample clinical data — e.g. Studies, Sites, Subjects, Practitioners, Vital Signs, Drug Prescriptions, etc. Without this data, the pipeline will produce empty tables and the dashboards/analytics will have nothing to display.
After verifying the platform is running, populate the analytics database by running the data pipeline.
```
Frappe Cloud --> Ingester (DLT) --> Lakehouse DB (PostgreSQL) <-- dbt
                 (Bronze layer)     (Bronze, Silver, Gold)        (197 tests)
```
For detailed documentation on each stage, see Data Pipeline — Ingester and Data Pipeline — dbt.
5.1 Start Lakehouse DB and Run Pipeline
```shell
# Start DB and wait for healthy
docker compose --env-file .env.production --profile lakehouse up -d lakehouse-db

# Run Ingester: Bronze (~46 tables)
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester

# Run dbt: Silver + Gold (bronze ~63, silver ~7, gold ~28 tables)
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt daily
```
The daily command runs: dbt deps, dbt build, dbt run --select elementary.
Expected dbt output: PASS=197 WARN=5 ERROR=0
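When the pipeline runs unattended it can be useful to fail loudly unless the summary reports ERROR=0. A hedged sketch (the `summary` value is a stand-in; a real script would capture the last summary line of the dbt output):

```shell
#!/usr/bin/env sh
# Sketch: gate on the dbt summary line shown above.
summary="Done. PASS=197 WARN=5 ERROR=0 SKIP=0 TOTAL=202"

case "$summary" in
  *" ERROR=0 "*) echo "dbt build clean" ;;
  *)             echo "dbt reported errors: $summary" >&2; exit 1 ;;
esac
```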
5.2 Remote Database Alternative
To point the pipeline at a managed or remote PostgreSQL instead of the bundled lakehouse-db, override the target variables in .env.production:

```shell
TARGET_DB_HOST=your-db-host.example.com
TARGET_DB_PORT=5432
TARGET_DB_SSLMODE=require
```
5.3 Schedule with Cron
```shell
# Crontab entries must be a single line (cron does not support continuations):
0 2 * * * cd /opt/ctms-deployment && docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester && docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt daily >> /var/log/ctms-pipeline.log 2>&1
```
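An equivalent wrapper script (hypothetical, not shipped with the repo) keeps the crontab entry short, timestamps each stage, and stops before dbt if the ingester fails:

```shell
#!/usr/bin/env sh
# Hypothetical cron wrapper: aborts on the first failing stage so dbt
# never runs against a half-loaded bronze layer.
set -eu

run_step() {
  printf '%s ==> %s\n' "$(date -u +%FT%TZ)" "$*"
  "$@"
}

# In production, uncomment the two pipeline stages:
# run_step docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester
# run_step docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt daily
```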
6. External / Vendor Service Dashboards
Supabase (Auth Provider)
Supabase can run as a cloud instance or self-hosted via the supabase/ folder in ctms.devops:
| Deployment | Dashboard URL | SUPABASE_URL |
|---|---|---|
| Cloud | https://supabase.com/dashboard/project/<ref> | https://<ref>.supabase.co |
| Self-hosted | http://localhost:8000 (Studio) | http://localhost:8000 |
| Field | Value / Env Var |
|---|---|
| Project URL | SUPABASE_URL |
| Anon Key | SUPABASE_ANON_KEY |
| Service Key | SUPABASE_KEY |
| DB Credentials | ctms_user / ctms_pwd (self-hosted) |
Self-hosted Supabase ships with CTMS-specific init scripts: profiles table, handle_new_user() trigger, devices, medication_consumption_logs, notification_logs, and get_medication_status(). These are applied by the ctms-supabase-seed container — see Self-Hosted Vendor Stacks for the full setup guide.
Frappe (Clinical Data Backend)
Frappe can run as Frappe Cloud or self-hosted via the frappe-marley-health/ folder in ctms.devops. For self-hosted Frappe, the setup service automates wizard completion, admin user creation, and API token generation — see Self-Hosted Vendor Stacks.
| Deployment | Dashboard URL | FRAPPE_URL |
|---|---|---|
| Cloud | Frappe Cloud dashboard | https://<site>.frappe.cloud |
| Self-hosted | http://localhost:8080/app | http://localhost:8080 |
| Field | Value / Env Var |
|---|---|
| Site URL | FRAPPE_URL |
| API Token | FRAPPE_API_TOKEN |
| Dashboard | <FRAPPE_URL>/app |
To regenerate the API token on a self-hosted instance:
```shell
docker exec -w /home/frappe/frappe-bench frappe-marley-health-backend-1 \
  bash -c 'source env/bin/activate && python3 /setup/frappe-generate-token.py'
```
OpenAI (AI / MCP Server)
| Field | Env Var |
|---|---|
| API Key | OPENAI_API_KEY |
| Model | OPENAI_MODEL |
All values are configured in .env.production. See Installation for the full list.
7. Docker Network Architecture
All three stacks — Supabase, Frappe, and CTMS core — share a single Docker bridge network called ctms-network. Every container can therefore discover and reach any other container by service name, without publishing ports on the host.
Shared Network: ctms-network
```
┌─────────────────────────── ctms-network (bridge) ──────────────────────────────┐
│                                                                                │
│  ┌─ Supabase ──────────┐  ┌─ Frappe ──────────┐  ┌─ CTMS Core ─────────────┐   │
│  │ supabase-kong :8000 │  │ frontend :8080    │  │ caddy, api-gateway      │   │
│  │ supabase-db :5432   │  │ backend :8000     │  │ zynexa, sublink         │   │
│  │ supabase-auth :9999 │  │ db (MariaDB)      │  │ cube, mcp-server        │   │
│  │ supabase-rest :3000 │  │ redis-cache/queue │  │ lakehouse-db, odm-api   │   │
│  │ supabase-realtime   │  │ websocket         │  │ otel, openobserve       │   │
│  │ supabase-storage    │  │ scheduler         │  │                         │   │
│  │ supabase-studio     │  │ queue-short/long  │  │                         │   │
│  │ ...13 services      │  │ ...11 services    │  │ ...16+ services         │   │
│  └─────────────────────┘  └───────────────────┘  └─────────────────────────┘   │
│                                                                                │
└────────────────────────────────────────────────────────────────────────────────┘
```
How It Works
The network is defined as name: ctms-network with driver: bridge in all three docker-compose.yml files. Whichever stack starts first creates the network; the remaining stacks reuse it.
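A hedged sketch of the corresponding compose fragment (whether the network is attached under the `default` key or a named key may differ between the actual files):

```yaml
# Shared-network fragment: an equivalent block appears in each stack's
# compose file, so `docker compose` creates ctms-network if it is missing
# and reuses it otherwise.
networks:
  default:
    name: ctms-network
    driver: bridge
```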
Recommended startup order:
```shell
# 1. Supabase (auth provider – needed by CTMS apps)
(cd supabase && make up)

# 2. Frappe (clinical data backend – needed by CTMS apps)
(cd frappe-marley-health && make up)

# 3. CTMS core services
docker compose --env-file .env.production --profile all up -d
```
Cross-Stack Container Discovery
Because all containers share one network, they can reach each other directly:
| From | To | Internal URL |
|---|---|---|
| Zynexa / KrakenD | Supabase API | http://supabase-kong:8000 |
| Zynexa / KrakenD | Frappe | http://frontend:8080 |
| Supabase Edge Functions | Frappe API | http://frontend:8080 |
| Ingester | Frappe (via Caddy) | https://api.localhost |
| Any container | Supabase DB | postgresql://ctms_user:ctms_pwd@supabase-db:5432/postgres |
| Any container | Lakehouse DB | postgresql://ctms_user:ctms_pwd@lakehouse-db:5432/ctms_dlh |
Network Commands
```shell
# Inspect the shared network
docker network inspect ctms-network

# List containers on the network
docker network inspect ctms-network --format '{{range .Containers}}{{.Name}} {{end}}'

# Create the network manually (not normally needed)
docker network create --driver bridge ctms-network

# Check if the network exists
docker network ls | grep ctms-network
```
Stopping Stacks
When stopping, note that the last stack to stop removes the shared network. If another stack is still running, docker compose down will skip network removal automatically.
```shell
# Stop in reverse order (CTMS → Frappe → Supabase)
docker compose --env-file .env.production down
(cd frappe-marley-health && make down)
(cd supabase && make down)
```
8. Database Connection (Lakehouse)
| Field | Env Var | Default |
|---|---|---|
| Host (external) | EC2_PUBLIC_IP or localhost | — |
| Host (Docker internal) | — | lakehouse-db |
| Port (external) | LAKEHOUSE_DB_PORT | 5433 |
| Port (internal) | — | 5432 |
| Database | TARGET_DB_NAME | ctms_dlh |
| User | TARGET_DB_USER | ctms_user |
| Password | TARGET_DB_PASSWORD | (set in .env.production) |
Connection string pattern:
```
postgresql://<TARGET_DB_USER>:<TARGET_DB_PASSWORD>@<HOST>:<PORT>/<TARGET_DB_NAME>
```

```shell
# From your machine (EC2)
psql -h <EC2_PUBLIC_IP> -p 5433 -U <TARGET_DB_USER> -d <TARGET_DB_NAME>

# From your machine (local Docker)
psql -h localhost -p 5433 -U <TARGET_DB_USER> -d <TARGET_DB_NAME>

# From inside the Docker network
psql -h lakehouse-db -p 5432 -U <TARGET_DB_USER> -d <TARGET_DB_NAME>
```
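The connection-string pattern can be assembled directly from the env vars, e.g. in a helper script (the defaults below are placeholders for illustration; real values come from .env.production):

```shell
#!/usr/bin/env sh
# Build the lakehouse DSN from the variables in .env.production.
DB_USER="${TARGET_DB_USER:-ctms_user}"
DB_PASS="${TARGET_DB_PASSWORD:-changeme}"
DB_NAME="${TARGET_DB_NAME:-ctms_dlh}"
DB_HOST="${DB_HOST:-localhost}"     # use lakehouse-db from inside the Docker network
DB_PORT="${DB_PORT:-5433}"          # use 5432 from inside the Docker network

DSN="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DSN"
```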
9. Local Development URLs
Add to /etc/hosts:
```
127.0.0.1 api.localhost zynexa.localhost sublink.localhost cube.localhost mcp.localhost odm.localhost observe.localhost
```
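If you prefer not to edit the file by hand, an idempotent helper (hypothetical; run it against /etc/hosts as root) could look like:

```shell
#!/usr/bin/env sh
# ensure_hosts_entry FILE -- append the CTMS hosts line only if it is not
# already present, so repeated runs stay idempotent.
HOSTS_LINE='127.0.0.1 api.localhost zynexa.localhost sublink.localhost cube.localhost mcp.localhost odm.localhost observe.localhost'

ensure_hosts_entry() {
  grep -qF 'api.localhost' "$1" || printf '%s\n' "$HOSTS_LINE" >> "$1"
}

# usage (needs root to modify the real file):
#   ensure_hosts_entry /etc/hosts
```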
| Service | URL |
|---|---|
| Zynexa | https://zynexa.localhost |
| Sublink | https://sublink.localhost |
| API Gateway | https://api.localhost |
| Cube.js | https://cube.localhost |
| MCP Server | https://mcp.localhost |
| ODM API | https://odm.localhost |
| OpenObserve | https://observe.localhost |
| Lakehouse DB | localhost:5433 |
10. Operations
```shell
# Logs
docker compose --env-file .env.production logs -f cube
docker compose --env-file .env.production logs --tail 100 zynexa

# Restart a service
docker compose --env-file .env.production restart cube

# Stop all
docker compose --env-file .env.production --profile analytics --profile lakehouse down

# Update (pull + recreate)
docker compose --env-file .env.production --profile analytics --profile lakehouse pull
docker compose --env-file .env.production --profile analytics --profile lakehouse up -d

# Pull latest KrakenD image + restart
docker compose --env-file .env.production pull api-gateway && \
  docker compose --env-file .env.production up -d api-gateway

# Clean restart (WARNING: destroys data)
docker compose --env-file .env.production --profile analytics --profile lakehouse down -v
```
11. Quick Start Checklist
- Complete System Requirements, Installation, and Initial Setup
- Start platform: `docker compose --env-file .env.production --profile analytics up -d`
- Verify health endpoints (Section 3)
- Start lakehouse: `docker compose --env-file .env.production --profile lakehouse up -d lakehouse-db`
- Run ingester: `docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester`
- Run dbt: `docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt daily`
- Log in to Zynexa
12. Troubleshooting
| Problem | Cause | Fix |
|---|---|---|
| Cube unhealthy | Gold tables missing | Run ingester + dbt (section 5) |
| MCP can't reach Cube | Wrong CUBE_API_URL | Must be http://cube:4000/cubejs-api/v1 |
| Cubestore crash (ARM) | amd64-only image | Set CUBEJS_DEV_MODE=true, unset CUBEJS_CUBESTORE_HOST |
| Duplicate CORS headers | Caddy and the backend both set the header | Let Caddy's handle @options block answer preflight; backends should not add their own CORS headers |
| Ingester can't reach Frappe | FRAPPE_BASE_URL unresolvable | Needs Caddy + KrakenD running; uses extra_hosts: api.localhost:host-gateway |
| Continue Wait on queries | Cubestore unreachable | Start cubestore or comment out CUBEJS_CUBESTORE_HOST |
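For the duplicate-CORS row, the intended shape on the Caddy side is roughly the following Caddyfile fragment (illustrative, not the shipped config; the origin and header lists are placeholders):

```
# Illustrative Caddyfile fragment: Caddy answers the OPTIONS preflight
# itself, so upstream services must not emit their own CORS headers.
@options method OPTIONS
handle @options {
    header Access-Control-Allow-Origin  "https://zynexa.localhost"
    header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
    header Access-Control-Allow-Headers "Authorization, Content-Type"
    respond 204
}
```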
Architecture
```
              +----------------------+
              |     Frappe Cloud     |
              +----------+-----------+
                         |
        +----------------+-------------------+
        v                v                   v
  +-----------+    +------------+      +-----------+
  |   Caddy   |    |  Ingester  |      | ctms-init |
  |(Rev Proxy)|    |(DLT>Bronze)|      |(Provision)|
  +-----+-----+    +-----+------+      +-----------+
        |                |
  +-----+-----+    +-----v------+
  |  KrakenD  |    |Lakehouse DB|<-- dbt (Bronze>Silver>Gold)
  | (API GW)  |    |(PostgreSQL)|
  +-----+-----+    +-----+------+
        |                |
 +------+------+   +-----v------+
 v      v      v   |  Cube.js   |-- Cubestore
Zynexa Sublink ODM | (Semantic) |
                   +-----+------+
                         |
                         v
                   +------------+    +----------+
                   | MCP Server |    | Supabase |
                   | (AI Agent) |    |  (Auth)  |
                   +------------+    +----------+
```