Platform Runbook

This runbook provides frequently used commands for operating the CTMS platform. Commands are organized by task with both native Docker Compose commands and Make-based shortcuts.

Environment Selection

Most Make commands default to .env.production. Override with `ENV=<file>`:

make up                       # uses .env.production (default)
make up ENV=.env.zynomi.prod  # uses an instance-specific env file

Quick Reference

| Task | Make Shortcut | Docker Compose Equivalent |
|---|---|---|
| Start core services | `make up` | `docker compose --env-file .env.production up -d` |
| Start all services | `make up-all` | `docker compose --env-file .env.production --profile all up -d` |
| Stop all services | `make down` | `docker compose --profile all down` |
| View service status | `make status` | `docker compose --profile all ps` |
| Health check | `make health` | `curl` per service (see below) |
| View all logs | `make logs` | `docker compose --profile all logs -f` |
| Pull latest images | `make pull` | `docker compose pull` |
| First-time setup | `make setup` | — (hosts + pull) |

1. Service Management (ctms.devops)

Starting Services

| Task | Make Command | Docker Compose Command |
|---|---|---|
| Core only (Caddy, KrakenD, Zynexa, Sublink, ODM) | `make up` | `docker compose --env-file .env.production up -d` |
| Core + Observability | `make up-obs` | `docker compose --env-file .env.production --profile core --profile observability up -d` |
| Core + Analytics | `make up-analytics` | `docker compose --env-file .env.production --profile core --profile analytics up -d` |
| All services | `make up-all` | `docker compose --env-file .env.production --profile all up -d` |

Stopping & Restarting

| Task | Make Command | Docker Compose Command |
|---|---|---|
| Stop all | `make down` | `docker compose --profile all down` |
| Restart all | `make restart` | `docker compose --env-file .env.production restart` |
| Stop + remove volumes (destructive) | `make clean` | `docker compose --profile all down -v --remove-orphans` |

Instance-Specific Deployments

# Deploy for a specific instance
docker compose -f docker-compose.yml -f docker-compose.prod.yml \
--env-file .env.<instance>.prod --profile all up -d

# Examples
docker compose -f docker-compose.yml -f docker-compose.prod.yml \
--env-file .env.zynomi.prod --profile all up -d
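
A small wrapper can derive the env file from the instance name and fail fast if it is missing, so a typo cannot silently fall back to defaults. This is a hypothetical helper, not part of the repo's tooling; `instance_env_file` and `deploy_instance` are names introduced here for illustration.

```shell
# Hypothetical helper -- not provided by the repo's Makefile.
# Derives .env.<instance>.prod from the instance name and refuses
# to deploy if the file does not exist.
instance_env_file() {
    echo ".env.$1.prod"
}

deploy_instance() {
    env_file=$(instance_env_file "$1")
    if [ ! -f "$env_file" ]; then
        echo "error: $env_file not found" >&2
        return 1
    fi
    docker compose -f docker-compose.yml -f docker-compose.prod.yml \
        --env-file "$env_file" --profile all up -d
}

# Usage: deploy_instance zynomi
```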

Individual Service Management

restart vs up -d — Know the Difference

docker compose restart <service> only restarts the container process — it does NOT re-read .env.production or compose file changes. If you changed an environment variable (e.g., CUBEJS_DEV_MODE), you must use up -d to recreate the container with the new config:

# ❌ WRONG — env changes are NOT picked up
docker compose restart cube

# ✅ CORRECT — recreates container with latest env
docker compose -f docker-compose.yml -f docker-compose.prod.yml \
--env-file .env.production up -d cube

Rule of thumb: Use restart only for a quick process bounce. Use up -d after any config change.

The canonical compose command for production (IP-based, before DNS) is:

DC="docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production"

| Task | Start / Recreate (picks up env changes) | Quick Restart (no env reload) |
|---|---|---|
| API Gateway | `$DC up -d api-gateway` | `$DC restart api-gateway` |
| Zynexa | `$DC up -d zynexa` | `$DC restart zynexa` |
| Sublink | `$DC up -d sublink` | `$DC restart sublink` |
| Caddy (reload config) | `$DC exec caddy caddy reload --config /etc/caddy/Caddyfile` | — |
| OpenObserve + OTEL | `$DC --profile observability up -d openobserve otel-collector` | `$DC restart openobserve otel-collector` |
| Cube.dev | `$DC --profile analytics up -d cube` | `$DC restart cube` |
| MCP Server | `$DC up -d mcp-server` | `$DC restart mcp-server` |
| ODM API | `$DC up -d odm-api` | `$DC restart odm-api` |

Common Scenarios

# Set the canonical compose command
DC="docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production"

# Scenario: Changed CUBEJS_DEV_MODE in .env.production
# Only 'cube' needs recreating — cubestore is unused when DEV_MODE=true
$DC --profile analytics up -d cube

# Scenario: Changed NEXTAUTH_SECRET or NEXTAUTH_URL
$DC up -d zynexa

# Scenario: Changed FRAPPE_API_TOKEN
$DC up -d zynexa mcp-server odm-api

# Scenario: Changed EC2_PUBLIC_IP (affects RUNTIME_* vars in prod overlay)
$DC --profile all up -d

# Scenario: Quick restart after OOM or crash (no config changes)
$DC restart zynexa
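
The scenarios above all follow the same rule of thumb. A pair of tiny wrappers (hypothetical, not in the Makefile; `redeploy` and `bounce` are names introduced here) makes the intent explicit at the call site:

```shell
# Hypothetical wrappers -- encode the restart-vs-recreate rule so the
# intent is visible at the call site. $DC matches the canonical command.
DC="docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production"

# Config changed: recreate the container(s) so env files are re-read.
redeploy() { $DC up -d "$@"; }

# No config change: quick process bounce only.
bounce() { $DC restart "$@"; }

# Usage: redeploy cube       # after editing .env.production
#        bounce zynexa       # after an OOM or crash
```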

2. Logs

| Task | Make Command | Docker Compose Command |
|---|---|---|
| All services | `make logs` | `docker compose --profile all logs -f` |
| Caddy | `make caddy-logs` | `docker compose logs -f caddy` |
| API Gateway | `make api-logs` | `docker compose logs -f api-gateway` |
| Zynexa | `make zynexa-logs` | `docker compose logs -f zynexa` |
| Sublink | `make sublink-logs` | `docker compose logs -f sublink` |
| OpenObserve | `make openobserve-logs` | `docker compose logs -f openobserve` |
| OTEL Collector | `make otel-logs` | `docker compose logs -f otel-collector` |
| Cube.dev | `make cube-logs` | `docker compose logs -f cube` |

Useful Log Filters

# Last 100 lines of a service
docker compose logs --tail 100 zynexa

# Logs since a specific time
docker compose logs --since 1h zynexa

# Logs for multiple services at once
docker compose logs -f zynexa api-gateway caddy
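
These filters can be combined into a quick error scan across services. A sketch (`recent_errors` is a name introduced here, and the grep pattern list is illustrative, not exhaustive):

```shell
# Hypothetical filter -- scan the tail of one or more services' logs
# for error-like lines. Pattern list is illustrative, not exhaustive.
recent_errors() {
    tail_lines="$1"; shift
    docker compose logs --tail "$tail_lines" "$@" 2>&1 | grep -iE 'error|warn|panic|fatal'
}

# Usage: recent_errors 500 zynexa api-gateway
```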

3. Shell Access

| Container | Make Command | Docker Compose Command |
|---|---|---|
| Caddy | `make caddy-shell` | `docker compose exec caddy sh` |
| API Gateway | `make api-shell` | `docker compose exec api-gateway sh` |
| Zynexa | `make zynexa-shell` | `docker compose exec zynexa sh` |
| Sublink | `make sublink-shell` | `docker compose exec sublink sh` |
| OpenObserve | `make openobserve-shell` | `docker compose exec openobserve sh` |
| Cube.dev | `make cube-shell` | `docker compose exec cube sh` |

4. Health Checks

Make Command

make health

Manual Health Checks

| Service | Command | Expected |
|---|---|---|
| Caddy | `curl -s https://zynexa.localhost` | 200 |
| API Gateway | `curl -s https://api.localhost/__health` | `{"status":"ok"}` |
| Zynexa | `curl -s http://localhost:3000/api/health` | 200 |
| Sublink | `curl -s http://localhost:3001` | 200 |
| Cube.dev | `curl -s http://localhost:4000/readyz` | `{"health":"HEALTH"}` |
| OpenObserve | `curl -s http://localhost:5080/healthz` | 200 |
| ODM API | `curl -s http://localhost:8000/health` | 200 |
| MCP Server | `curl -s http://localhost:8006/health` | 200 |
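
The localhost-port checks in the table can be scripted into one pass. A sketch (`health_sweep` is a name introduced here): it checks HTTP status only, so the body checks in the "Expected" column (e.g. `{"status":"ok"}`) are omitted, and the Caddy/API Gateway HTTPS aliases are left out.

```shell
# Hypothetical one-shot sweep over the direct localhost endpoints above.
# Checks HTTP status codes only; response-body checks are omitted.
health_sweep() {
    failed=0
    while read -r name url; do
        code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
        if [ "$code" = "200" ]; then
            echo "OK   $name"
        else
            echo "FAIL $name (HTTP $code)"
            failed=1
        fi
    done <<'EOF'
zynexa      http://localhost:3000/api/health
sublink     http://localhost:3001
cube        http://localhost:4000/readyz
openobserve http://localhost:5080/healthz
odm-api     http://localhost:8000/health
mcp-server  http://localhost:8006/health
EOF
    return "$failed"
}
```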

Docker Health Status

# Check container health status
docker compose --profile all ps --format "table {{.Name}}\t{{.Status}}"

# Check a specific service
docker inspect --format='{{.State.Health.Status}}' ctms-zynexa
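
After `up -d`, it can be useful to block until a container actually reports healthy before running the next step. A sketch (`wait_healthy` is a name introduced here; the container name and retry count are illustrative):

```shell
# Hypothetical wait loop -- polls Docker's health status until the
# container is healthy, or gives up after N attempts (2s apart).
wait_healthy() {
    name="$1"; tries="${2:-30}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        status=$(docker inspect --format='{{.State.Health.Status}}' "$name" 2>/dev/null)
        if [ "$status" = "healthy" ]; then
            echo "$name is healthy"
            return 0
        fi
        i=$((i + 1))
        sleep 2
    done
    echo "$name not healthy after $tries attempts" >&2
    return 1
}

# Usage: wait_healthy ctms-zynexa 30
```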

5. CTMS Init & Provisioning

| Task | Command |
|---|---|
| Run all 5 stages | `docker compose --env-file .env.production --profile init up ctms-init` |
| Run specific stages | `CTMS_INIT_STAGES=3,4 docker compose --env-file .env.production --profile init run --rm ctms-init` |
| Dry run | `CTMS_INIT_DRY_RUN=true docker compose --env-file .env.production --profile init run --rm ctms-init` |
| Seed Supabase tables | `docker compose --env-file .env.production --profile init run --rm ctms-supabase-seed` |

Make-Based (Python Scripts)

Requires one-time setup: make frappe-seed-setup

| Task | Make Command |
|---|---|
| Full 5-stage provisioning | `make frappe-provision ENV=../../.env.production` |
| Stage 1: Create DocTypes | `make frappe-setup-doctypes ENV=../../.env.production` |
| Stage 2: Create Custom Fields | `make frappe-setup-custom-fields ENV=../../.env.production` |
| Stage 3: Seed RBAC | `make frappe-seed-rbac ENV=../../.env.production` |
| Stage 4: Seed Master Data | `make frappe-seed-master-data ENV=../../.env.production` |
| Sync Permissions | `make frappe-sync-permissions ENV=../../.env.production` |
| Dry run Custom Fields | `make frappe-setup-custom-fields-dry-run ENV=../../.env.production` |

6. Data Pipeline (Lakehouse)

Docker Commands

| Task | Command |
|---|---|
| Start Lakehouse DB | `docker compose --env-file .env.production --profile lakehouse up -d lakehouse-db` |
| Run data ingestion | `docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester` |
| Run dbt daily pipeline | `docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt dbt-daily` |
| Run dbt full refresh | `docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt dbt-full-refresh` |
| Purge Lakehouse schemas | See below |

Purge Lakehouse Schemas (Reset)

docker exec ctms-lakehouse-db psql -U ctms_user -d ctms_dlh -c "
DROP SCHEMA IF EXISTS raw CASCADE;
DROP SCHEMA IF EXISTS raw_staging CASCADE;
DROP SCHEMA IF EXISTS bronze CASCADE;
DROP SCHEMA IF EXISTS bronze_staging CASCADE;
DROP SCHEMA IF EXISTS silver CASCADE;
DROP SCHEMA IF EXISTS gold CASCADE;
"

7. Vendor Stacks (Self-Hosted)

Supabase

| Task | Command |
|---|---|
| Start | `cd supabase && docker compose up -d` |
| Stop | `cd supabase && docker compose down` |
| Status | `cd supabase && docker compose ps` |
| Logs | `cd supabase && docker compose logs -f` |

Frappe

| Task | Command |
|---|---|
| Start | `cd frappe-marley-health && docker compose up -d` |
| Stop | `cd frappe-marley-health && docker compose down` |
| Status | `cd frappe-marley-health && docker compose ps` |
| Get API token | `docker logs frappe-marley-health-setup-1 2>&1 \| grep FRAPPE_API_TOKEN` |
| Regenerate API token | `docker exec -w /home/frappe/frappe-bench frappe-marley-health-backend-1 bash -c 'source env/bin/activate && python3 /setup/frappe-generate-token.py'` |
| Frappe shell (bench) | `docker exec -it frappe-marley-health-backend-1 bash` |

Frappe Migration Tools

| Task | Make Command |
|---|---|
| Setup migration venv | `make frappe-migration-setup` |
| Compare DocTypes (all) | `make frappe-compare` |
| Compare Custom module only | `make frappe-compare-custom` |
| Compare Custom Fields | `make frappe-compare-custom-fields` |
| List source DocTypes | `make frappe-list-source` |
| List target DocTypes | `make frappe-list-target` |

8. Web Application (hb-life-science-web)

Development

| Task | Make Command | Native Command |
|---|---|---|
| Install dependencies | `make setup` | `bun install` |
| Start dev server | `make dev` | `bun run dev` |
| Build for production | `make build` | `bun run build` |
| Start production server | `make start` | `bun run start` |
| Lint code | `make lint` | `bun run lint` |
| View configuration | `make status` | — |

Vercel Deployment

| Task | Make Command | Native Command |
|---|---|---|
| Link to Vercel project | `make vercel-link` | `vercel link` |
| Deploy preview | `make deploy` | `vercel` |
| Deploy production | `make deploy-prod` | `vercel --prod` |
| Pull env variables | `make vercel-env-pull` | `vercel env pull .env.local` |
| Add env variable | `make vercel-env-add KEY=name VALUE=value` | `vercel env add` |
| View logs | `make vercel-logs` | `vercel logs` |

API Generation

| Task | Command |
|---|---|
| Generate entity APIs | `bun run generate:apis` |
| Generate OpenAPI spec | `bun run openapi:generate` |
| Validate OpenAPI spec | `bun run openapi:validate` |
| Export Postman collection | `bun run openapi:postman` |

9. Common Workflows

Full Platform Start (On-Prem)

cd ctms.devops

# 1. Start Supabase (creates ctms-network)
cd supabase && docker compose up -d && cd ..

# 2. Seed CTMS tables into Supabase
docker compose --env-file .env.local --profile init run --rm ctms-supabase-seed

# 3. Start Frappe (setup auto-completes wizard + generates API token)
cd frappe-marley-health && docker compose up -d && cd ..

# 4. Retrieve the generated API token → update .env.local
docker logs frappe-marley-health-setup-1 2>&1 | grep FRAPPE_API_TOKEN

# 5. Provision Frappe with CTMS data model (5 stages)
docker compose --env-file .env.local --profile init up ctms-init

# 6. Start CTMS core services
make up # or: docker compose --env-file .env.local up -d

# 7. (Optional) Start analytics + observability
make up-all # or: docker compose --env-file .env.local --profile all up -d

Daily Data Refresh

cd ctms.devops

# 1. Extract data from Frappe API → Lakehouse
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester

# 2. Transform through dbt layers (Bronze → Silver → Gold)
docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt dbt-daily

# 3. Clear Cube.dev analytics cache
make cube-restart
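
The three steps above can be wrapped for unattended nightly runs. A sketch (`daily_refresh` is a name introduced here, and the repo path and schedule in the comment are placeholders):

```shell
# Hypothetical nightly refresh wrapper for cron. A crontab entry might be:
#   0 2 * * * /opt/ctms.devops/scripts/daily-refresh.sh >> /var/log/ctms-refresh.log 2>&1
# where the script cd's into the repo and calls daily_refresh.
daily_refresh() {
    cd "${1:-/opt/ctms.devops}" || return 1
    docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-ingester \
      && docker compose --env-file .env.production --profile lakehouse run --rm lakehouse-dbt dbt-daily \
      && make cube-restart
}
```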

Full Platform Stop (On-Prem, Reverse Order)

cd ctms.devops

# CTMS core
make down

# Frappe
cd frappe-marley-health && docker compose down && cd ..

# Supabase (last — owns ctms-network)
cd supabase && docker compose down && cd ..

10. URLs Reference

After make up (Core)

| Service | URL |
|---|---|
| Zynexa | https://zynexa.localhost |
| Sublink | https://sublink.localhost |
| API Gateway | https://api.localhost |
| ODM API | https://odm.localhost |

After make up-obs (+ Observability)

| Service | URL |
|---|---|
| OpenObserve | https://observe.localhost |

After make up-analytics (+ Analytics)

| Service | URL |
|---|---|
| Cube.dev Playground | https://cube.localhost |
| MCP Server | https://mcp.localhost |

Vendor Stacks (Self-Hosted)

| Service | URL |
|---|---|
| Supabase Studio | http://localhost:8000 |
| Frappe Dashboard | http://localhost:8080/app |

11. Environment Files

| File | Purpose |
|---|---|
| `.env.example` | Template (commit-safe, checked into Git) |
| `.env.production` | Default production configuration |
| `.env.<instance>.prod` | Instance-specific production (e.g., `.env.zynomi.prod`) |
| `.env.local` | Local development (gitignored) |

Getting Help

Each component's Makefile includes built-in help:

cd <component-directory>
make help

This displays all available commands with descriptions.