
Self-Hosted Vendor Stacks (On-Prem)

By default, the CTMS platform connects to Supabase Cloud (authentication) and Frappe Cloud (clinical backend). For on-premises deployments where cloud connectivity is not available or not desired, both vendor stacks can be self-hosted using Docker Compose.

When Is This Needed?

Self-hosting is optional. You only need this guide if:

  • Your deployment environment has no internet access
  • Regulatory or data residency requirements prohibit cloud services
  • You want a fully self-contained local development environment

If you use Supabase Cloud + Frappe Cloud, skip this page entirely — proceed to Initial Setup & Configuration.


Architecture Overview

All three stacks — Supabase, Frappe, and CTMS core — share a single Docker bridge network called ctms-network, enabling container-to-container communication by name.

┌──────────────────────── ctms-network (bridge) ─────────────────────────────┐
│ │
│ ┌─ Supabase (13 svc) ──┐ ┌─ Frappe (12 svc) ──┐ ┌─ CTMS Core ──────┐ │
│ │ kong, db, auth, │ │ frontend, backend, │ │ caddy, krakend, │ │
│ │ rest, realtime, │ │ db (MariaDB), │ │ zynexa, sublink, │ │
│ │ storage, studio, │ │ redis, websocket, │ │ cube, mcp, odm, │ │
│ │ analytics, vector, │ │ scheduler, queues, │ │ lakehouse, otel │ │
│ │ edge-functions, ... │ │ setup (init) │ │ │ │
│ └───────────────────────┘ └─────────────────────┘ └──────────────────┘ │
│ │
│ ┌─ CTMS Init Containers (one-shot, always required) ─────────────────────┐│
│ │ ctms-supabase-seed → CTMS tables in Supabase (profiles, devices, ...) ││
│ │ ctms-init → Frappe DocTypes, RBAC, seed data (5 stages) ││
│ └────────────────────────────────────────────────────────────────────────┘│
└────────────────────────────────────────────────────────────────────────────┘

Startup Order

The vendor stacks must be started before CTMS core services:

1. Supabase (creates ctms-network)
2. Frappe (joins ctms-network; setup service generates API token)
3. CTMS Init (supabase-seed + ctms-init provision custom data)
4. CTMS Core (zynexa, sublink, cube, etc.)
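Scripted deployments usually gate each step on the previous one becoming reachable. A minimal retry helper, sketched here as a generic shell function (the probe commands in the comments assume this guide's default ports; nothing here is part of the CTMS tooling):

```shell
# wait_for <attempts> <cmd...>: retry a readiness probe until it succeeds,
# sleeping 1s after each failed attempt; fails once the budget is exhausted.
wait_for() {
  local attempts=$1; shift
  local i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "wait_for: '$*' not ready after ${attempts} attempts" >&2
  return 1
}

# Example gates between the stages above (URLs assume this guide's defaults):
# wait_for 60 curl -sf -H "apikey: <ANON_KEY>" http://localhost:8000/rest/v1/   # 1 -> 2
# wait_for 120 curl -sf http://localhost:8080/api/method/ping                   # 2 -> 3
```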

Separation of Concerns

| Container | Scope | When to Run |
|-----------|-------|-------------|
| Supabase stack | Vendor — runs the auth platform exactly as distributed | On-prem Supabase only |
| Frappe stack | Vendor — runs ERPNext/Healthcare exactly as distributed | On-prem Frappe only |
| Frappe setup service | CTMS — completes wizard, creates admin, generates API token | On-prem Frappe only |
| ctms-supabase-seed | CTMS — creates profiles, devices, medication_logs, notification_logs tables + RLS | Always (cloud or on-prem) |
| ctms-init | CTMS — provisions 34 DocTypes, 15 custom fields, 539 RBAC records (incl. 4 Frappe native roles), 88 seed records, 1 practitioner | Always (cloud or on-prem) |

Vendor Purity

The vendor Docker Compose files (supabase/docker-compose.yml and frappe-marley-health/docker-compose.yml) are kept unmodified from their upstream distributions. All CTMS-specific customizations are applied via separate init containers — this ensures clean vendor upgrades.


Prerequisites

  • Docker Engine 24+ and Docker Compose v2
  • Docker Buildx >= 0.10 — required to build the Frappe image (see below)
  • At least 8 GB RAM available for Docker (Supabase alone needs ~4 GB)
  • Ports available: 8000 (Supabase API), 8080 (Frappe), plus CTMS ports

Installing Docker Buildx on Linux Servers

Docker Desktop (macOS / Windows) includes Buildx by default. On Linux servers (e.g., Amazon Linux, Ubuntu), install it separately:

# Amazon Linux 2023 / RHEL / Fedora
BUILDX_VERSION=$(curl -s https://api.github.com/repos/docker/buildx/releases/latest | jq -r '.tag_name')
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -SL "https://github.com/docker/buildx/releases/download/${BUILDX_VERSION}/buildx-${BUILDX_VERSION}.linux-amd64" \
-o /usr/local/lib/docker/cli-plugins/docker-buildx
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-buildx

# Verify
docker buildx version

Buildx is required because the Frappe Dockerfile runs bench get-app during image build, which uses BuildKit features.
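Since a missing Buildx only surfaces partway through the image build, it can be worth failing fast. A small guard function, sketched under the assumption that a failing docker buildx version means the plugin is absent (require_buildx is illustrative, not part of the CTMS tooling):

```shell
# Fail fast when Docker Buildx is missing, before bench get-app ever runs.
require_buildx() {
  if docker buildx version >/dev/null 2>&1; then
    echo "buildx OK: $(docker buildx version)"
  else
    echo "Docker Buildx not found; install it as shown above" >&2
    return 1
  fi
}

# Example:
# require_buildx && docker compose build
```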


Step 1: Self-Hosted Supabase

1.1 Configure Environment

cd ctms.devops/supabase
cp .env.sample .env

Edit .env and change the default secrets:

| Variable | Default | Action |
|----------|---------|--------|
| POSTGRES_PASSWORD | your-super-secret-and-long-postgres-password | Change in production |
| JWT_SECRET | super-secret-jwt-token-with-at-least-32-characters-long | Change in production |
| ANON_KEY | Pre-generated JWT | Regenerate for production (guide) |
| SERVICE_ROLE_KEY | Pre-generated JWT | Regenerate for production |
| DASHBOARD_USERNAME | ctms_user | Dashboard login |
| DASHBOARD_PASSWORD | ctms_pwd | Dashboard login |
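The defaults above are public placeholders. One way to generate replacements, assuming openssl is available (any CSPRNG works; note that ANON_KEY and SERVICE_ROLE_KEY are JWTs signed with JWT_SECRET, so regenerate those together using the linked guide):

```shell
# Generate replacement secrets; openssl rand emits cryptographically
# secure random bytes.
POSTGRES_PASSWORD=$(openssl rand -base64 32 | tr -d '=+/')   # strip chars that break connection URLs
JWT_SECRET=$(openssl rand -hex 32)                           # 64 hex chars, above the 32-char minimum

echo "POSTGRES_PASSWORD=${POSTGRES_PASSWORD}"
echo "JWT_SECRET=${JWT_SECRET}"
```

Stripping =, +, and / from the password keeps the DATABASE_URL in section 1.4 valid without URL-encoding.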

1.2 Start Supabase

docker compose up -d

Wait for all 13 services to become healthy:

docker compose ps --format "table {{.Name}}\t{{.Status}}"

Expected: all services show Up (healthy), except realtime, which may show Up (unhealthy) — this is a known Supabase self-hosted issue and does not affect functionality.

1.3 Seed CTMS Tables (Required for All Deployments)

note

The ctms-supabase-seed container is required for both cloud and self-hosted Supabase. It creates the CTMS-specific tables (profiles, devices, medication_logs, notification_logs) and RLS policies that the application depends on. For cloud Supabase, run this against your managed database URL.

After Supabase is healthy, run the CTMS seed container from the root ctms.devops directory:

cd ctms.devops
docker compose --env-file .env.local --profile init run --rm --build ctms-supabase-seed

This creates:

  • profiles table (linked to auth.users via trigger)
  • devices table
  • medication_consumption_logs table
  • notification_logs table
  • get_medication_status() function
  • Row-Level Security policies on all tables

Expected output: Total: 5 | OK: 5 | Failed: 0

1.4 Update Root Environment

Copy these values from supabase/.env into your root .env.local / .env.production:

# Point to on-prem Supabase (container name on ctms-network)
SUPABASE_URL=http://supabase-kong:8000
SUPABASE_KEY=<SERVICE_ROLE_KEY from supabase/.env>
SUPABASE_ANON_KEY=<ANON_KEY from supabase/.env>
DATABASE_URL=postgresql://postgres:<POSTGRES_PASSWORD>@supabase-db:5432/postgres

1.5 Verify

# Studio dashboard
open http://localhost:8000

# API health
curl -s http://localhost:8000/rest/v1/ \
-H "apikey: <ANON_KEY>" \
-H "Authorization: Bearer <ANON_KEY>"

Step 2: Self-Hosted Frappe

2.1 Configure Environment

cd ctms.devops/frappe-marley-health
cp .env.sample .env

Key variables:

| Variable | Default | Description |
|----------|---------|-------------|
| ADMIN_PASSWORD | Welcome@1234 | Frappe Administrator password |
| ADMIN_EMAIL | admin@myfrappe.com | Admin user email |
| COMPANY_NAME | CTMS | Organization name for setup wizard |
| SITE_NAME | frontend | Frappe site name (usually keep as frontend) |

2.2 Start Frappe

docker compose up -d

The startup sequence is:

  1. configurator — generates Frappe configuration files → exits
  2. create-site — creates the MariaDB database and site → exits
  3. setup — completes the setup wizard, creates admin user, generates API keys → exits
  4. backend, frontend, websocket, queues, scheduler — start and run

2.3 Retrieve the API Token

The setup service automatically generates an API token. Retrieve it:

# From the setup container logs
docker logs frappe-marley-health-setup-1 2>&1 | grep FRAPPE_API_TOKEN

# Or from the shared file
docker exec frappe-marley-health-backend-1 \
cat /home/frappe/frappe-bench/sites/api_token.txt
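To avoid copy/paste mistakes, the token line can be pulled out of a saved copy of the setup logs. A sketch, assuming the logs contain the token exactly in the FRAPPE_API_TOKEN="key:secret" form shown above (extract_frappe_token is an illustrative helper, not part of the CTMS tooling):

```shell
# extract_frappe_token <logfile>: print the last FRAPPE_API_TOKEN="..." line
# found in a saved copy of the setup container logs.
extract_frappe_token() {
  grep -o 'FRAPPE_API_TOKEN="[^"]*"' "$1" | tail -n 1
}

# Usage against the live container (container name as used in this guide):
# docker logs frappe-marley-health-setup-1 > /tmp/frappe-setup.log 2>&1
# extract_frappe_token /tmp/frappe-setup.log >> .env.local
```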

2.4 Update Root Environment

Copy the generated token into your root .env.local / .env.production:

# Point to on-prem Frappe (container name on ctms-network)
FRAPPE_URL=http://frappe-marley-health-frontend-1:8080
FRAPPE_API_TOKEN="<key>:<secret>"
FRAPPE_CLOUD_IMAGE_BASE_URL=http://frappe-marley-health-frontend-1:8080

2.5 Verify

# Frappe dashboard
open http://localhost:8080/app

# API health check using the generated token
curl -s -H "Authorization: token <key>:<secret>" \
http://localhost:8080/api/method/frappe.auth.get_logged_user
# Expected: {"message":"admin@myfrappe.com"}

Step 3: Regenerate API Token (CLI)

If you need to regenerate the Frappe API token at any time (e.g., after a secret rotation or security incident):

docker exec -w /home/frappe/frappe-bench frappe-marley-health-backend-1 \
bash -c 'source env/bin/activate && python3 /setup/frappe-generate-token.py'

Output:

FRAPPE_API_TOKEN="<new_key>:<new_secret>"

User: admin@myfrappe.com
Key: <api_key>
Secret: <new_secret>
File: /home/frappe/frappe-bench/sites/api_token.txt

After regeneration, update FRAPPE_API_TOKEN in your .env.local / .env.production and restart CTMS services that depend on it:

# Restart services that use FRAPPE_API_TOKEN
docker compose --env-file .env.local up -d zynexa api-gateway

warning

Token regeneration changes the API secret while preserving the API key. Any running ctms-init or other service using the old token will need the updated value.


Step 4: Run CTMS Init (Required for All Deployments)

After both vendor stacks are running and the API token is configured, provision Frappe with the CTMS data model:

cd ctms.devops
docker compose --env-file .env.local --profile init up ctms-init

This runs the 5-stage provisioning:

| Stage | Creates | Count |
|-------|---------|-------|
| 1 | Custom DocTypes | 34 |
| 2 | Custom Fields on built-in DocTypes | 15 |
| 3 | RBAC data (Frappe native roles, CTMS roles, resources, permissions) | 539 |
| 4 | Master/seed data (lookups, drugs, lab items) | 88 |
| 5 | Default Healthcare Practitioner | 1 |

This Step Is Always Required

Both ctms-supabase-seed and ctms-init are required regardless of whether you use cloud or self-hosted vendor services. They create the CTMS data model (Supabase tables + Frappe DocTypes + default practitioner) that the application depends on.

After Stage 5 completes, copy the generated NEXT_PUBLIC_DEFAULT_PRACTITIONER_ID into your .env file.


Step 5: Start CTMS Core Services

cd ctms.devops
docker compose --env-file .env.local up -d

Verify cross-stack connectivity:

# Zynexa → Frappe
docker exec ctms-zynexa curl -sf http://frappe-marley-health-frontend-1:8080/api/method/frappe.auth.get_logged_user \
-H "Authorization: token <FRAPPE_API_TOKEN>"

# Zynexa → Supabase
docker exec ctms-zynexa curl -sf http://supabase-kong:8000/rest/v1/ \
-H "apikey: <ANON_KEY>"

Complete On-Prem Startup Sequence

Here is the full end-to-end sequence for a clean on-prem deployment:

cd ctms.devops

# 1. Start Supabase (creates ctms-network)
cd supabase && docker compose up -d && cd ..

# 2. Seed CTMS tables into Supabase (required for ALL deployments)
docker compose --env-file .env.local --profile init run --rm --build ctms-supabase-seed

# 3. Start Frappe (setup service auto-completes wizard + generates API token)
cd frappe-marley-health && docker compose up -d && cd ..

# 4. Retrieve the generated API token
docker logs frappe-marley-health-setup-1 2>&1 | grep FRAPPE_API_TOKEN
# → Update FRAPPE_API_TOKEN in .env.local

# 5. Provision Frappe with CTMS data model (5 stages)
docker compose --env-file .env.local --profile init up ctms-init
# → Copy NEXT_PUBLIC_DEFAULT_PRACTITIONER_ID from Stage 5 output into .env.local

# 6. Start CTMS core services
docker compose --env-file .env.local up -d

# 7. (Optional) Start analytics + observability
docker compose --env-file .env.local --profile analytics --profile observability up -d

Stopping (Reverse Order)

# CTMS core
docker compose --env-file .env.local down

# Frappe
cd frappe-marley-health && docker compose down && cd ..

# Supabase (last — owns ctms-network)
cd supabase && docker compose down && cd ..

Data Volumes

Use docker compose down (without -v) to preserve data. Adding -v destroys all database volumes and requires re-running the full setup sequence.
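If you do need to reclaim space with -v, a snapshot first is cheap insurance. A generic Docker pattern for archiving a named volume (backup_volume is illustrative, not part of the CTMS tooling; find your actual volume names with docker volume ls):

```shell
# backup_volume <volume>: archive a named Docker volume to a timestamped
# tarball by mounting it read-only into a throwaway Alpine container.
backup_volume() {
  local volume=$1
  local out="backup-${volume}-$(date +%Y%m%d%H%M%S).tar.gz"
  docker run --rm -v "${volume}:/data:ro" -v "$(pwd):/backup" \
    alpine tar czf "/backup/${out}" -C /data .
  echo "${out}"
}

# Example (volume name is illustrative; check docker volume ls):
# backup_volume supabase_db-data
```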


Troubleshooting

| Problem | Cause | Fix |
|---------|-------|-----|
| ctms-supabase-seed fails with auth.jwt() does not exist | GoTrue (auth service) not ready | The seed container waits automatically; if it still fails, ensure supabase-auth is healthy |
| ctms-init fails with connection refused | Frappe not ready or wrong token | Check FRAPPE_URL and FRAPPE_API_TOKEN in .env.local |
| User signup fails with LinkValidationError: Could not find Role | Frappe native Role entries missing (Stage 3 not run or outdated image) | Re-run Stage 3: CTMS_INIT_STAGES=3 docker compose --env-file .env.local --profile init run --rm ctms-init |
| Supabase realtime shows unhealthy | Known self-hosted issue with tenant auth | Non-blocking — realtime works through Kong |
| Network ctms-network not found | Supabase not started first | Start Supabase before other stacks, or create manually: docker network create ctms-network |
| Frappe setup exits with code 1 | Missing log directories | Ensure the setup service has the logs volume mounted |
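Several rows above trace back to missing or stale entries in .env.local. A pre-flight check before running the init profile can catch that early (preflight_env is an illustrative helper, not part of the CTMS tooling; the variable list mirrors this guide, so adjust it to your deployment):

```shell
# preflight_env <envfile>: verify the file defines every variable the
# init containers need; prints what is missing and returns nonzero.
preflight_env() {
  local file=$1 missing=0 var
  for var in SUPABASE_URL SUPABASE_KEY SUPABASE_ANON_KEY DATABASE_URL \
             FRAPPE_URL FRAPPE_API_TOKEN; do
    if ! grep -q "^${var}=" "$file"; then
      echo "missing: ${var}" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example:
# preflight_env .env.local && docker compose --env-file .env.local --profile init up ctms-init
```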