
Bundle Deployment

Recommended Approach

This is the recommended and supported way to deploy the CTMS platform. A single self-contained bundle — no GitHub token, no git clone, no manual Docker builds.

SERVER_HOST=<your-ip> ./zynctl.sh full-deploy
Supported Operating Systems

The zynctl.sh script is written and tested for RHEL-family Linux distributions:

  • Rocky Linux 9 / 10
  • AlmaLinux 9
  • Amazon Linux 2023
  • RHEL 9 / CentOS Stream 9

The bootstrap step installs Docker and system dependencies using dnf. Other distributions (Ubuntu, Debian) are not currently supported by the automated bootstrap — you would need to install Docker manually first.


Prerequisites

Before you begin, make sure you have the following ready:

| Requirement | Details |
| --- | --- |
| VM / Server IP | A publicly reachable IP address for your server. You will configure this as `SERVER_HOST`. |
| OS | Rocky Linux 9/10, AlmaLinux 9, Amazon Linux 2023, or RHEL 9 |
| RAM | 8 GB minimum (16 GB recommended) |
| CPU | 4 vCPU minimum (8 vCPU recommended) |
| Disk | 50 GB free |
| Root Access | SSH root or sudo access to the server |
| Network | Outbound internet access for Docker Hub image pulls |
| Docker Hub Account | Required to avoid pull rate limits (anonymous: 100 pulls per 6 hours). Free to sign up. |
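
As a quick sanity check before deploying, the RAM/CPU/disk minimums above can be verified with a short script (a sketch assuming a standard Linux `/proc` layout and GNU coreutils):

```shell
#!/usr/bin/env bash
# Sanity-check the server against the minimums above:
# 8 GB RAM, 4 vCPU, 50 GB free disk.

to_gb() { echo $(( $1 / 1024 / 1024 )); }   # KiB -> GiB

mem_gb=$(to_gb "$(awk '/MemTotal/ {print $2}' /proc/meminfo)")
cpus=$(nproc)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

[ "$mem_gb" -ge 8 ]   || echo "WARN: ${mem_gb} GB RAM (minimum is 8 GB)"
[ "$cpus" -ge 4 ]     || echo "WARN: ${cpus} vCPU (minimum is 4)"
[ "$disk_gb" -ge 50 ] || echo "WARN: ${disk_gb} GB free disk (minimum is 50 GB)"
echo "Checked: ${mem_gb} GB RAM, ${cpus} vCPU, ${disk_gb} GB free"
```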
Docker Hub Rate Limits

Without Docker Hub credentials, anonymous pulls are limited to 100 per 6 hours. A team of 4 sharing a server IP will hit this limit quickly. Always configure DOCKER_USERNAME and DOCKER_PASSWORD in zynctl.conf before deploying.

Ports Used

The following ports must be free on the server:

| Port | Service |
| --- | --- |
| 3000 | Zynexa (CTMS App) |
| 3001 | Sublink (Mobile) |
| 4000 | Cube.js (Analytics) |
| 5080 | OpenObserve |
| 8000 | Supabase Studio |
| 8001 | ODM API |
| 8006 | MCP Server (AI) |
| 8080 | Frappe |
| 9080 | API Gateway (KrakenD) |
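
A minimal sketch to confirm these ports are free, using bash's `/dev/tcp` pseudo-device (a successful connect means something is already listening):

```shell
#!/usr/bin/env bash
# Returns 0 (true) if something is listening on the given local port.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 3000 3001 4000 5080 8000 8001 8006 8080 9080; do
  if port_in_use "$port"; then
    echo "BUSY: port $port already has a listener"
  else
    echo "free: port $port"
  fi
done
```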

Quick Start

1. Download the bundle

From GitHub Releases, download the latest zynctl-bundle-*.tar.gz.

# On the server (or SCP from your local machine)
curl -LO https://github.com/zynomilabs/ctms.devops/releases/download/bundle-v<version>/zynctl-bundle-<version>.tar.gz

2. Install tar (if not already present)

sudo dnf install -y tar

3. Extract and configure

tar xzf zynctl-bundle-<version>.tar.gz
cd zynctl-bundle-<version>

# Create config
cp zynctl.conf.example zynctl.conf

Edit zynctl.conf with your server IP and Docker Hub credentials:

# ── Required ──────────────────────────────────
SERVER_HOST=203.0.113.50 # ← Your VM's public IP

# ── Docker Hub (required to avoid rate limits) ─
DOCKER_USERNAME=your-dockerhub-username
DOCKER_PASSWORD=your-dockerhub-password

4. Deploy

# One command: bootstrap + deploy
./zynctl.sh full-deploy

Or step-by-step:

./zynctl.sh bootstrap    # Install Docker, Compose, system tuning
exit # Re-login for Docker group membership
ssh root@<server-ip>
cd zynctl-bundle-<version>
./zynctl.sh deploy # Full 12-step deploy

5. Verify

./zynctl.sh health

Deployment takes ~10–15 minutes. All Docker images are pre-built — nothing compiles on the server.
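
If you script the verification, a helper like the following can poll until the app answers before handing off to zynctl's own health check (the URL and retry budget are assumptions; adjust to your setup):

```shell
# Poll a URL until it responds, then the caller can run `./zynctl.sh health`.
# wait_for <url> [tries]: retries every 5 s, default 60 tries (~5 min).
wait_for() {
  local url=$1 tries=${2:-60}
  for _ in $(seq 1 "$tries"); do
    curl -fsS -o /dev/null "$url" && return 0
    sleep 5
  done
  return 1
}

# Example: wait_for "http://127.0.0.1:3000" && ./zynctl.sh health
```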


What `deploy` Does

| Step | Action | ~Time |
| --- | --- | --- |
| 1 | Copy bundle to `/opt/ctms-deployment` | 5s |
| 2 | Create `.env.production` from template, patch IP, generate secrets | 5s |
| 3 | Create `ctms-network`, authenticate Docker Hub | 2s |
| 4 | Start Supabase (13 services), extract service role key | 60s |
| 5 | Seed CTMS tables into Supabase | 30s |
| 6 | Start Frappe, wait for site creation and backend API readiness | 120s |
| 7 | Generate Frappe API token, create UOM "Unit" | 10s |
| 8 | Run CTMS Init (DocTypes → Custom Fields → RBAC → Master Data → Practitioner) | 60s |
| 9 | Pull all CTMS Docker images | 60s |
| 10 | Start CTMS platform (core + analytics + observability) | 30s |
| 11 | Health-check 7 endpoints, seed 4 demo users | 30s |
| 12 | Print service URLs and credentials | |

Commands Reference

Setup

| Command | Description |
| --- | --- |
| `./zynctl.sh bootstrap` | Install Docker, Compose, tune system |
| `./zynctl.sh deploy` | Full first-time deployment |
| `./zynctl.sh full-deploy` | Bootstrap + deploy in one shot |
| `./zynctl.sh resume-deploy` | Resume from Step 7 (when Supabase and Frappe are already running) |

Operations

| Command | Description |
| --- | --- |
| `./zynctl.sh status` | Show running containers |
| `./zynctl.sh health` | Check all service endpoints |
| `./zynctl.sh seed-users` | Seed demo users (idempotent) |
| `./zynctl.sh logs [service]` | View container logs |
| `./zynctl.sh stop` | Stop all stacks (reverse order) |
| `./zynctl.sh restart` | Restart CTMS core services |
| `./zynctl.sh update` | Pull latest images + restart |
| `./zynctl.sh destroy` | Remove everything, including data (irreversible) |
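
For unattended monitoring, the health command can be wrapped in a small cron-friendly function (the deploy path and log file here are assumptions, not part of zynctl itself):

```shell
# Cron-friendly wrapper around `./zynctl.sh health`.
# check_ctms [deploy-dir]  (defaults to /opt/ctms-deployment)
check_ctms() {
  local dir=${1:-/opt/ctms-deployment}
  cd "$dir" || { echo "no deployment at $dir" >&2; return 1; }
  if ! ./zynctl.sh health > /tmp/ctms-health.log 2>&1; then
    echo "CTMS health check failed at $(date -u +%FT%TZ)" >&2
    return 1
  fi
}

# Example: check_ctms /opt/ctms-deployment
```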

Diagnostics

| Command | Description |
| --- | --- |
| `./zynctl.sh env-check` | Validate `.env.production` for placeholders |
| `./zynctl.sh info` | Show version, config, and system info |

Services After Deployment

| Service | URL | Purpose |
| --- | --- | --- |
| Zynexa | `http://<IP>:3000` | Main CTMS application |
| Sublink | `http://<IP>:3001` | Mobile companion app |
| API Gateway | `http://<IP>:9080` | KrakenD gateway |
| Cube.js | `http://<IP>:4000` | Analytics semantic layer |
| MCP Server | `http://<IP>:8006` | AI integration |
| ODM API | `http://<IP>:8001` | ODM document service |
| OpenObserve | `http://<IP>:5080` | Observability |
| Supabase Studio | `http://<IP>:8000` | Database management |
| Frappe | `http://<IP>:8080` | ERP / DocType engine |

Default Credentials

| Account | Username | Password |
| --- | --- | --- |
| Demo — Platform Admin | kiran.v@zynomi.com | ●●●●●● |
| Demo — Study Coordinator | michael.x@zynomi.com | ●●●●●● |
| Demo — Study Designer | roshini.s@zynomi.com | ●●●●●● |
| Demo — Principal Investigator | peter.p@zynomi.com | ●●●●●● |
| Frappe Admin | Administrator | ●●●●●● |
| OpenObserve | admin@ctms.local | ●●●●●● |
> **Info:** All demo users share the same default password. Run `./zynctl.sh seed-users` to print the credentials in the terminal, or check the deploy log output.


Configuration

zynctl.conf

# Server IP (required — or pass via SERVER_HOST env var)
SERVER_HOST=203.0.113.50

# Deploy directory (default: /opt/ctms-deployment)
# DEPLOY_PATH=/opt/ctms-deployment

# Docker Hub credentials (required to avoid rate limits)
DOCKER_USERNAME=your-dockerhub-username
DOCKER_PASSWORD=your-dockerhub-password

All settings can also be passed as environment variables:

SERVER_HOST=1.2.3.4 DOCKER_USERNAME=myuser DOCKER_PASSWORD=mypass ./zynctl.sh deploy

Auto-Patched Variables

These are set automatically during deploy — no manual editing needed:

| Variable | Source |
| --- | --- |
| `EC2_PUBLIC_IP` | From `SERVER_HOST` |
| `NEXTAUTH_URL` | `http://<SERVER_HOST>:3000` |
| `NEXTAUTH_SECRET` | Auto-generated (`openssl rand`) |
| `SUPABASE_SERVICE_ROLE_KEY` | Extracted from Supabase config |
| `FRAPPE_API_TOKEN` | Generated via Frappe backend |
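
`./zynctl.sh env-check` validates the file for you, but the same idea can be sketched by hand; the function below greps for non-empty values (the example path follows the default `DEPLOY_PATH`):

```shell
# Report which auto-patched variables are present and non-empty.
# check_env_vars <env-file>
check_env_vars() {
  local file=$1 var
  for var in EC2_PUBLIC_IP NEXTAUTH_URL NEXTAUTH_SECRET \
             SUPABASE_SERVICE_ROLE_KEY FRAPPE_API_TOKEN; do
    if grep -qE "^${var}=.+" "$file" 2>/dev/null; then
      echo "ok: $var is set"
    else
      echo "MISSING: $var is empty or absent"
    fi
  done
}

# Example: check_env_vars /opt/ctms-deployment/.env.production
```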

Optional (post-deploy)

Edit /opt/ctms-deployment/.env.production to set:

| Variable | Purpose |
| --- | --- |
| `OPENAI_API_KEY` | MCP AI chat features |
| `ACME_EMAIL` | Let's Encrypt HTTPS (when using a domain) |

Recovery: resume-deploy

If the deploy dies mid-way (e.g., network timeout, Docker rate limit) but Supabase and Frappe are already running:

cd /opt/ctms-deployment
./zynctl.sh resume-deploy

This picks up from Step 7 (token generation) through Step 12 (health checks + summary). It verifies Supabase and Frappe containers are running before proceeding.
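
You can confirm that precondition yourself before resuming; a sketch that checks a `docker ps` name listing for both stacks (the name filters are assumptions based on the stack names above):

```shell
# Check that the Supabase and Frappe stacks appear in a container name listing.
# Usage: docker ps --format '{{.Names}}' | check_stacks
check_stacks() {
  local names stack
  names=$(cat)
  for stack in supabase frappe; do
    if echo "$names" | grep -q "$stack"; then
      echo "ok: $stack containers running"
    else
      echo "NOT RUNNING: $stack (run ./zynctl.sh deploy instead)"
    fi
  done
}
```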


Day-2 Operations

Update images

./zynctl.sh update

Pulls latest zynomi/* images and recreates containers. Data volumes are preserved.

Update to new bundle version

tar xzf zynctl-bundle-<new-version>.tar.gz
cd zynctl-bundle-<new-version>
SERVER_HOST=<ip> ./zynctl.sh deploy

Restart a single service

cd /opt/ctms-deployment
DC="docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production"
$DC --profile analytics up -d cube
> **Warning:** Use `$DC up -d <service>` (recreate) rather than `docker compose restart`; `restart` does not pick up `.env.production` changes.


Bundle Contents

zynctl-bundle-<version>/
├── zynctl.sh # Platform controller
├── zynctl.conf.example # Config template (Docker Hub creds included)
├── VERSION
├── .env.example # Environment template
├── docker-compose.yml # Base compose stack
├── docker-compose.prod.yml # Production overlay (IP-based ports)
├── caddy/ # Reverse proxy configs
├── config/ # OTEL, Fluent Bit configs
├── ctms-api-gateway/configs/ # KrakenD API gateway
├── scripts/frappe-setup/ # Frappe setup scripts (entrypoint, seed, token gen)
├── supabase/ # Supabase stack (13 services)
└── frappe-marley-health/ # Frappe stack

Troubleshooting

Docker Hub rate limit

Error: You have reached your unauthenticated pull rate limit

Set Docker Hub credentials in zynctl.conf or login manually:

docker login -u <your-username>

Then resume:

./zynctl.sh resume-deploy

Frappe API token not detected

docker exec frappe-marley-health-backend-1 bench --site frontend execute \
frappe.core.doctype.user.user.generate_keys --args "['Administrator']"

Patch the output into .env.production:

sed -i "s|FRAPPE_API_TOKEN=.*|FRAPPE_API_TOKEN=<key>:<secret>|" \
/opt/ctms-deployment/.env.production /opt/ctms-deployment/.env

Docker group membership after bootstrap

exit
ssh root@<server-ip> # Re-login picks up group change

Or use ./zynctl.sh full-deploy which handles this automatically.

Health check shows ❌ but service works

On Rocky Linux 10, Docker health checks may resolve localhost to IPv6 (::1) while services listen on IPv4. The service works fine — zynctl.sh health uses 127.0.0.1 to avoid this.
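
To see the difference directly, force each address family with curl (port 3000 is just an example; `|| true` keeps a refused connection non-fatal):

```shell
# Probe the same service over IPv4 and IPv6 explicitly.
curl -4 -s -o /dev/null -w 'IPv4 127.0.0.1: HTTP %{http_code}\n' http://127.0.0.1:3000 || true
curl -6 -s -o /dev/null -w 'IPv6 [::1]:    HTTP %{http_code}\n' 'http://[::1]:3000' || true
```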

Re-run ctms-init after a failure

If ctms-init exited with an error (e.g., Healthcare Practitioner Stage 5 fails with 417 EXPECTATION FAILED), re-run it:

cd /opt/ctms-deployment

docker compose -f docker-compose.yml -f docker-compose.prod.yml \
--env-file .env.production --profile init run --rm ctms-init

All stages are idempotent — completed stages skip automatically, only the failed stage retries.