Bundle Deployment
This is the recommended and supported way to deploy the CTMS platform. A single self-contained bundle — no GitHub token, no git clone, no manual Docker builds.
SERVER_HOST=<your-ip> ./zynctl.sh full-deploy
The zynctl.sh script is written and tested for RHEL-family Linux distributions:
- Rocky Linux 9 / 10
- AlmaLinux 9
- Amazon Linux 2023
- RHEL 9 / CentOS Stream 9
The bootstrap step installs Docker and system dependencies using dnf. Other distributions (Ubuntu, Debian) are not currently supported by the automated bootstrap — you would need to install Docker manually first.
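If you are unsure whether your host is RHEL-family, a quick check against /etc/os-release settles it before you run bootstrap. This is a minimal sketch (the `is_rhel_family` helper is illustrative, not part of zynctl.sh):

```shell
#!/usr/bin/env bash
# Sketch: confirm the host is a dnf-based (RHEL-family) distro before
# running `./zynctl.sh bootstrap`. Takes an os-release file path so it
# can be tested against any file (defaults to /etc/os-release).
is_rhel_family() {
  local os_release="${1:-/etc/os-release}"
  # Match ID or ID_LIKE against known RHEL-family identifiers.
  grep -Eq '^(ID|ID_LIKE)=.*\b(rhel|fedora|centos|rocky|almalinux|amzn)\b' \
    "$os_release" 2>/dev/null
}

if is_rhel_family; then
  echo "RHEL-family detected: bootstrap should work"
else
  echo "Not RHEL-family: install Docker manually before deploying"
fi
```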
Prerequisites
Before you begin, make sure you have the following ready:
| Requirement | Details |
|---|---|
| VM / Server IP | A publicly reachable IP address for your server. You will configure this as SERVER_HOST. |
| OS | Rocky Linux 9/10, AlmaLinux 9, Amazon Linux 2023, or RHEL 9 |
| RAM | 8 GB minimum (16 GB recommended) |
| CPU | 4 vCPU minimum (8 vCPU recommended) |
| Disk | 50 GB free |
| Root Access | SSH root or sudo access to the server |
| Network | Outbound internet for Docker Hub image pulls |
| Docker Hub Account | Required to avoid pull rate limits (anonymous: 100 pulls/6h); a free account is sufficient |
Without Docker Hub credentials, anonymous pulls are limited to 100 per 6 hours. A team of 4 sharing a server IP will hit this limit quickly. Always configure DOCKER_USERNAME and DOCKER_PASSWORD in zynctl.conf before deploying.
Ports Used
The following ports must be free on the server:
| Port | Service |
|---|---|
| 3000 | Zynexa (CTMS App) |
| 3001 | Sublink (Mobile) |
| 4000 | Cube.js (Analytics) |
| 5080 | OpenObserve |
| 8000 | Supabase Studio |
| 8001 | ODM API |
| 8006 | MCP Server (AI) |
| 8080 | Frappe |
| 9080 | API Gateway (KrakenD) |
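You can verify the ports are free before deploying. This sketch uses bash's built-in /dev/tcp redirection so it needs no extra tools (a port that accepts a connection is reported as in use); it is illustrative, not part of zynctl.sh:

```shell
#!/usr/bin/env bash
# Sketch: check that the ports CTMS needs are not already bound.
check_ports() {
  local busy=0 port
  for port in "$@"; do
    # /dev/tcp connect succeeds only if something is listening.
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      echo "port $port: IN USE"
      busy=1
    else
      echo "port $port: free"
    fi
  done
  return "$busy"
}

if check_ports 3000 3001 4000 5080 8000 8001 8006 8080 9080; then
  echo "all required ports are free"
else
  echo "free the ports marked IN USE before deploying"
fi
```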
Quick Start
1. Download the bundle
From GitHub Releases, download the latest zynctl-bundle-*.tar.gz.
# On the server (or SCP from your local machine)
curl -LO https://github.com/zynomilabs/ctms.devops/releases/download/bundle-v<version>/zynctl-bundle-<version>.tar.gz
2. Install tar (if not already present)
sudo dnf install -y tar
3. Extract and configure
tar xzf zynctl-bundle-<version>.tar.gz
cd zynctl-bundle-<version>
# Create config
cp zynctl.conf.example zynctl.conf
Edit zynctl.conf with your server IP and Docker Hub credentials:
# ── Required ──────────────────────────────────
SERVER_HOST=203.0.113.50 # ← Your VM's public IP
# ── Docker Hub (required to avoid rate limits) ─
DOCKER_USERNAME=your-dockerhub-username
DOCKER_PASSWORD=your-dockerhub-password
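A quick sanity check catches values left at their placeholders before you deploy. This sketch is illustrative only (the bundled ./zynctl.sh env-check performs the real validation, against .env.production):

```shell
#!/usr/bin/env bash
# Sketch: flag unset or placeholder values in zynctl.conf.
check_conf() {
  local conf="$1" var val errors=0
  # shellcheck disable=SC1090
  . "$conf"
  for var in SERVER_HOST DOCKER_USERNAME DOCKER_PASSWORD; do
    val="${!var}"
    # Empty, or still the "your-..." placeholder from the example file?
    if [ -z "$val" ] || [[ "$val" == your-* ]]; then
      echo "ERROR: $var is unset or still a placeholder" >&2
      errors=1
    fi
  done
  return "$errors"
}

# Usage: check_conf zynctl.conf && echo "config looks sane"
```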
4. Deploy
# One command: bootstrap + deploy
./zynctl.sh full-deploy
Or step-by-step:
./zynctl.sh bootstrap # Install Docker, Compose, system tuning
exit # Re-login for Docker group membership
ssh root@<server-ip>
cd zynctl-bundle-<version>
./zynctl.sh deploy # Full 12-step deploy
5. Verify
./zynctl.sh health
Deployment takes ~10–15 minutes. All Docker images are pre-built — nothing compiles on the server.
What deploy Does
| Step | Action | ~Time |
|---|---|---|
| 1 | Copy bundle to /opt/ctms-deployment | 5s |
| 2 | Create .env.production from template, patch IP, generate secrets | 5s |
| 3 | Create ctms-network, authenticate Docker Hub | 2s |
| 4 | Start Supabase (13 services), extract service role key | 60s |
| 5 | Seed CTMS tables into Supabase | 30s |
| 6 | Start Frappe, wait for site creation and backend API readiness | 120s |
| 7 | Generate Frappe API token, create UOM "Unit" | 10s |
| 8 | Run CTMS Init (DocTypes → Custom Fields → RBAC → Master Data → Practitioner) | 60s |
| 9 | Pull all CTMS Docker images | 60s |
| 10 | Start CTMS platform (core + analytics + observability) | 30s |
| 11 | Health-check 7 endpoints, seed 4 demo users | 30s |
| 12 | Print service URLs and credentials | — |
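Steps 4 and 6 spend most of their time waiting for services to come up. The waits inside zynctl.sh boil down to a retry loop like this sketch (the `wait_for` helper and the curl probe are illustrative, not the script's actual code):

```shell
#!/usr/bin/env bash
# Sketch: retry a probe command until it succeeds or attempts run out.
wait_for() {
  local attempts="$1" delay="$2"; shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
  done
  echo "gave up after $attempts attempts: $*" >&2
  return 1
}

# Example: wait up to ~2 minutes for Frappe to answer on port 8080.
# wait_for 24 5 curl -fsS http://127.0.0.1:8080
```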
Commands Reference
Setup
| Command | Description |
|---|---|
| ./zynctl.sh bootstrap | Install Docker, Compose, tune system |
| ./zynctl.sh deploy | Full first-time deployment |
| ./zynctl.sh full-deploy | Bootstrap + Deploy in one shot |
| ./zynctl.sh resume-deploy | Resume from Step 7 (when Supabase + Frappe are already running) |
Operations
| Command | Description |
|---|---|
| ./zynctl.sh status | Show running containers |
| ./zynctl.sh health | Check all service endpoints |
| ./zynctl.sh seed-users | Seed demo users (idempotent) |
| ./zynctl.sh logs [service] | View container logs |
| ./zynctl.sh stop | Stop all stacks (reverse order) |
| ./zynctl.sh restart | Restart CTMS core services |
| ./zynctl.sh update | Pull latest images + restart |
| ./zynctl.sh destroy | Remove everything including data (irreversible) |
Diagnostics
| Command | Description |
|---|---|
| ./zynctl.sh env-check | Validate .env.production for placeholders |
| ./zynctl.sh info | Show version, config, and system info |
Services After Deployment
| Service | URL | Purpose |
|---|---|---|
| Zynexa | http://<IP>:3000 | Main CTMS application |
| Sublink | http://<IP>:3001 | Mobile companion app |
| API Gateway | http://<IP>:9080 | KrakenD gateway |
| Cube.js | http://<IP>:4000 | Analytics semantic layer |
| MCP Server | http://<IP>:8006 | AI integration |
| ODM API | http://<IP>:8001 | ODM document service |
| OpenObserve | http://<IP>:5080 | Observability |
| Supabase Studio | http://<IP>:8000 | Database management |
| Frappe | http://<IP>:8080 | ERP / DocType engine |
Default Credentials
| Account | Username | Password |
|---|---|---|
| Demo — Platform Admin | kiran.v@zynomi.com | ●●●●●● |
| Demo — Study Coordinator | michael.x@zynomi.com | ●●●●●● |
| Demo — Study Designer | roshini.s@zynomi.com | ●●●●●● |
| Demo — Principal Investigator | peter.p@zynomi.com | ●●●●●● |
| Frappe Admin | Administrator | ●●●●●● |
| OpenObserve | admin@ctms.local | ●●●●●● |
All demo users share the same default password. Run ./zynctl.sh seed-users to see credentials printed in the terminal, or check the deploy log output.
Configuration
zynctl.conf
# Server IP (required — or pass via SERVER_HOST env var)
SERVER_HOST=203.0.113.50
# Deploy directory (default: /opt/ctms-deployment)
# DEPLOY_PATH=/opt/ctms-deployment
# Docker Hub credentials (required to avoid rate limits)
DOCKER_USERNAME=your-dockerhub-username
DOCKER_PASSWORD=your-dockerhub-password
All settings can also be passed as environment variables:
SERVER_HOST=1.2.3.4 DOCKER_USERNAME=myuser DOCKER_PASSWORD=mypass ./zynctl.sh deploy
Auto-Patched Variables
These are set automatically during deploy — no manual editing needed:
| Variable | Source |
|---|---|
| EC2_PUBLIC_IP | From SERVER_HOST |
| NEXTAUTH_URL | http://<SERVER_HOST>:3000 |
| NEXTAUTH_SECRET | Auto-generated (openssl rand) |
| SUPABASE_SERVICE_ROLE_KEY | Extracted from Supabase config |
| FRAPPE_API_TOKEN | Generated via Frappe backend |
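The auto-patching in step 2 amounts to substituting values over the env template. This sketch shows the idea on a miniature inline template (the real template is .env.example, and the exact sed logic inside zynctl.sh may differ):

```shell
#!/usr/bin/env bash
# Sketch: patch the server IP and a generated secret into an env file.
SERVER_HOST=203.0.113.50
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
EC2_PUBLIC_IP=REPLACE_ME
NEXTAUTH_URL=REPLACE_ME
NEXTAUTH_SECRET=REPLACE_ME
EOF

sed -i \
  -e "s|^EC2_PUBLIC_IP=.*|EC2_PUBLIC_IP=${SERVER_HOST}|" \
  -e "s|^NEXTAUTH_URL=.*|NEXTAUTH_URL=http://${SERVER_HOST}:3000|" \
  -e "s|^NEXTAUTH_SECRET=.*|NEXTAUTH_SECRET=$(openssl rand -hex 32)|" \
  "$env_file"

cat "$env_file"
```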
Optional (post-deploy)
Edit /opt/ctms-deployment/.env.production to set:
| Variable | Purpose |
|---|---|
| OPENAI_API_KEY | MCP AI chat features |
| ACME_EMAIL | Let's Encrypt HTTPS (when using a domain) |
Recovery: resume-deploy
If the deploy fails midway (e.g., a network timeout or Docker rate limit) but Supabase and Frappe are already running:
cd /opt/ctms-deployment
./zynctl.sh resume-deploy
This picks up from Step 7 (token generation) through Step 12 (health checks + summary). It verifies Supabase and Frappe containers are running before proceeding.
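The precondition check boils down to verifying that the expected containers appear in `docker ps` output. A minimal sketch of that idea (the `containers_running` helper and the container names shown in the usage comment are illustrative assumptions, not zynctl.sh internals):

```shell
#!/usr/bin/env bash
# Sketch: verify required container names are present in ps output.
containers_running() {
  local ps_output="$1"; shift
  local name
  for name in "$@"; do
    if ! grep -qF "$name" <<<"$ps_output"; then
      echo "missing container: $name" >&2
      return 1
    fi
  done
}

# Usage (on the server; names are examples only):
# containers_running "$(docker ps --format '{{.Names}}')" supabase-db frappe-marley-health-backend-1
```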
Day-2 Operations
Update images
./zynctl.sh update
Pulls latest zynomi/* images and recreates containers. Data volumes are preserved.
Update to new bundle version
tar xzf zynctl-bundle-<new-version>.tar.gz
cd zynctl-bundle-<new-version>
SERVER_HOST=<ip> ./zynctl.sh deploy
Restart a single service
cd /opt/ctms-deployment
DC="docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production"
$DC --profile analytics up -d cube
Use $DC up -d <service> (which recreates the container) rather than docker compose restart; restart reuses the existing container and does not pick up .env.production changes.
Bundle Contents
zynctl-bundle-<version>/
├── zynctl.sh # Platform controller
├── zynctl.conf.example # Config template (Docker Hub creds included)
├── VERSION
├── .env.example # Environment template
├── docker-compose.yml # Base compose stack
├── docker-compose.prod.yml # Production overlay (IP-based ports)
├── caddy/ # Reverse proxy configs
├── config/ # OTEL, Fluent Bit configs
├── ctms-api-gateway/configs/ # KrakenD API gateway
├── scripts/frappe-setup/ # Frappe setup scripts (entrypoint, seed, token gen)
├── supabase/ # Supabase stack (13 services)
└── frappe-marley-health/ # Frappe stack
Troubleshooting
Docker Hub rate limit
Error: You have reached your unauthenticated pull rate limit
Set Docker Hub credentials in zynctl.conf or login manually:
docker login -u <your-username>
Then resume:
./zynctl.sh resume-deploy
Frappe API token not detected
docker exec frappe-marley-health-backend-1 bench --site frontend execute \
frappe.core.doctype.user.user.generate_keys --args "['Administrator']"
Patch the output into .env.production:
sed -i "s|FRAPPE_API_TOKEN=.*|FRAPPE_API_TOKEN=<key>:<secret>|" \
/opt/ctms-deployment/.env.production /opt/ctms-deployment/.env
Docker group membership after bootstrap
exit
ssh root@<server-ip> # Re-login picks up group change
Or use ./zynctl.sh full-deploy which handles this automatically.
Health check shows ❌ but service works
On Rocky Linux 10, Docker health checks may resolve localhost to IPv6 (::1) while services listen on IPv4. The service works fine — zynctl.sh health uses 127.0.0.1 to avoid this.
Re-run ctms-init after a failure
If ctms-init exited with an error (e.g., Healthcare Practitioner Stage 5 fails with 417 EXPECTATION FAILED), re-run it:
cd /opt/ctms-deployment
docker compose -f docker-compose.yml -f docker-compose.prod.yml \
--env-file .env.production --profile init run --rm ctms-init
All stages are idempotent — completed stages skip automatically, only the failed stage retries.
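"Completed stages skip automatically" can be implemented with a marker file per stage, as in this sketch (purely illustrative; ctms-init's own bookkeeping may differ):

```shell
#!/usr/bin/env bash
# Sketch: stage runner that skips stages already marked done.
STATE_DIR=$(mktemp -d)

run_stage() {
  local name="$1"; shift
  if [ -f "$STATE_DIR/$name.done" ]; then
    echo "stage $name: already done, skipping"
    return 0
  fi
  echo "stage $name: running"
  # Record the marker only if the stage's command succeeded, so a
  # failed stage retries on the next run.
  "$@" && touch "$STATE_DIR/$name.done"
}

run_stage doctypes true   # runs and records the marker
run_stage doctypes true   # skips: marker already present
```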