Installation
The CTMS platform ships as a single self-contained bundle — everything needed to install, configure, and run the platform in one package. No GitHub token, no git clone, no manual Docker builds. One command gets you running.
SERVER_HOST=<your-ip> ./zynctl.sh full-deploy
The zynctl.sh script is written and tested for RHEL-family Linux distributions:
- Rocky Linux 9 / 10
- AlmaLinux 9
- Amazon Linux 2023
- RHEL 9 / CentOS Stream 9
The bootstrap step installs Docker and system dependencies using dnf. Other distributions (Ubuntu, Debian) are not currently supported by the automated bootstrap — you would need to install Docker manually first.
Prerequisites
Before you begin, make sure you have the following ready:
| Requirement | Details |
|---|---|
| VM / Server IP | A publicly reachable IP address for your server. You will configure this as SERVER_HOST. |
| OS | Rocky Linux 9/10, AlmaLinux 9, Amazon Linux 2023, RHEL 9, or CentOS Stream 9 |
| RAM | 8 GB minimum (16 GB recommended) |
| CPU | 4 vCPU minimum (8 vCPU recommended) |
| Disk | 50 GB free |
| Root Access | SSH root or sudo access to the server |
| Network | Outbound internet for Docker Hub image pulls |
| Docker Hub Account | Required to avoid pull rate limits (anonymous: 100 pulls per 6 hours). A free account is sufficient. |
Without Docker Hub credentials, anonymous pulls are limited to 100 per 6 hours. A team of 4 sharing a server IP will hit this limit quickly. Always configure DOCKER_USERNAME and DOCKER_PASSWORD in zynctl.conf before deploying.
Ports Used
The following ports must be free on the server:
| Port | Service |
|---|---|
| 3000 | Zynexa (CTMS App) |
| 3001 | Sublink (Mobile) |
| 3200 | Elementary Reports |
| 5080 | OpenObserve |
| 8000 | Supabase Studio |
| 8001 | ODM API |
| 8006 | MCP Server (AI) |
| 8080 | Frappe |
| 9080 | API Gateway (KrakenD) |
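Before deploying, you can confirm that none of these ports is already taken. A minimal pre-flight sketch (a hypothetical helper, not part of zynctl.sh), assuming a Linux host with iproute2's `ss` available:

```shell
# Hypothetical pre-flight check -- not part of zynctl.sh.
# Reports each required CTMS port as free or in use, based on `ss` listener output.
check_ports() {
  for port in 3000 3001 3200 5080 8000 8001 8006 8080 9080; do
    if ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ":${port}$"; then
      echo "port ${port}: IN USE"
    else
      echo "port ${port}: free"
    fi
  done
}
check_ports
```

Any port reported as IN USE must be freed (or its service stopped) before running the deploy.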
Quick Start
1. Download the bundle
Download the latest install bundle using the presigned download URL provided by your Zynomi account representative.
# Paste the complete download URL provided by your account representative
curl -LO '<YOUR_DOWNLOAD_URL>'
Your download URL will look like this:
https://zynomi-ctms.s3.us-east-2.amazonaws.com/zynctl-bundle-<version>.tar.gz
?X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=<credential>
&X-Amz-Date=<timestamp>
&X-Amz-Expires=<seconds>
&X-Amz-SignedHeaders=host
&X-Amz-Signature=<signature>
Your account representative will provide the complete URL with all parameters filled in; paste it as-is. The -L flag follows redirects, and -O saves the file under its remote name automatically.
Install bundles are hosted on a private S3 bucket. Each client receives a time-limited download token during onboarding. If your token has expired, contact your account representative or email contact@zynomi.com for a fresh one. See the Downloads FAQ for more details.
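If you want to check how long a download link remains valid, the X-Amz-Date and X-Amz-Expires query parameters can be parsed locally. A sketch assuming GNU date (as shipped on RHEL-family hosts); `url_expiry` is a hypothetical helper, not part of the bundle:

```shell
# Hypothetical helper: print the Unix timestamp at which a presigned S3 URL expires.
# Assumes GNU date (coreutils), standard on RHEL-family distributions.
url_expiry() {
  url="$1"
  start=$(printf '%s' "$url" | sed -n 's/.*X-Amz-Date=\([0-9TZ]*\).*/\1/p')
  ttl=$(printf '%s' "$url" | sed -n 's/.*X-Amz-Expires=\([0-9]*\).*/\1/p')
  # Convert 20240501T120000Z -> "2024-05-01 12:00:00 UTC" so date(1) can parse it
  ts=$(printf '%s' "$start" | sed 's/\(....\)\(..\)\(..\)T\(..\)\(..\)\(..\)Z/\1-\2-\3 \4:\5:\6 UTC/')
  echo $(( $(date -u -d "$ts" +%s) + ttl ))
}
```

Compare the printed timestamp to `date +%s`; if it is in the past, request a fresh URL.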
2. Install tar (if not already present)
sudo dnf install -y tar
3. Extract and configure
tar xzf zynctl-bundle-<version>.tar.gz
cd zynctl-bundle-<version>
# Create config
cp zynctl.conf.example zynctl.conf
Edit zynctl.conf with your server IP and Docker Hub credentials:
# ── Required ──────────────────────────────────
SERVER_HOST=<your-server-ip>
# ── Docker Hub (required to avoid rate limits) ─
DOCKER_USERNAME=<your-dockerhub-username>
DOCKER_PASSWORD=<your-dockerhub-password>
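Before deploying, it is worth confirming that no `<placeholder>` values remain in the file (zynctl.sh's env-check performs a similar validation for .env.production). A minimal sketch; `conf_ready` is a hypothetical helper:

```shell
# Hypothetical check: succeed only if the config no longer contains <placeholder> values.
conf_ready() {
  ! grep -qE '<[A-Za-z][A-Za-z0-9-]*>' "$1"
}
conf_ready zynctl.conf || echo "zynctl.conf still has placeholders -- edit it first" >&2
```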
4. Deploy
# One command: bootstrap + deploy
./zynctl.sh full-deploy
Or step-by-step:
./zynctl.sh bootstrap # Install Docker, Compose, system tuning
exit # Re-login for Docker group membership
ssh root@<server-ip>
cd zynctl-bundle-<version>
./zynctl.sh deploy # Full 12-step deploy
5. Verify
./zynctl.sh health
Deployment takes ~10–15 minutes. All Docker images are pre-built — nothing compiles on the server.
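If `health` reports failures immediately after deploy, the services may simply still be warming up. A small polling sketch (a hypothetical helper, not part of zynctl.sh) that waits for a URL to start answering:

```shell
# Hypothetical wait loop: poll a URL until it responds, with a bounded retry count.
wait_for() {
  url="$1"; tries="${2:-60}"   # default: 60 attempts x 5s = ~5 minutes
  i=0
  until curl -fsS -o /dev/null --max-time 5 "$url"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 5
  done
}
# Usage: wait_for "http://$SERVER_HOST:3000" && ./zynctl.sh health
```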
What deploy Does
| Step | Action | ~Time |
|---|---|---|
| 1 | Copy bundle to /opt/ctms-deployment | 5s |
| 2 | Create .env.production from template, patch IP, generate secrets | 5s |
| 3 | Create ctms-network, authenticate Docker Hub | 2s |
| 4 | Start Supabase (13 services), extract service role key | 60s |
| 5 | Seed CTMS tables into Supabase | 30s |
| 6 | Start Frappe, wait for site creation and backend API readiness | 120s |
| 6b | Wait for Frappe setup container to exit (ensures token is ready) | 10–120s |
| 7 | Extract Frappe API token from setup container logs | 10s |
| 8 | Force-pull & run CTMS Init (DocTypes → Custom Fields → RBAC → Item Groups → Master Data → Practitioner) | 60s |
| 9 | Pull all CTMS Docker images | 60s |
| 10 | Start CTMS platform (core + analytics + observability) | 30s |
| 11 | Health-check 9 endpoints, seed 4 demo users | 30s |
| 12 | Print service URLs and credentials | — |
Commands Reference
Setup
| Command | Description |
|---|---|
| ./zynctl.sh bootstrap | Install Docker, Compose, tune system |
| ./zynctl.sh deploy | Full first-time deployment |
| ./zynctl.sh full-deploy | Bootstrap + Deploy in one shot |
| ./zynctl.sh resume-deploy | Resume from Step 6b (when Supabase + Frappe are already running) |
Operations
| Command | Description |
|---|---|
| ./zynctl.sh status | Show running containers |
| ./zynctl.sh health | Check all service endpoints |
| ./zynctl.sh seed-users | Seed demo users (idempotent) |
| ./zynctl.sh logs [service] | View container logs |
| ./zynctl.sh stop | Stop all stacks (reverse order) |
| ./zynctl.sh restart | Restart CTMS core services |
| ./zynctl.sh update | Pull latest images + restart |
| ./zynctl.sh refresh-token | Re-generate Frappe API token and patch .env.production |
| ./zynctl.sh post-snapshot | Fix IPs, tokens, and URLs after cloning from a snapshot |
| ./zynctl.sh destroy | Remove everything including data (irreversible) |
Diagnostics
| Command | Description |
|---|---|
| ./zynctl.sh env-check | Validate .env.production for placeholders |
| ./zynctl.sh info | Show version, config, and system info |
Services After Deployment
| Service | URL | Purpose |
|---|---|---|
| Zynexa | http://<IP>:3000 | Main CTMS application |
| Sublink | http://<IP>:3001 | Mobile companion app |
| API Gateway | http://<IP>:9080 | KrakenD gateway |
| Cube.js | http://<IP>:4000 | Analytics semantic layer |
| MCP Server | http://<IP>:8006 | AI integration |
| ODM API | http://<IP>:8001 | ODM document service |
| Grafana | http://<IP>:3100 | Analytics dashboards |
| OpenObserve | http://<IP>:5080 | Observability |
| Elementary Reports | http://<IP>:3200 | dbt data quality reports |
| Supabase Studio | http://<IP>:8000 | Database management |
| Frappe | http://<IP>:8080 | ERP / DocType engine |
Default Credentials
| Account | Username | Password |
|---|---|---|
| Demo — Platform Admin | kiran.v@zynomi.com | ●●●●●● |
| Demo — Study Coordinator | michael.x@zynomi.com | ●●●●●● |
| Demo — Study Designer | roshini.s@zynomi.com | ●●●●●● |
| Demo — Principal Investigator | peter.p@zynomi.com | ●●●●●● |
| Frappe Admin | Administrator | ●●●●●● |
| OpenObserve | admin@ctms.local | ●●●●●● |
| Grafana | admin@ctms.local | ●●●●●● |
All demo users share the same default password. Run ./zynctl.sh seed-users to see credentials printed in the terminal, or check the deploy log output.
Configuration
zynctl.conf
# Server IP (required — or pass via SERVER_HOST env var)
SERVER_HOST=203.0.113.50
# Deploy directory (default: /opt/ctms-deployment)
# DEPLOY_PATH=/opt/ctms-deployment
# Docker Hub credentials (required to avoid rate limits)
DOCKER_USERNAME=your-dockerhub-username
DOCKER_PASSWORD=your-dockerhub-password
All settings can also be passed as environment variables:
SERVER_HOST=1.2.3.4 DOCKER_USERNAME=myuser DOCKER_PASSWORD=mypass ./zynctl.sh deploy
Auto-Patched Variables
These are set automatically during deploy — no manual editing needed:
| Variable | Source |
|---|---|
| EC2_PUBLIC_IP | From SERVER_HOST |
| NEXTAUTH_URL | http://<SERVER_HOST>:3000 |
| NEXTAUTH_SECRET | Auto-generated (openssl rand) |
| SUPABASE_SERVICE_ROLE_KEY | Extracted from Supabase config |
| FRAPPE_API_TOKEN | Generated via Frappe backend |
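Should you ever need to rotate one of these by hand, NEXTAUTH_SECRET for example can be regenerated with the same openssl rand approach. A hedged sketch for illustration only; `patch_secret` is a hypothetical helper, and deploy normally does this for you (FRAPPE_API_TOKEN specifically is covered by refresh-token):

```shell
# Sketch: regenerate NEXTAUTH_SECRET in an env file by hand.
# Hypothetical helper -- the deploy script handles this automatically.
patch_secret() {
  env_file="$1"
  secret=$(openssl rand -base64 32)   # same openssl rand approach the deploy uses
  sed -i "s|^NEXTAUTH_SECRET=.*|NEXTAUTH_SECRET=${secret}|" "$env_file"
}
# Usage: patch_secret /opt/ctms-deployment/.env.production
```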
Optional (post-deploy)
Edit /opt/ctms-deployment/.env.production to set:
| Variable | Purpose |
|---|---|
| OPENAI_API_KEY | MCP AI chat features |
| ACME_EMAIL | Let's Encrypt HTTPS (when using a domain) |
Recovery: resume-deploy
If the deploy dies mid-way (e.g., network timeout, Docker rate limit) but Supabase and Frappe are already running:
cd /opt/ctms-deployment
./zynctl.sh resume-deploy
This picks up from Step 6b (wait for setup container) through Step 12 (health checks + summary). It verifies Supabase and Frappe containers are running before proceeding.
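The same precondition can be checked by hand before resuming. A sketch; the `supabase` and `frappe` name patterns are assumptions about how the containers are named, and `containers_ready` is a hypothetical helper:

```shell
# Hypothetical precondition check for resume-deploy.
# Takes the output of `docker ps --format '{{.Names}}'` and verifies both stacks are up.
containers_ready() {
  names="$1"
  printf '%s\n' "$names" | grep -q 'supabase' &&
  printf '%s\n' "$names" | grep -q 'frappe'
}
# Usage: containers_ready "$(docker ps --format '{{.Names}}')" && ./zynctl.sh resume-deploy
```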
Optional: Seed Demo Data
The bundle deployment automatically runs platform provisioning (DocTypes, RBAC, master data, practitioner) and seeds 4 demo users during Step 11. If you skipped the user seeding step or want to add demo patients for testing:
# Seed 4 demo users (idempotent — safe to re-run)
./zynctl.sh seed-users
# Seed 20 synthetic demo patients (optional)
cd /opt/ctms-deployment
DC="docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production"
$DC --profile init run --rm ctms-patient-seed
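Storing the compose invocation in a plain string as above works, but it relies on shell word splitting. A small wrapper function is a more robust variant of the same idea (a sketch, not part of the bundle):

```shell
# Same compose invocation as the DC variable above, wrapped in a function.
# Arguments pass through via "$@", so quoting behaves predictably.
dc() {
  docker compose -f docker-compose.yml -f docker-compose.prod.yml \
    --env-file .env.production "$@"
}
# Usage: dc --profile init run --rm ctms-patient-seed
```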
For detailed provisioning options — dry-run mode, selective stages, environment variables — see the Platform Provisioning Commands recipe.
Bundle Contents
zynctl-bundle-<version>/
├── zynctl.sh # Platform controller
├── zynctl.conf.example # Config template (Docker Hub creds included)
├── VERSION
├── .env.example # Environment template
├── docker-compose.yml # Base compose stack
├── docker-compose.prod.yml # Production overlay (IP-based ports)
├── caddy/ # Reverse proxy configs
├── config/ # OTEL, Fluent Bit configs
├── ctms-api-gateway/configs/ # KrakenD API gateway
├── scripts/frappe-setup/ # Frappe setup scripts (entrypoint, seed, token gen)
├── supabase/ # Supabase stack (13 services)
└── frappe-marley-health/ # Frappe stack
What's Next?
After a successful deployment, run the Deployment Verification recipe to confirm all 10 services are healthy and seed data was provisioned correctly. It includes a one-shot verification script and commands to manually seed the Healthcare Practitioner, demo users, and demo patients if needed.
Related Docs
- Deployment Verification — end-to-end health checks, seed data verification, and manual provisioning commands
- Debugging & Troubleshooting — Docker Hub rate limits, Frappe token issues, permission errors, health check false negatives, ctms-init recovery
- System Requirements
- Docker Compose Profiles
- Environment Variables
- Platform Runbook
- GitHub Actions CI/CD