# AWS EC2 Deployment Guide
This guide covers deploying the CTMS platform on AWS EC2 with support for both IP-based access (Phase 1) and DNS-based access with HTTPS (Phase 2).
## Prerequisites
- AWS EC2 instance (Ubuntu 22.04 LTS recommended)
- Docker and Docker Compose installed
- Security group with required ports open
- (Optional) Route53 hosted zone for DNS
## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                        AWS EC2 Instance                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                   Docker Compose Stack                    │  │
│  │                                                           │  │
│  │  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐       │  │
│  │  │ Zynexa  │  │ Sublink │  │  Cube   │  │   MCP   │       │  │
│  │  │  :3000  │  │  :3001  │  │  :4000  │  │  :8006  │       │  │
│  │  └─────────┘  └─────────┘  └─────────┘  └─────────┘       │  │
│  │                                                           │  │
│  │  ┌─────────┐  ┌─────────┐  ┌─────────────────────────┐    │  │
│  │  │   ODM   │  │ KrakenD │  │     Caddy (Phase 2)     │    │  │
│  │  │  :8001  │  │  :9080  │  │       :80 / :443        │    │  │
│  │  └─────────┘  └─────────┘  └─────────────────────────┘    │  │
│  │                                                           │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
          │                                  │
          │ Phase 1: Direct IP Access        │ Phase 2: DNS + HTTPS
          ▼                                  ▼
   http://IP:3000                  https://app.domain.com
   http://IP:4000                  https://cube.domain.com
```
## Phase 1: IP-Based Access
Direct access via EC2 public IP. Suitable for initial testing and client demos.
### Security Group Configuration

Open these inbound ports:

| Port | Protocol | Description |
|---|---|---|
| 22 | TCP | SSH access |
| 3000 | TCP | Zynexa web app |
| 3001 | TCP | Sublink mobile app |
| 4000 | TCP | Cube.dev API |
| 5080 | TCP | OpenObserve dashboard |
| 8001 | TCP | ODM API |
| 8006 | TCP | MCP Server |
| 9080 | TCP | KrakenD API Gateway |
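If you manage the security group from the CLI, the Phase 1 ports can be opened with `aws ec2 authorize-security-group-ingress`. A minimal sketch — the group ID is a placeholder, and the loop prints each command for review rather than executing it:

```bash
# Hypothetical security group ID; substitute your own from the EC2 console.
SG_ID="${SG_ID:-sg-0123456789abcdef0}"

# Print one authorize command per Phase 1 port so you can review before running.
for port in 22 3000 3001 4000 5080 8001 8006 9080; do
  echo "aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port $port --cidr 0.0.0.0/0"
done
```

Consider replacing `0.0.0.0/0` with your office CIDR, at least for port 22.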
### Deployment Steps

```bash
# 1. SSH into the EC2 instance
ssh -i your-key.pem ubuntu@<EC2_PUBLIC_IP>

# 2. Clone the repository
git clone https://github.com/zynomi/ctms.devops.git
cd ctms.devops

# 3. Set up the environment
cp .env.example .env.production
nano .env.production  # Edit with your values

# 4. Deploy with the Phase 1 configuration
./scripts/install.sh phase1

# Or manually:
docker compose -f docker-compose.yml -f docker-compose.prod.yml --profile all up -d

# For instance-specific deployments:
docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.<instance>.prod --profile all up -d
```
### Instance-Specific Environment Files

For multi-tenant deployments, create instance-specific environment files:

```
# File naming convention
.env.example          # Template (commit-safe)
.env.production       # Default production
.env.<instance>.prod  # Instance-specific production

# Examples
.env.zynomi.prod      # Zynomi instance
.env.clientname.prod  # Another instance
```
Deploy with the appropriate env file using `--env-file`:

```bash
docker compose -f docker-compose.yml -f docker-compose.prod.yml \
  --env-file .env.zynomi.prod \
  --profile all up -d
```
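The file-selection convention above can be wrapped in a small helper so deploy scripts fall back to the default file when no instance-specific one exists. A sketch (the wrapper itself is hypothetical, not part of the repository):

```bash
# Resolve which env file to pass to --env-file: an instance-specific
# .env.<instance>.prod if present, otherwise the default .env.production.
pick_env_file() {
  local instance="$1"
  if [ -n "$instance" ] && [ -f ".env.${instance}.prod" ]; then
    echo ".env.${instance}.prod"
  else
    echo ".env.production"
  fi
}

# Hypothetical usage:
# docker compose -f docker-compose.yml -f docker-compose.prod.yml \
#   --env-file "$(pick_env_file zynomi)" --profile all up -d
```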
### Access URLs (Phase 1)

| Service | URL |
|---|---|
| Zynexa | `http://<EC2_IP>:3000` |
| Sublink | `http://<EC2_IP>:3001` |
| Cube.dev | `http://<EC2_IP>:4000` |
| MCP Server | `http://<EC2_IP>:8006` |
| ODM API | `http://<EC2_IP>:8001` |
| API Gateway | `http://<EC2_IP>:9080` |
## Phase 2: DNS with HTTPS
Production deployment with Route53 DNS and Let's Encrypt HTTPS certificates.
### Prerequisites

- Register a domain (e.g., `ctms.example.com`)
- Create a Route53 hosted zone
- Point nameservers to Route53
### Route53 Configuration

Create the following A records pointing to your EC2 public IP:

| Record | Type | Value |
|---|---|---|
| app.ctms.example.com | A | `<EC2_IP>` |
| api.ctms.example.com | A | `<EC2_IP>` |
| mobile.ctms.example.com | A | `<EC2_IP>` |
| cube.ctms.example.com | A | `<EC2_IP>` |
| mcp.ctms.example.com | A | `<EC2_IP>` |
| odm.ctms.example.com | A | `<EC2_IP>` |
| observe.ctms.example.com | A | `<EC2_IP>` |
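These records can also be created from the CLI with `aws route53 change-resource-record-sets`. A sketch that builds the change-batch JSON for one record — the hosted zone ID and IP below are placeholders:

```bash
# Build a Route53 UPSERT change batch for a single A record.
a_record_change() {
  local name="$1" ip="$2"
  printf '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"%s","Type":"A","TTL":300,"ResourceRecords":[{"Value":"%s"}]}}]}' "$name" "$ip"
}

# Hypothetical invocation, repeated for each record in the table above:
# aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC \
#   --change-batch "$(a_record_change app.ctms.example.com 203.0.113.10)"
```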
### Security Group Configuration

For Phase 2, update the security group so that only these ports are exposed:
| Port | Protocol | Description |
|---|---|---|
| 22 | TCP | SSH access |
| 80 | TCP | HTTP (redirect to HTTPS) |
| 443 | TCP | HTTPS |
### Deployment Steps

```bash
# 1. Set environment variables
export DOMAIN=ctms.example.com

# 2. Run the Phase 2 deployment
./scripts/install.sh phase2

# Or manually:
# Update the Caddyfile with the production config
cp caddy/Caddyfile.prod caddy/Caddyfile

# Start services
docker compose -f docker-compose.yml --profile all up -d
```
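The repository's `caddy/Caddyfile.prod` is not reproduced here, but a Phase 2 Caddyfile typically maps each subdomain to its container. A hedged sketch, assuming the upstream names match the compose services in the architecture diagram:

```
app.{$DOMAIN} {
    reverse_proxy zynexa:3000
}

mobile.{$DOMAIN} {
    reverse_proxy sublink:3001
}

cube.{$DOMAIN} {
    reverse_proxy cube:4000
}
```

Caddy obtains and renews the Let's Encrypt certificate for each site block automatically, which is why no certificate paths appear in the config.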
### Access URLs (Phase 2)
| Service | URL |
|---|---|
| Zynexa | https://app.ctms.example.com |
| Sublink | https://mobile.ctms.example.com |
| Cube.dev | https://cube.ctms.example.com |
| MCP Server | https://mcp.ctms.example.com |
| ODM API | https://odm.ctms.example.com |
| API Gateway | https://api.ctms.example.com |
## Environment Variables

Create `.env.production` with these variables:

```bash
# Domain Configuration
DOMAIN=ctms.example.com

# Supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-key

# Frappe Cloud
FRAPPE_URL=https://your-site.frappe.cloud
FRAPPE_API_TOKEN=your-api-token

# Database
DATABASE_URL=postgresql://user:pass@host:5432/ctms

# KrakenD
KRAKEND_PORT=9080
KRAKEND_TIMEOUT=3000
KRAKEND_LOG_LEVEL=INFO

# AI Services
NEXT_PUBLIC_CUBE_API_URL=https://cube.ctms.example.com
NEXT_PUBLIC_MCP_API_ENDPOINT=https://mcp.ctms.example.com
NEXT_PUBLIC_ODM_API_ENDPOINT=https://odm.ctms.example.com
```
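A missing variable usually surfaces only as a crash-looping container, so it can be worth checking the environment before deploying. A sketch of such a pre-flight check (the list of required variables is drawn from the sample above; extend it as needed, and source your env file first if the variables are not already exported):

```bash
# Fail fast if required variables are missing from the environment.
required="DOMAIN SUPABASE_URL SUPABASE_KEY DATABASE_URL"
missing=""
for var in $required; do
  # Indirectly expand each variable name to check whether it is set.
  eval "val=\${$var:-}"
  if [ -z "$val" ]; then
    missing="$missing $var"
  fi
done

if [ -n "$missing" ]; then
  echo "Missing required variables:$missing" >&2
fi
```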
## Deployment Script

The `install.sh` script automates the deployment process:

```bash
#!/bin/bash
# Usage: ./scripts/install.sh [phase1|phase2]

PHASE=${1:-phase1}
DOMAIN=${DOMAIN:-localhost}

case $PHASE in
  phase1)
    echo "Deploying Phase 1: IP-based access"
    docker compose -f docker-compose.yml -f docker-compose.prod.yml \
      --profile all up -d
    ;;
  phase2)
    echo "Deploying Phase 2: DNS with HTTPS"
    cp caddy/Caddyfile.prod caddy/Caddyfile
    docker compose -f docker-compose.yml --profile all up -d
    ;;
  *)
    echo "Usage: $0 [phase1|phase2]" >&2
    exit 1
    ;;
esac
```
## Health Checks

Verify services are running:

```bash
# Check container status
docker compose ps

# Check service health
curl http://localhost:3000/api/health
curl http://localhost:4000/readyz
curl http://localhost:9080/__health
```
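The individual `curl` calls above can be rolled into one loop that reports every endpoint instead of stopping at the first failure. A sketch, using the same endpoints (add others as needed):

```bash
# Probe each health endpoint and report OK/FAIL without aborting.
checked=0
for url in \
  http://localhost:3000/api/health \
  http://localhost:4000/readyz \
  http://localhost:9080/__health
do
  checked=$((checked + 1))
  if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done
echo "$checked endpoints checked"
```

`curl -f` makes HTTP error statuses count as failures, so a 502 from a half-started container shows up as FAIL rather than OK.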
## Monitoring

### View Logs

```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f zynexa

# Last 100 lines
docker compose logs --tail=100 -f
```
### OpenObserve Dashboard

Access the observability dashboard at:

- Phase 1: `http://<EC2_IP>:5080`
- Phase 2: `https://observe.ctms.example.com`

Default login: `admin@ctms.local` / `Admin@123` (change this default before exposing the dashboard).
## Troubleshooting

### HTTPS Certificate Issues

Caddy automatically obtains Let's Encrypt certificates. If certificate issuance fails:

- Ensure ports 80/443 are open
- Verify the DNS records are properly configured
- Check the Caddy logs: `docker compose logs caddy`
### Container Crashes

```bash
# View logs for the crashed container
docker compose logs zynexa

# Restart a specific service
docker compose restart zynexa
```
### Memory Issues

For smaller EC2 instances, you may need to limit which services run:

```bash
# Start only essential services
docker compose --profile core up -d

# Add analytics later if needed
docker compose --profile analytics up -d
```
## Backup and Recovery

### Backup Data

```bash
# Back up Docker volumes
docker run --rm -v ctms_cube_store:/data -v $(pwd):/backup \
  ubuntu tar cvf /backup/cube-backup.tar /data
```

### Restore Data

```bash
# Restore Docker volumes
docker run --rm -v ctms_cube_store:/data -v $(pwd):/backup \
  ubuntu tar xvf /backup/cube-backup.tar -C /
```
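A fixed archive name means each backup overwrites the last. A sketch that stamps each archive instead — it prints the resulting `docker run` command for review rather than executing it, and uses the volume and mount paths from the example above:

```bash
# Generate a timestamped archive name, then print the backup command.
STAMP=$(date +%Y%m%d-%H%M%S)
BACKUP="cube-backup-${STAMP}.tar"

echo "docker run --rm -v ctms_cube_store:/data -v \$(pwd):/backup" \
     "ubuntu tar cvf /backup/${BACKUP} /data"
```

Pairing this with a cron entry and periodic upload to S3 would give off-instance retention, but that is beyond the scope of this sketch.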
## Updates

### Update Services

```bash
# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d
```

### Zero-Downtime Updates

```bash
# Update one service at a time
docker compose pull zynexa
docker compose up -d --no-deps zynexa
```
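The one-service-at-a-time pattern extends naturally to the whole stack. A sketch that rolls every service in turn — the service names are assumed to match the compose services in this stack, and the loop prints the commands for review rather than running them:

```bash
# Roll each service individually: pull its new image, then recreate only
# that container (--no-deps leaves its dependencies running).
for svc in zynexa sublink cube mcp odm krakend; do
  echo "docker compose pull $svc"
  echo "docker compose up -d --no-deps $svc"
done
```

Pausing between iterations until the service's health check passes would make the roll safer still.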
## Security Best Practices

- Use IAM roles instead of long-lived access keys where possible
- Restrict security groups to only the necessary ports
- Enable CloudWatch for monitoring and alerts
- Store sensitive environment variables in AWS Secrets Manager
- Deploy inside a VPC for network isolation
- Keep Docker and system packages updated