AWS EC2 Deployment Guide

This guide covers deploying the CTMS platform on AWS EC2 with support for both IP-based access (Phase 1) and DNS-based access with HTTPS (Phase 2).

Prerequisites

  • AWS EC2 instance (Ubuntu 22.04 LTS recommended)
  • Docker and Docker Compose installed
  • Security group with required ports open
  • (Optional) Route53 hosted zone for DNS

Architecture

┌─────────────────────────────────────────────────────────────┐
│                      AWS EC2 Instance                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                 Docker Compose Stack                  │  │
│  │                                                       │  │
│  │  ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐     │  │
│  │  │ Zynexa  │ │ Sublink │ │  Cube   │ │   MCP   │     │  │
│  │  │  :3000  │ │  :3001  │ │  :4000  │ │  :8006  │     │  │
│  │  └─────────┘ └─────────┘ └─────────┘ └─────────┘     │  │
│  │                                                       │  │
│  │  ┌─────────┐ ┌─────────┐ ┌─────────────────────────┐ │  │
│  │  │   ODM   │ │ KrakenD │ │     Caddy (Phase 2)     │ │  │
│  │  │  :8001  │ │  :9080  │ │       :80 / :443        │ │  │
│  │  └─────────┘ └─────────┘ └─────────────────────────┘ │  │
│  │                                                       │  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
└─────────────────────────────────────────────────────────────┘
          │                                  │
          │ Phase 1: Direct IP access        │ Phase 2: DNS + HTTPS
          ▼                                  ▼
   http://IP:3000                     https://app.domain.com
   http://IP:4000                     https://cube.domain.com

Phase 1: IP-Based Access

Direct access via EC2 public IP. Suitable for initial testing and client demos.

Security Group Configuration

Open these inbound ports:

Port   Protocol   Description
----   --------   -----------
22     TCP        SSH access
3000   TCP        Zynexa web app
3001   TCP        Sublink mobile app
4000   TCP        Cube.dev API
8001   TCP        ODM API
8006   TCP        MCP Server
9080   TCP        KrakenD API Gateway
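
The ports above can also be opened with the AWS CLI. A minimal sketch, assuming the AWS CLI is configured and that the security group id (`sg-…`, a placeholder here) is known; the helper only prints the calls so they can be reviewed before being piped to `sh`:

```shell
# open_phase1_ports: hypothetical helper that emits one
# `aws ec2 authorize-security-group-ingress` call per Phase 1 port.
# Review the output, then pipe it to `sh` to execute.
open_phase1_ports() {
  local sg_id="$1"
  local port
  for port in 22 3000 3001 4000 8001 8006 9080; do
    echo "aws ec2 authorize-security-group-ingress" \
      "--group-id $sg_id --protocol tcp --port $port --cidr 0.0.0.0/0"
  done
}

# Example (placeholder group id):
# open_phase1_ports sg-0123456789abcdef0 | sh
```

Restrict the `--cidr` range to your own network where possible instead of `0.0.0.0/0`.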

Deployment Steps

# 1. SSH into EC2 instance
ssh -i your-key.pem ubuntu@<EC2_PUBLIC_IP>

# 2. Clone the repository
git clone https://github.com/zynomi/ctms.devops.git
cd ctms.devops

# 3. Set up environment
cp .env.example .env.production
nano .env.production # Edit with your values

# 4. Deploy with Phase 1 configuration
./scripts/install.sh phase1

# Or manually:
docker compose -f docker-compose.yml -f docker-compose.prod.yml --profile all up -d

# For instance-specific deployments:
docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.<instance>.prod --profile all up -d

Instance-Specific Environment Files

For multi-tenant deployments, create instance-specific environment files:

# File naming convention
.env.example # Template (commit-safe)
.env.production # Default production
.env.<instance>.prod # Instance-specific production

# Examples
.env.zynomi.prod # Zynomi instance
.env.clientname.prod # Another instance

Deploy with the appropriate env file using --env-file:

docker compose -f docker-compose.yml -f docker-compose.prod.yml \
  --env-file .env.zynomi.prod \
  --profile all up -d
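
A small guard before the compose command catches a missing env file early. This is a hypothetical helper (not part of the repo) that maps an instance name to its env file and fails loudly if the file does not exist:

```shell
# pick_env_file: hypothetical helper; maps an instance name to its
# .env.<instance>.prod file and returns an error if it is missing.
pick_env_file() {
  local instance="$1"
  local env_file=".env.${instance}.prod"
  if [ ! -f "$env_file" ]; then
    echo "missing ${env_file}" >&2
    return 1
  fi
  echo "$env_file"
}

# Example:
# docker compose -f docker-compose.yml -f docker-compose.prod.yml \
#   --env-file "$(pick_env_file zynomi)" --profile all up -d
```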

Access URLs (Phase 1)

Service       URL
-------       ---
Zynexa        http://<EC2_IP>:3000
Sublink       http://<EC2_IP>:3001
Cube.dev      http://<EC2_IP>:4000
MCP Server    http://<EC2_IP>:8006
ODM API       http://<EC2_IP>:8001
API Gateway   http://<EC2_IP>:9080

Phase 2: DNS with HTTPS

Production deployment with Route53 DNS and Let's Encrypt HTTPS certificates.

Prerequisites

  1. Register a domain (e.g., ctms.example.com)
  2. Create a Route53 hosted zone
  3. Point nameservers to Route53

Route53 Configuration

Create the following A records pointing to your EC2 public IP:

Record                     Type   Value
------                     ----   -----
app.ctms.example.com       A      <EC2_IP>
api.ctms.example.com       A      <EC2_IP>
mobile.ctms.example.com    A      <EC2_IP>
cube.ctms.example.com      A      <EC2_IP>
mcp.ctms.example.com       A      <EC2_IP>
odm.ctms.example.com       A      <EC2_IP>
observe.ctms.example.com   A      <EC2_IP>
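
The records can be created with `aws route53 change-resource-record-sets`. As a sketch, the helper below builds the change-batch JSON for one A record; the hosted zone id and the TTL of 300 are assumptions, not repo values:

```shell
# route53_a_record_batch: hypothetical helper that emits the change-batch
# JSON for a single A record, for use with:
#   aws route53 change-resource-record-sets --hosted-zone-id <ZONE_ID> \
#     --change-batch "$(route53_a_record_batch app.ctms.example.com <EC2_IP>)"
route53_a_record_batch() {
  local name="$1" ip="$2"
  cat <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${name}",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{"Value": "${ip}"}]
    }
  }]
}
EOF
}
```

`UPSERT` creates the record if absent and updates it otherwise, so the helper is safe to re-run after the EC2 IP changes.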

Security Group Configuration

For Phase 2, update the security group so that only these ports are exposed:

Port   Protocol   Description
----   --------   -----------
22     TCP        SSH access
80     TCP        HTTP (redirects to HTTPS)
443    TCP        HTTPS

Deployment Steps

# 1. Set environment variables
export DOMAIN=ctms.example.com

# 2. Run Phase 2 deployment
./scripts/install.sh phase2

# Or manually:
# Update Caddyfile with production config
cp caddy/Caddyfile.prod caddy/Caddyfile

# Start services
docker compose -f docker-compose.yml --profile all up -d
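
The file caddy/Caddyfile.prod shipped in the repo is the source of truth for the HTTPS setup. As a sketch of the shape it needs (the upstream container names here are assumptions), each subdomain reverse-proxies to its service, and Caddy handles certificates automatically:

```
# Hypothetical sketch only; the real config is caddy/Caddyfile.prod.
# Upstream container names are assumptions.
app.{$DOMAIN} {
    reverse_proxy zynexa:3000
}
mobile.{$DOMAIN} {
    reverse_proxy sublink:3001
}
cube.{$DOMAIN} {
    reverse_proxy cube:4000
}
# ...and similarly for api, mcp, odm, observe.
```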

Access URLs (Phase 2)

Service       URL
-------       ---
Zynexa        https://app.ctms.example.com
Sublink       https://mobile.ctms.example.com
Cube.dev      https://cube.ctms.example.com
MCP Server    https://mcp.ctms.example.com
ODM API       https://odm.ctms.example.com
API Gateway   https://api.ctms.example.com

Environment Variables

Create .env.production with these variables:

# Domain Configuration
DOMAIN=ctms.example.com

# Supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-key

# Frappe Cloud
FRAPPE_URL=https://your-site.frappe.cloud
FRAPPE_API_TOKEN=your-api-token

# Database
DATABASE_URL=postgresql://user:pass@host:5432/ctms

# KrakenD
KRAKEND_PORT=9080
KRAKEND_TIMEOUT=3000
KRAKEND_LOG_LEVEL=INFO

# AI Services
NEXT_PUBLIC_CUBE_API_URL=https://cube.ctms.example.com
NEXT_PUBLIC_MCP_API_ENDPOINT=https://mcp.ctms.example.com
NEXT_PUBLIC_ODM_API_ENDPOINT=https://odm.ctms.example.com

Deployment Script

The install.sh script automates the deployment process:

#!/bin/bash
# Usage: ./scripts/install.sh [phase1|phase2]

PHASE=${1:-phase1}
DOMAIN=${DOMAIN:-localhost}

case $PHASE in
  phase1)
    echo "Deploying Phase 1: IP-based access"
    docker compose -f docker-compose.yml -f docker-compose.prod.yml \
      --profile all up -d
    ;;
  phase2)
    echo "Deploying Phase 2: DNS with HTTPS"
    cp caddy/Caddyfile.prod caddy/Caddyfile
    docker compose -f docker-compose.yml --profile all up -d
    ;;
  *)
    echo "Usage: $0 [phase1|phase2]" >&2
    exit 1
    ;;
esac

Health Checks

Verify services are running:

# Check container status
docker compose ps

# Check service health
curl http://localhost:3000/api/health
curl http://localhost:4000/readyz
curl http://localhost:9080/__health
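
Right after `up -d`, services may not answer immediately. A small retry wrapper avoids false alarms; this is a hypothetical helper, not part of the repo:

```shell
# wait_healthy: hypothetical helper that retries a health command until
# it succeeds or the attempt budget is exhausted (one second apart).
wait_healthy() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@" >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep 1
  done
  echo "unhealthy after $attempts attempt(s)" >&2
  return 1
}

# Usage against the Phase 1 endpoints:
# wait_healthy 30 curl -fsS http://localhost:3000/api/health
# wait_healthy 30 curl -fsS http://localhost:9080/__health
```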

Monitoring

View Logs

# All services
docker compose logs -f

# Specific service
docker compose logs -f zynexa

# Last 100 lines
docker compose logs --tail=100 -f

OpenObserve Dashboard

Access the observability dashboard at:

  • Phase 1: http://<EC2_IP>:5080
  • Phase 2: https://observe.ctms.example.com

Default login: admin@ctms.local / Admin@123 (change these credentials after first login)

Troubleshooting

HTTPS Certificate Issues

Caddy obtains Let's Encrypt certificates automatically. If certificate issuance fails:

  1. Ensure ports 80/443 are open in the security group
  2. Verify the DNS records resolve to the EC2 public IP
  3. Check the Caddy logs: docker compose logs caddy

Container Crashes

# View logs for crashed container
docker compose logs zynexa

# Restart specific service
docker compose restart zynexa

Memory Issues

For smaller EC2 instances, you may need to run only a subset of services:

# Start only essential services
docker compose --profile core up -d

# Add analytics later if needed
docker compose --profile analytics up -d

Backup and Recovery

Backup Data

# Backup Docker volumes
docker run --rm -v ctms_cube_store:/data -v $(pwd):/backup \
  ubuntu tar cvf /backup/cube-backup.tar /data
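
To back up several volumes with dated archive names, the pattern above can be generated per volume. A hypothetical helper (any volume name beyond ctms_cube_store is an assumption); it only prints the commands so they can be reviewed before piping to `sh`:

```shell
# backup_volume_cmds: hypothetical helper that emits one dated
# docker-run backup command per named volume passed as an argument.
backup_volume_cmds() {
  local stamp vol
  stamp=$(date +%Y%m%d)
  for vol in "$@"; do
    echo "docker run --rm -v ${vol}:/data -v \$(pwd):/backup" \
      "ubuntu tar cvf /backup/${vol}-${stamp}.tar /data"
  done
}

# Review, then execute:
# backup_volume_cmds ctms_cube_store | sh
```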

Restore Data

# Restore Docker volumes
docker run --rm -v ctms_cube_store:/data -v $(pwd):/backup \
  ubuntu tar xvf /backup/cube-backup.tar -C /

Updates

Update Services

# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d

Zero-Downtime Updates

# Update one service at a time
docker compose pull zynexa
docker compose up -d --no-deps zynexa

Security Best Practices

  1. Use IAM roles instead of access keys when possible
  2. Restrict security groups to only necessary ports
  3. Enable CloudWatch for monitoring and alerts
  4. Use Secrets Manager for sensitive environment variables
  5. Enable VPC for network isolation
  6. Regular updates - Keep Docker and system packages updated