# Debugging & Troubleshooting
This guide covers common issues and their solutions when deploying and running the Zynomi platform.
## Common Issues

### Authentication Issues

**Problem**: Users cannot log in

**Symptoms**:
- Login fails with "Invalid credentials"
- Session expires immediately

**Solutions**:
- Verify the Supabase configuration:

  ```bash
  # Check environment variables
  echo $NEXT_PUBLIC_SUPABASE_URL
  echo $NEXT_PUBLIC_SUPABASE_ANON_KEY
  ```

- Check the Supabase dashboard for authentication logs
- Verify that RLS policies are correctly configured

**Problem**: JWT token expired

**Solutions**:
- Ensure the refresh-token flow is implemented
- Check the token expiration settings in Supabase
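A quick way to confirm that a login failure is caused by an expired access token is to decode the JWT's `exp` claim client-side. A minimal sketch in TypeScript; the helper name and leeway parameter are illustrative, not part of the codebase, and no signature verification is performed (this is a diagnostic check only):

```typescript
// Decode a JWT payload (base64url) and report whether the token is expired,
// or will expire within `leewaySeconds`. Diagnostic only: the signature is
// NOT verified here.
function isJwtExpired(token: string, leewaySeconds = 0, nowMs = Date.now()): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) throw new Error("not a JWT");
  const payloadJson = Buffer.from(parts[1], "base64url").toString("utf8");
  const payload = JSON.parse(payloadJson) as { exp?: number };
  if (payload.exp === undefined) return false; // no expiry claim: treat as non-expiring
  // `exp` is in seconds since the epoch; compare in milliseconds.
  return payload.exp * 1000 <= nowMs + leewaySeconds * 1000;
}
```

If this returns `true` while the user is active, the refresh-token flow is likely not running.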
### Database Connection Issues

**Problem**: Cannot connect to database

**Symptoms**:
- "Connection refused" errors
- Timeouts on database queries

**Solutions**:
- Verify the database URL format:

  ```
  postgresql://<DB_USER>:<DB_PASSWORD>@<DB_HOST>:<DB_PORT>/<DB_NAME>
  ```

- Check network/firewall settings
- Verify that the Supabase project is active
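Malformed connection strings are a common cause of "connection refused" errors. The URL format above can be sanity-checked with Node's WHATWG `URL` parser, which handles `user:password@host:port/db` authorities; a minimal sketch (the function name is illustrative):

```typescript
// Parse a postgresql:// connection string and return a list of structural
// problems. An empty array means the URL looks complete.
function checkDatabaseUrl(raw: string): string[] {
  const problems: string[] = [];
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return ["not a parseable URL"];
  }
  if (url.protocol !== "postgresql:" && url.protocol !== "postgres:")
    problems.push(`unexpected scheme "${url.protocol}"`);
  if (!url.username) problems.push("missing DB_USER");
  if (!url.password) problems.push("missing DB_PASSWORD");
  if (!url.hostname) problems.push("missing DB_HOST");
  if (!url.port) problems.push("missing DB_PORT (Postgres defaults to 5432)");
  if (url.pathname === "" || url.pathname === "/") problems.push("missing DB_NAME");
  return problems;
}
```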
### API Gateway Issues

**Problem**: KrakenD returns 502 Bad Gateway

**Solutions**:
- Check backend service health:

  ```bash
  curl https://your-site.frappe.cloud/api/method/ping
  ```

- Verify the KrakenD configuration:

  ```bash
  krakend check -c krakend.json
  ```

- Review the KrakenD logs:

  ```bash
  fly logs -a your-krakend-app
  ```
### DNS Resolution & API URL Issues

**Problem**: Login returns 404 or "User profile not found in Frappe"

**Symptoms**:
- `POST /api/login` returns 404
- Error message: "User profile not found in Frappe"
- Browser DevTools shows the request reaching the server, but the Frappe profile lookup fails
**Root Cause**:

The Zynexa Next.js container uses two separate API URLs:

| Variable | Used By | Purpose |
|---|---|---|
| `RUNTIME_API_BASE_URL` | Server-side (Node.js) | Login handler, SSR data fetching |
| `RUNTIME_API_CLIENT_URL` | Client-side (browser JS) | Permission grid, RBAC lookups |

In Docker, the server-side URL must resolve from inside the container. If it points to an external hostname (e.g., `ctms.example.com`) that the container cannot resolve via DNS, all server-side Frappe calls will fail.
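The split between the two URLs can be made explicit in code. A hedged sketch of how such a helper might look in the Next.js app; the function and its error messages are illustrative, and the actual implementation in Zynexa may differ:

```typescript
// Pick the API base URL depending on where the code is executing.
// Server-side code (Node.js) must use a URL resolvable from INSIDE the
// container; browser code must use a URL resolvable from the user's machine.
function pickApiBase(
  env: { serverUrl?: string; clientUrl?: string },
  isServer: boolean
): string {
  const url = isServer ? env.serverUrl : env.clientUrl;
  if (!url) {
    throw new Error(
      isServer
        ? "RUNTIME_API_BASE_URL is not set (server-side Frappe calls will fail)"
        : "RUNTIME_API_CLIENT_URL is not set (browser API calls will fail)"
    );
  }
  return url.replace(/\/$/, ""); // normalize: strip a trailing slash
}
```

In a real Next.js component, the `isServer` argument would typically come from a `typeof window === "undefined"` check.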
**Solutions**:
- Set the server-side URL to the container's own localhost:

  ```bash
  # .env file
  NEXT_PUBLIC_API_BASE_URL=http://localhost:3000/api/v1
  ```

- Or add the hostname to Docker's `extra_hosts` so the container can resolve it:

  ```yaml
  # docker-compose.yml
  services:
    zynexa:
      extra_hosts:
        - "ctms.example.com:host-gateway"
  ```

- Verify resolution from inside the container:

  ```bash
  docker exec ctms-zynexa sh -c "wget -q -O- http://localhost:3000/api/health"
  ```
**Problem**: Permissions page shows only 20 resources instead of 24

**Symptoms**:
- The permissions management page at `/management/permissions` loads but shows incomplete data
- Only 20 resources visible, with checkboxes unchecked for some permission groups
- Browser DevTools shows API calls succeeding with 200 status

**Root Cause**:

The generic `/api/v1/doctype/[entity]` route handler was reading the query parameter `page_length`, but the frontend sends `limit_page_length` (Frappe's native parameter name). Since the parameter was never matched, Frappe applied its default limit of 20 records.
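The fix amounts to accepting Frappe's native parameter name first and falling back to the legacy one. A minimal sketch of that fallback logic; this is illustrative, not the exact route-handler code:

```typescript
// Resolve the requested page length from incoming query parameters.
// Prefer Frappe's native `limit_page_length`, fall back to the legacy
// `page_length`, otherwise return undefined so Frappe applies its default (20).
function resolvePageLength(query: URLSearchParams): number | undefined {
  const raw = query.get("limit_page_length") ?? query.get("page_length");
  if (raw === null) return undefined;
  const n = Number(raw);
  return Number.isFinite(n) && n >= 0 ? n : undefined;
}
```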
**Solution**:

This was fixed in commit 893a612: the route handler now reads `limit_page_length` first, with `page_length` as a fallback. Ensure your Docker image is up to date:

```bash
docker pull --platform linux/amd64 zynomi/zynexa:latest
docker compose --env-file .env.production up -d zynexa
```
**Problem**: Browser API calls go to wrong domain (cross-origin)

**Symptoms**:
- Browsing at `https://zynexa.localhost`, but the network tab shows requests to `https://ctms.example.com`
- SSL certificate errors or CORS failures
- "Provisional headers shown" in DevTools

**Root Cause**:

`RUNTIME_API_CLIENT_URL` (the browser-side API URL) points to a different domain than the one you're browsing. The browser makes cross-origin requests, which may fail due to self-signed certificates or CORS.
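Whether a configured API URL will trigger cross-origin requests can be checked by comparing origins (scheme + host + port), which is exactly what the browser does. A small illustrative helper:

```typescript
// Returns true when the API URL points at a different origin than the page,
// i.e. the browser will treat requests to it as cross-origin and CORS applies.
function isCrossOrigin(pageUrl: string, apiUrl: string): boolean {
  return new URL(pageUrl).origin !== new URL(apiUrl).origin;
}
```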
**Solution**:

Set `NEXT_PUBLIC_API_CLIENT_URL` to match your browsing domain:

```bash
# If browsing at https://zynexa.localhost
NEXT_PUBLIC_API_CLIENT_URL=https://zynexa.localhost/api/v1

# If browsing at https://ctms.example.com
NEXT_PUBLIC_API_CLIENT_URL=https://ctms.example.com/api/v1
```

Then recreate the container:

```bash
docker compose --env-file .env.production up -d zynexa
```
## Deployment Issues

**Problem**: PLACEHOLDER_WILL_BE_PATCHED appears in runtime URLs

**Symptoms**:
- Browser shows API calls to `http://placeholder_will_be_patched:9080/api/v1/...`
- The Zynexa container's `RUNTIME_*` env vars contain `PLACEHOLDER_WILL_BE_PATCHED` instead of the server IP

**Root Cause**:

`EC2_PUBLIC_IP=PLACEHOLDER_WILL_BE_PATCHED` was never patched in `.env.production` on the server. This typically happens when someone runs `git reset --hard origin/main` on the server, which overwrites the patched `.env.production` with the template.
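A pre-flight check that scans env-file content for unpatched placeholders can catch this before containers are recreated. A hedged sketch; the placeholder strings come from this guide, but the function itself is illustrative:

```typescript
// Template placeholder markers used by this deployment (from this guide).
const PLACEHOLDERS = ["PLACEHOLDER_WILL_BE_PATCHED", "WILL_BE_REPLACED_AFTER_FRAPPE_SETUP"];

// Scan .env-style file content and return the keys whose values still
// contain a known template placeholder. An empty result means the file
// looks fully patched.
function findUnpatchedKeys(envContent: string): string[] {
  return envContent
    .split("\n")
    .filter((line) => line.includes("=") && !line.trimStart().startsWith("#"))
    .filter((line) => PLACEHOLDERS.some((p) => line.includes(p)))
    .map((line) => line.split("=")[0].trim());
}
```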
**Solutions**:
- Re-patch the env file and recreate the containers:

  ```bash
  cd /opt/ctms-deployment
  sed -i "s|PLACEHOLDER_WILL_BE_PATCHED|YOUR_SERVER_IP|g" .env.production .env
  docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production up -d --force-recreate
  ```

- Prevention: Always use `./install.sh update` instead of a manual `git reset --hard`; it re-patches all environment variables automatically after copying the template.
**Problem**: Frappe AuthenticationError on user signup

**Symptoms**:
- Zynexa logs show `frappe.exceptions.AuthenticationError`
- User creation via the web app fails

**Root Cause**:

`FRAPPE_API_TOKEN` is still the template placeholder (`PLACEHOLDER:WILL_BE_REPLACED_AFTER_FRAPPE_SETUP`) or is an invalid token.
**Solutions**:
- Generate a fresh Frappe API token:

  ```bash
  docker exec frappe-marley-health-backend-1 bench execute \
    frappe.core.doctype.user.user.generate_keys --args "['Administrator']"
  ```

- Verify that it works:

  ```bash
  curl -s -H "Authorization: token <api_key>:<api_secret>" \
    http://localhost:8080/api/method/frappe.auth.get_logged_user
  ```

- Patch it into the env files and recreate the containers:

  ```bash
  sed -i 's|FRAPPE_API_TOKEN=.*|FRAPPE_API_TOKEN="<api_key>:<api_secret>"|' .env.production .env
  docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production up -d --force-recreate api-gateway zynexa
  ```
**Problem**: SUPABASE_SERVICE_ROLE_KEY is not configured

**Symptoms**:
- Zynexa logs show `SUPABASE_SERVICE_ROLE_KEY is not configured`
- Server-side Supabase operations fail

**Root Cause**:

The `SUPABASE_SERVICE_ROLE_KEY` environment variable is missing or empty in `.env.production`. The `install.sh` deploy script auto-extracts this from `supabase/.env` in Step 4b, but it may be missing if:
- You deployed before this fix was added
- `supabase/.env` doesn't exist yet when the extraction runs
**Solution**:

```bash
cd /opt/ctms-deployment
SRK=$(grep '^SERVICE_ROLE_KEY=' supabase/.env | cut -d= -f2-)
sed -i "s|SUPABASE_SERVICE_ROLE_KEY=.*|SUPABASE_SERVICE_ROLE_KEY=$SRK|" .env.production .env
docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production up -d --force-recreate zynexa
```
**Problem**: External access blocked (Hetzner Cloud Firewall)

**Symptoms**:
- All services respond correctly when curled from inside the server (`curl localhost:3000`)
- External access from a browser times out (HTTP 000 / connection refused)
- Server-level `iptables -L` shows `INPUT ACCEPT`, so no local firewall is blocking

**Root Cause**:

Cloud providers like Hetzner have a cloud-level firewall that is external to the VM. Even if the VM's own firewall is open, the cloud firewall blocks traffic before it reaches the server.
**Solutions**:
- **Hetzner Cloud Console**: Go to Firewalls → select the firewall attached to your server → add inbound rules for TCP ports `22, 80, 443, 3000, 3001, 4000, 5080, 8000, 8001, 8006, 8080, 9080` from source `0.0.0.0/0`
- **hcloud CLI** (if installed on the server with an API token):

  ```bash
  hcloud firewall add-rule <firewall-id> --direction in --protocol tcp \
    --port 3000 --source-ips 0.0.0.0/0 --source-ips ::/0
  ```

- **Remove the firewall entirely** (for dev/test environments): Hetzner Console → Firewalls → select → Actions → Delete

On AWS, the equivalent is Security Groups. Open the same ports in your EC2 instance's security group.
**Problem**: Vercel deployment fails

**Solutions**:
- Check the build logs in the Vercel dashboard
- Verify that all environment variables are set
- Test the build locally:

  ```bash
  npm run build
  ```
**Problem**: Docker container won't start

**Solutions**:
- Check the container logs:

  ```bash
  docker logs container-name
  ```

- Verify the Dockerfile syntax
- Ensure all dependencies are installed
## Deploying KrakenD

This guide walks through deploying KrakenD on Fly.io.

### Prerequisites

- A Fly.io account
- The `flyctl` command-line tool installed
- KrakenD configuration files ready

### Steps

1. **Prepare the KrakenD configuration.** Ensure your KrakenD configuration is ready and the necessary files are in the `configs` directory.

2. **Navigate to the KrakenD directory:**

   ```bash
   cd krakend
   ```

3. **Initialize the Fly.io application:**

   ```bash
   flyctl auth login
   flyctl launch
   ```

4. **Destroy the previous instance (if needed):**

   ```bash
   fly apps destroy krakend
   ```

5. **Deploy the new image:**

   ```bash
   flyctl deploy
   ```
## Logging and Monitoring

### Enable Debug Logging

Add to your environment:

```bash
DEBUG=true
LOG_LEVEL=debug
```

### View Logs

Vercel:

```bash
vercel logs your-app --follow
```

Fly.io:

```bash
fly logs -a your-app
```

Docker:

```bash
docker logs -f container-name
```
## Health Checks

### API Health Check

```bash
curl https://api.your-domain.com/__health
```

### Database Health Check

```bash
curl https://your-project.supabase.co/rest/v1/ \
  -H "apikey: YOUR_ANON_KEY"
```

### Frappe Health Check

```bash
curl https://your-site.frappe.cloud/api/method/ping
```
## Getting Help
If you're still experiencing issues:
- Check the project issue tracker
- Review the API Reference documentation
- Contact support at support@zynomi.com