Deployment

This guide covers how to take Changemaker Lite from a local development setup to a publicly accessible production deployment. The main decision is how to expose your services to the internet.

Architecture Overview

Regardless of which exposure method you choose, the internal architecture is the same:

Internet → [Your exposure method] → Nginx (port 80) → Backend Services

Nginx handles all subdomain routing internally. Every service is accessed through nginx on port 80, which proxies to the correct container based on the Host header.

| Subdomain | Service | Container Port |
|---|---|---|
| app.DOMAIN | Admin GUI + public pages | 3000 |
| api.DOMAIN | Express API | 4000 |
| media.DOMAIN | Fastify Media API | 4100 |
| DOMAIN (root) | MkDocs documentation site | 4004 |
| db.DOMAIN | NocoDB | 8091 |
| docs.DOMAIN | MkDocs live preview | 4003 |
| code.DOMAIN | Code Server | 8888 |
| git.DOMAIN | Gitea | 3030 |
| n8n.DOMAIN | Workflow automation | 5678 |
| home.DOMAIN | Homepage dashboard | 3010 |
| listmonk.DOMAIN | Newsletter manager | 9001 |
| mail.DOMAIN | MailHog (dev email) | 8025 |
| qr.DOMAIN | Mini QR generator | 8089 |
| draw.DOMAIN | Excalidraw whiteboard | 8090 |
| vault.DOMAIN | Vaultwarden password manager | 8445 |
| chat.DOMAIN | Rocket.Chat team chat | |
| events.DOMAIN | Gancio event management | 8092 |
| grafana.DOMAIN | Monitoring dashboards | 3005 |
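
The Host-to-port mapping above is the whole routing contract. As a minimal sketch (covering only a subset of the table, with hypothetical domains), nginx's behavior amounts to:

```shell
# Minimal sketch of the routing nginx performs: map the Host header's
# subdomain to a backend container port. Subset of the full table above.
route() {
  case $1 in
    app.*)   echo 3000 ;;  # Admin GUI
    api.*)   echo 4000 ;;  # Express API
    media.*) echo 4100 ;;  # Fastify Media API
    db.*)    echo 8091 ;;  # NocoDB
    git.*)   echo 3030 ;;  # Gitea
    *)       echo 4004 ;;  # root domain -> MkDocs documentation site
  esac
}

route app.yourdomain.org   # -> 3000
route yourdomain.org       # -> 4004
```

Because every exposure method below terminates at nginx:80, this mapping never needs to be duplicated in the tunnel or DNS layer.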

Exposure Methods

Option 1: Pangolin + Newt Tunnel (Recommended)

Admin GUI: Tunnel Management Page

The admin dashboard includes a dedicated Tunnel Management page at Admin → Settings → Tunnel. This page provides:

  • Live status of the Pangolin connection and Newt container health
  • Step-by-step setup instructions if credentials aren't configured yet
  • Full resource table listing every service, its domain, and target — useful as a reference when creating resources in the Pangolin dashboard
  • API-based site creation as an alternative to the Pangolin dashboard UI
  • Restart Newt button for quick container restarts without the terminal

If you're unsure about any step above, the Tunnel page walks you through the same process interactively.

Pangolin is a self-hosted tunnel server. The Newt client container runs alongside your stack and establishes an outbound connection to your Pangolin server, which then routes public traffic back through the tunnel. No port forwarding or static IP required.

Advantages:

  • No port forwarding needed on your router/firewall
  • Works behind CGNAT, double NAT, or restrictive networks
  • SSL/TLS handled by the Pangolin server
  • Self-hosted — you control the tunnel infrastructure
  • Built-in access control (optional per-resource authentication)

Requirements:

  • A Pangolin server (self-hosted on a VPS with a public IP)
  • A domain with DNS pointing to the Pangolin server
  • Pangolin API key and organization ID

Step 1: Configure Pangolin Credentials

If you used config.sh, you may have already set these. Otherwise, add to your .env:

PANGOLIN_API_URL=https://api.your-pangolin-server.org/v1
PANGOLIN_API_KEY=your_api_key_here
PANGOLIN_ORG_ID=your_org_id
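
A quick pre-flight check can catch missing credentials before anything starts. This is a sketch against an illustrative file in /tmp, not part of the shipped tooling; point it at your real .env in practice:

```shell
# Verify the three Pangolin credentials are present and non-empty in an
# env file. The sample file below is illustrative.
cat > /tmp/pangolin.env <<'EOF'
PANGOLIN_API_URL=https://api.your-pangolin-server.org/v1
PANGOLIN_API_KEY=your_api_key_here
PANGOLIN_ORG_ID=your_org_id
EOF

missing=0
for var in PANGOLIN_API_URL PANGOLIN_API_KEY PANGOLIN_ORG_ID; do
  # "^VAR=." requires at least one character after the equals sign
  grep -q "^${var}=." /tmp/pangolin.env || { echo "Missing: $var"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "All Pangolin credentials set"
```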

Step 2: Create a Site in Pangolin

Log in to your Pangolin dashboard and create a new site:

  1. Navigate to Sites → Create New Site
  2. Choose type: Newt
  3. Enter a name (e.g., changemaker-yourdomain.org)
  4. Choose a subnet (e.g., 100.90.128.3/24)
  5. Select an exit node (if applicable)
  6. Click Create Site
  7. Copy the credentials — you'll need the Site ID, Newt ID, and Newt Secret

Save the credentials

The Newt Secret is only shown once during site creation. Copy it immediately.

Step 3: Update .env with Site Credentials

PANGOLIN_SITE_ID=your_site_id
PANGOLIN_ENDPOINT=https://your-pangolin-server.org
PANGOLIN_NEWT_ID=your_newt_id
PANGOLIN_NEWT_SECRET=your_newt_secret

Step 4: Start the Newt Container

docker compose up -d newt

The Newt container connects to nginx (its only dependency) and establishes the tunnel:

# From docker-compose.yml
newt:
  image: fosrl/newt
  container_name: newt-changemaker
  restart: unless-stopped
  environment:
    - PANGOLIN_ENDPOINT=${PANGOLIN_ENDPOINT}
    - NEWT_ID=${PANGOLIN_NEWT_ID}
    - NEWT_SECRET=${PANGOLIN_NEWT_SECRET}
  depends_on:
    - nginx

Verify the connection:

docker compose logs newt --tail 20

You should see a successful connection message.

Step 5: Create Public HTTP Resources

In the Pangolin dashboard, create an HTTP resource for each service you want exposed. All resources point to nginx:80 — nginx handles the routing internally.

Required resources (minimum for a working deployment):

| Resource Name | Domain | Target | Auth |
|---|---|---|---|
| Admin GUI | app.yourdomain.org | nginx:80 | Not Protected |
| API Server | api.yourdomain.org | nginx:80 | Not Protected |
| Public Site | yourdomain.org | nginx:80 | Not Protected |

Optional resources (add as needed):

| Resource Name | Domain | Target |
|---|---|---|
| Media API | media.yourdomain.org | nginx:80 |
| NocoDB | db.yourdomain.org | nginx:80 |
| Documentation | docs.yourdomain.org | nginx:80 |
| Code Server | code.yourdomain.org | nginx:80 |
| Gitea | git.yourdomain.org | nginx:80 |
| Grafana | grafana.yourdomain.org | nginx:80 |

Set resources to Not Protected

By default, Pangolin may enable authentication on new resources. This causes 302 redirects to the Pangolin login page instead of reaching your services. Set each resource to Not Protected (public access) unless you intentionally want Pangolin SSO in front of it.

Step 6: Update CORS for Production

Add your production domain to CORS_ORIGINS in .env:

CORS_ORIGINS=https://app.yourdomain.org,http://localhost:3000,http://localhost

Then restart the API:

docker compose restart api
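
Editing CORS_ORIGINS by hand is easy to get wrong in scripts. A sketch that prepends the production origin only if it is not already listed (the /tmp path stands in for your real .env, and assumes GNU sed):

```shell
# Prepend a production origin to CORS_ORIGINS, idempotently.
cat > /tmp/cors.env <<'EOF'
CORS_ORIGINS=http://localhost:3000,http://localhost
EOF

origin="https://app.yourdomain.org"
if ! grep -q "$origin" /tmp/cors.env; then
  # "|" as the sed delimiter avoids escaping the slashes in the URL
  sed -i "s|^CORS_ORIGINS=|CORS_ORIGINS=${origin},|" /tmp/cors.env
fi
grep '^CORS_ORIGINS=' /tmp/cors.env
```

Running it a second time leaves the file unchanged, which makes it safe to bake into a deploy script.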

Step 7: Verify

# Should return JSON (not a 302 redirect)
curl https://api.yourdomain.org/api/health

# Admin GUI should load
curl -I https://app.yourdomain.org
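
For unattended deploys, the two checks above can be wrapped in a small retry loop, since the tunnel may take a few seconds to come up. wait_for is a hypothetical helper, not part of the repo's scripts:

```shell
# Retry a command until it succeeds or the attempt budget runs out.
wait_for() {
  cmd=$1
  tries=${2:-5}
  i=1
  while [ "$i" -le "$tries" ]; do
    if eval "$cmd" >/dev/null 2>&1; then
      echo "ok after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "failed after $tries attempts"
  return 1
}

# In production you would call, e.g.:
#   wait_for "curl -fsS https://api.yourdomain.org/api/health" 30
```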

Option 2: Cloudflare Tunnel

Cloudflare Tunnel (cloudflared) provides a similar zero-trust tunnel approach using Cloudflare's network. No port forwarding needed, and you get Cloudflare's CDN and DDoS protection.

Advantages:

  • Free tier available
  • Built-in CDN and DDoS protection
  • No port forwarding needed
  • Managed SSL certificates

Disadvantages:

  • Proprietary service (not self-hosted)
  • Cloudflare sees all traffic (no end-to-end encryption to your origin)
  • Subject to Cloudflare's Terms of Service

Setup

  1. Create a Cloudflare Tunnel in the Zero Trust dashboard

  2. Add a cloudflared service to your docker-compose.yml:

    cloudflared:
      image: cloudflare/cloudflared:latest
      container_name: cloudflared-changemaker
      restart: unless-stopped
      command: tunnel run
      environment:
        - TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}
      depends_on:
        - nginx
      networks:
        - changemaker-lite
    
  3. Add your tunnel token to .env:

    CLOUDFLARE_TUNNEL_TOKEN=your_tunnel_token_here
    
  4. Configure public hostnames in the Cloudflare dashboard, all pointing to http://nginx:80:

    | Hostname | Service |
    |---|---|
    | app.yourdomain.org | http://nginx:80 |
    | api.yourdomain.org | http://nginx:80 |
    | yourdomain.org | http://nginx:80 |
    | (add more as needed) | http://nginx:80 |
  5. Start the tunnel:

    docker compose up -d cloudflared
    

Note

The cloudflared service is not included in the default docker-compose.yml. Add it manually if you choose this method. The Newt service can be removed or left stopped.


Option 3: Direct DNS + Reverse Proxy

If your server has a public IP address (e.g., a VPS or dedicated server), you can point DNS directly to it and use nginx with SSL certificates.

Advantages:

  • No tunnel overhead or third-party dependency
  • Full control over the network path
  • Lowest latency

Disadvantages:

  • Requires a public IP and open ports (80, 443)
  • You manage SSL certificates yourself
  • Server IP is exposed

Setup

  1. Point DNS for your domain and all subdomains to your server's IP:

    A     yourdomain.org        → YOUR_SERVER_IP
    A     *.yourdomain.org      → YOUR_SERVER_IP
    

    Or use individual A records for each subdomain if your DNS provider doesn't support wildcards.

  2. Open ports 80 and 443 on your server's firewall.

  3. Install Certbot (or another ACME client) for SSL certificates:

    # Ubuntu/Debian
    sudo apt install certbot
    
    # Get a wildcard certificate with DNS challenge
    sudo certbot certonly --manual --preferred-challenges dns \
      -d yourdomain.org -d '*.yourdomain.org'
    

    Alternatively, use the Certbot Docker image or a Let's Encrypt companion container.

  4. Update nginx to listen on 443 with your certificates. Add an SSL server block to nginx/conf.d/ssl.conf:

    server {
        listen 443 ssl;
        server_name app.yourdomain.org;
    
        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    
        location / {
            proxy_pass http://changemaker-v2-admin:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
    
    # Repeat for api.yourdomain.org, media.yourdomain.org, etc.
    # Or use a single server block with $host matching
    
  5. Mount certificates into the nginx container via docker-compose.yml:

    nginx:
      volumes:
        - /etc/letsencrypt/live/yourdomain.org:/etc/nginx/ssl:ro
    
  6. Set up auto-renewal with a cron job or systemd timer:

    0 3 * * * certbot renew --quiet && docker compose restart nginx
    

Traefik alternative

If you prefer automatic SSL and don't want to manage nginx SSL config manually, consider replacing nginx with Traefik. Traefik can auto-discover Docker containers and provision Let's Encrypt certificates automatically. This would require adapting the container labels and removing the nginx service.


Option 4: Tailscale / WireGuard (Private Access)

For deployments that should only be accessible to specific people (not the general public), a mesh VPN like Tailscale or plain WireGuard gives you private networking without exposing anything to the internet.

Use cases:

  • Internal team deployments
  • Development/staging servers
  • Access from mobile devices without public exposure

Tailscale Setup

  1. Install Tailscale on your server and client devices
  2. Access services via Tailscale IP (e.g., http://100.x.x.x:3000)
  3. Optionally use Tailscale Funnel to selectively expose specific services publicly

WireGuard Setup

  1. Set up a WireGuard server on your host
  2. Connect client devices via WireGuard config
  3. Access services via the WireGuard interface IP

Note

With private access methods, you may not need subdomain routing at all. Access services directly by port: http://server-ip:3000 (admin), http://server-ip:4000 (API), etc.


Production Checklist

Before going live, verify each item:

Security

  • All placeholder passwords changed (grep -c "REQUIRED_STRONG" .env should return 0)
  • NODE_ENV=production set in .env
  • ENCRYPTION_KEY set and differs from JWT secrets
  • EMAIL_TEST_MODE=false (unless you want MailHog in production)
  • CORS_ORIGINS includes your production domain
  • Admin password changed after first login
  • Redis password set (REDIS_PASSWORD)
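
Most of the Security items above can be checked mechanically. A sketch, run here against an illustrative env file (swap in your real .env and extend the checks as needed):

```shell
# Automated pass over the security checklist. /tmp/prod.env is a stand-in
# for the real .env; values are illustrative.
cat > /tmp/prod.env <<'EOF'
NODE_ENV=production
EMAIL_TEST_MODE=false
REDIS_PASSWORD=change-me-to-something-strong
EOF

fail=0
if grep -q "REQUIRED_STRONG" /tmp/prod.env; then
  echo "FAIL: placeholder passwords remain"; fail=1
fi
grep -q '^NODE_ENV=production' /tmp/prod.env || { echo "FAIL: NODE_ENV"; fail=1; }
grep -q '^EMAIL_TEST_MODE=false' /tmp/prod.env || { echo "FAIL: EMAIL_TEST_MODE"; fail=1; }
grep -q '^REDIS_PASSWORD=.' /tmp/prod.env || { echo "FAIL: REDIS_PASSWORD"; fail=1; }
[ "$fail" -eq 0 ] && echo "security checklist passed"
```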

Networking

  • DNS records configured for your domain and subdomains
  • SSL/TLS working (tunnel handles this, or manual certs)
  • All Pangolin resources set to "Not Protected" (if using Pangolin)
  • curl https://api.yourdomain.org/api/health returns JSON

Services

  • Core services running: docker compose ps shows api, admin, v2-postgres, redis, nginx healthy
  • Database migrated: docker compose exec api npx prisma migrate deploy
  • Database seeded: docker compose exec api npx prisma db seed
  • Admin GUI accessible at https://app.yourdomain.org

Backups

  • Backup script tested: ./scripts/backup.sh
  • Backup cron job configured (see Backups below)
  • Restore procedure tested at least once

Monitoring (Optional)

  • Monitoring stack started: docker compose --profile monitoring up -d
  • Grafana accessible and dashboards loading
  • Alert rules configured in Alertmanager

Backups

The included backup script dumps PostgreSQL databases, archives uploads, and optionally uploads to S3.

Running a Backup

./scripts/backup.sh

This creates a timestamped directory under ./backups/ containing:

  • changemaker_v2.sql.gz — Main PostgreSQL dump (compressed)
  • listmonk.sql.gz — Listmonk database dump (if running)
  • uploads.tar.gz — Media uploads archive
  • manifest.json — Backup metadata

Options

# Upload to S3 (requires AWS CLI + S3_BUCKET env var)
./scripts/backup.sh --s3

# Custom retention (delete local backups older than N days)
./scripts/backup.sh --retention 14
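
The effect of --retention can be reproduced with find. This sketch uses throwaway directories under /tmp and assumes GNU touch/find:

```shell
# Prune backup directories older than RETENTION_DAYS. Paths are illustrative.
BACKUP_DIR=/tmp/demo-backups
RETENTION_DAYS=14
mkdir -p "$BACKUP_DIR/old-backup" "$BACKUP_DIR/recent-backup"
touch -d "20 days ago" "$BACKUP_DIR/old-backup"   # simulate a stale backup
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d \
  -mtime +"$RETENTION_DAYS" -exec rm -rf {} +
ls "$BACKUP_DIR"   # -> recent-backup
```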

Automated Backups

Add a cron job for daily backups:

# Edit crontab
crontab -e

# Add daily backup at 3 AM
0 3 * * * /path/to/changemaker.lite/scripts/backup.sh >> /var/log/changemaker-backup.log 2>&1

# With S3 upload
0 3 * * * /path/to/changemaker.lite/scripts/backup.sh --s3 >> /var/log/changemaker-backup.log 2>&1

Restore

# Restore main database
gunzip -c backups/changemaker-v2-backup-TIMESTAMP/changemaker_v2.sql.gz | \
  docker compose exec -T v2-postgres psql -U changemaker changemaker_v2

# Restore Listmonk database
gunzip -c backups/changemaker-v2-backup-TIMESTAMP/listmonk.sql.gz | \
  docker compose exec -T listmonk-db psql -U listmonk listmonk

# Restore uploads
tar xzf backups/changemaker-v2-backup-TIMESTAMP/uploads.tar.gz -C ./
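
The uploads restore is a plain tar round-trip; a self-contained sketch with throwaway paths, useful for rehearsing the procedure before you need it:

```shell
# Demonstrate the backup/restore cycle for the uploads archive.
mkdir -p /tmp/demo-site/uploads
echo "hello" > /tmp/demo-site/uploads/file.txt
tar czf /tmp/demo-site/uploads.tar.gz -C /tmp/demo-site uploads   # backup
rm -rf /tmp/demo-site/uploads                                     # simulate loss
tar xzf /tmp/demo-site/uploads.tar.gz -C /tmp/demo-site           # restore
cat /tmp/demo-site/uploads/file.txt   # -> hello
```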

Monitoring

The monitoring stack runs behind a Docker Compose profile and is not started by default.

Starting the Monitoring Stack

docker compose --profile monitoring up -d

This starts:

| Service | Port | Purpose |
|---|---|---|
| Prometheus | 9090 | Metrics collection and queries |
| Grafana | 3005 | Dashboards and visualization |
| Alertmanager | 9093 | Alert routing and notifications |
| cAdvisor | 8086 | Container resource metrics |
| Node Exporter | 9100 | Host system metrics |
| Redis Exporter | 9121 | Redis metrics |
| Gotify | 8889 | Push notifications |

Pre-configured Dashboards

Grafana includes 3 auto-provisioned dashboards:

  1. API Overview — HTTP request rates, latency, error rates, active sessions
  2. Infrastructure — Container CPU/memory, PostgreSQL connections, Redis memory
  3. Campaign Activity — Email queue size, campaign sends, response submissions

Custom Metrics

The API exposes 12 custom Prometheus metrics with the cm_ prefix:

  • cm_api_uptime_seconds — API uptime
  • cm_email_queue_size — BullMQ pending emails
  • cm_active_canvass_sessions — Active canvassing sessions
  • cm_locations_total — Total locations in database
  • And more — see api/src/utils/metrics.ts

Alert Rules

Pre-configured alerts in configs/prometheus/alerts.yml:

  • API down for more than 5 minutes
  • High error rate (>5% of requests returning 5xx)
  • Database connection failures
  • Redis connection failures
  • Email queue backlog
  • Disk space warnings
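
As an illustration of the first rule's shape (the shipped configs/prometheus/alerts.yml may name and label things differently; the job label here is an assumption):

```yaml
groups:
  - name: changemaker
    rules:
      - alert: APIDown
        # Assumes the API is scraped under job="api"; adjust to your scrape config.
        expr: up{job="api"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "API has been down for more than 5 minutes"
```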

Upgrading

Pulling Updates

# Pull latest code
git pull origin v2

# Rebuild and restart containers
docker compose build api admin
docker compose up -d api admin

# Run any new migrations
docker compose exec api npx prisma migrate deploy

Database Migrations

Always run migrations after pulling updates:

docker compose exec api npx prisma migrate deploy

Back up first

Always run ./scripts/backup.sh before applying migrations in production. Migrations may alter table structures and are not easily reversible.


Troubleshooting Production Issues

Pangolin: 302 Redirects Instead of Content

Symptom: API returns 302 redirects to the Pangolin authentication page.

Fix: In the Pangolin dashboard, edit each resource and set Authentication to Not Protected.

CORS Errors

Symptom: Browser console shows CORS errors when accessing the production domain.

Fix: Add your production app. subdomain to CORS_ORIGINS in .env, then docker compose restart api.

Newt Won't Connect

Check in order:

  1. Credentials: Verify PANGOLIN_NEWT_ID and PANGOLIN_NEWT_SECRET in .env
  2. Endpoint: Confirm PANGOLIN_ENDPOINT matches your Pangolin server URL
  3. Logs: docker compose logs newt --tail 50
  4. Nginx running: Newt depends on nginx — docker compose ps nginx
  5. Network: Ensure outbound HTTPS is not blocked by your firewall

Services Unreachable via Tunnel

  1. Verify nginx is running: docker compose ps nginx
  2. Test locally first: curl http://localhost:4000/api/health
  3. Check nginx logs: docker compose logs nginx --tail 50
  4. Verify DNS: dig app.yourdomain.org should point to your Pangolin server

See Troubleshooting for more common issues.