# Deployment
Sapari v1.0 deploys to self-managed Hetzner servers via Docker Compose, with Cloudflare Pages serving the frontend and landing. See Infrastructure Architecture for the full picture.
## Infrastructure Overview (v1.0)
| Component | Service |
|---|---|
| Backend API + 6 workers + scheduler | Hetzner CCX23 (production) / CX33 (staging), Docker Compose |
| Frontend + Landing | Cloudflare Pages (auto-deploys from main/staging branches) |
| TLS + reverse proxy | Caddy 2 with Cloudflare DNS-01 challenge |
| API proxy to backend | Cloudflare Worker (makes frontend + API same-origin) |
| Database | Neon Postgres (two projects: sapari-staging, sapari-production) |
| Message Broker | RabbitMQ 3 (self-hosted on the same server) |
| Cache + Sessions + SSE | Redis 7 (self-hosted on the same server) |
| Object Storage | Cloudflare R2 (6 buckets: 3 per env) |
| Observability | Logfire (traces + structured logs) |
## Docker Images
One image, built from the prod target of backend/Dockerfile, contains everything: API, workers, scheduler, migrations, seed scripts.
Built by GitHub Actions and pushed to GHCR on every push to staging or main:
```
ghcr.io/<org>/sapari-backend:<sha>        # Immutable, for rollback
ghcr.io/<org>/sapari-backend:staging      # Floating, latest staging build
ghcr.io/<org>/sapari-backend:production   # Floating, latest production build
```
Each image carries a sapari.alembic_head label so rollback.sh can detect migration mismatches.
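A minimal sketch of how that label can be read and compared. The label name `sapari.alembic_head` comes from the docs above; the `docker image inspect` invocation and the helper functions are illustrative, not the contents of the real `rollback.sh`:

```shell
#!/usr/bin/env sh
# Read the sapari.alembic_head label baked into an image.
image_head() {
  docker image inspect "$1" \
    --format '{{ index .Config.Labels "sapari.alembic_head" }}'
}

# Pure string comparison, so the mismatch logic is testable without Docker.
# An empty head is treated as a mismatch (label missing from the image).
heads_match() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Example (hypothetical image tag):
# if ! heads_match "$(image_head ghcr.io/acme/sapari-backend:abc123)" "$current_head"; then
#   echo "migration mismatch: rolling back would require a downgrade" >&2
# fi
```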
## Deploy via Scripts
All deployment is driven by scripts in scripts/deployment/. CD calls them, and operators SSHed into the server run the same scripts by hand.
| What | Command |
|---|---|
| Standard deploy (pull, migrate, restart, health check) | ./scripts/deployment/deploy.sh |
| Roll back to a specific SHA | ./scripts/deployment/rollback.sh <sha> |
| Restart a service (config changed, stuck worker) | ./scripts/deployment/restart.sh [service] |
| Run a one-off Python task | ./scripts/deployment/run-task.sh <script> |
| Check health | ./scripts/deployment/health.sh |
See scripts/README.md for the full operator cheatsheet, or scripts.md for in-depth documentation of each script.
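The standard deploy flow from the table (pull, migrate, restart, health check) can be sketched as follows. The compose and alembic commands here are assumptions about what such a script does, not the contents of the real `deploy.sh`; `DRY_RUN=1` prints each step instead of executing it:

```shell
#!/usr/bin/env sh
# Execute a command, or just print it when DRY_RUN=1.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Sketch of the pull -> migrate -> restart -> health-check sequence.
deploy() {
  run docker compose -f docker-compose.prod.yml pull
  run docker compose -f docker-compose.prod.yml run --rm api alembic upgrade head
  run docker compose -f docker-compose.prod.yml up -d
  run ./scripts/deployment/health.sh
}
```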
## First-time Server Setup
Order a fresh Hetzner box, then:
```
# As root:
sudo ./scripts/deployment/setup-server.sh --my-ip <YOUR_IP>

# As deploy user (after clone + .env):
./scripts/deployment/first-deploy.sh
```
## Environment Variables
Each server has one /home/deploy/sapari/.env file (not committed). Copy from backend/.env.production.example and fill in. The production security validator blocks startup if critical values are misconfigured (weak SECRET_KEY, CREATE_TABLES_ON_STARTUP=true, etc.).
Key groups:
- Required: SECRET_KEY, DATABASE_URL, STRIPE_*, POSTMARK_SERVER_TOKEN, STORAGE_*, OPENAI_API_KEY, DEEPSEEK_API_KEY, OAUTH_*, ADMIN_*, TASKIQ_RABBITMQ_USER/PASSWORD, CACHE_REDIS_PASSWORD, CLOUDFLARE_API_TOKEN
- Must be overridden from their defaults: ENVIRONMENT=production|staging, CREATE_TABLES_ON_STARTUP=false, STRIPE_TEST_MODE=false (prod only), FRONTEND_URL, CORS_ORIGINS, CACHE_BACKEND=redis, TASKIQ_BROKER_TYPE=rabbitmq
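Putting the critical overrides together, a minimal `.env` sketch might look like this. Every value is a placeholder; the real file lives at /home/deploy/sapari/.env and is never committed:

```
# /home/deploy/sapari/.env (illustrative values only)
ENVIRONMENT=production
CREATE_TABLES_ON_STARTUP=false
STRIPE_TEST_MODE=false
CACHE_BACKEND=redis
TASKIQ_BROKER_TYPE=rabbitmq
SECRET_KEY=<generate-a-long-random-value>
DATABASE_URL=postgresql://<user>:<password>@<neon-host>/<db>
FRONTEND_URL=https://app.example.com
CORS_ORIGINS=https://app.example.com
```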
## Scaling
v1.0 scales vertically on a single host: resize the Hetzner box when resources get tight. Per-worker concurrency is controlled by TASKIQ_WORKER_CONCURRENCY in the env file (default: 1).
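Raising worker concurrency is then a one-line env change. As a sketch, a compose override could pin it per service; the service name `worker` here is an assumption about docker-compose.prod.yml, not taken from it:

```yaml
# docker-compose.override.yml (sketch; service name is hypothetical)
services:
  worker:
    environment:
      TASKIQ_WORKER_CONCURRENCY: "2"
```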
For horizontal scaling, see the architecture roadmap (v1.2 splits API + workers; v2.0 adds k3s + GPU).
## Health Checks
| Endpoint | Purpose |
|---|---|
| GET /health | Liveness check (returns immediately) |
| GET /health/ready | Readiness check (DB + Redis + RabbitMQ + storage) |
The admin panel's System Health page shows component status, server resources, and queue depths, auto-refreshing every 10s.
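A readiness endpoint like this is convenient to poll after a deploy. A minimal sketch, assuming only the GET /health/ready endpoint above; the URL is a placeholder, and the retry count and interval are arbitrary:

```shell
#!/usr/bin/env sh
# Poll a readiness URL until it answers 2xx, or give up after N tries.
wait_ready() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f makes curl fail on HTTP errors; -sS keeps output quiet but shows errors.
    if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Usage (placeholder host):
# wait_ready "https://api.example.com/health/ready" 30 || echo "not ready" >&2
```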
## Key Files
| Component | Location |
|---|---|
| Dockerfile (single image, prod target) | backend/Dockerfile |
| Production compose | docker-compose.prod.yml |
| Caddy image + config | caddy/Dockerfile, caddy/Caddyfile |
| Deployment scripts | scripts/deployment/ |
| Operator cheatsheet | scripts/README.md |
| Production env template | backend/.env.production.example |
| Settings | backend/src/infrastructure/config/settings.py |