# Environment Variables

All configuration is done through environment variables. Copy `backend/.env.example` to `backend/.env` and adjust as needed.
## Required for Development

These must be set for the app to start:

```bash
ENVIRONMENT=development

# Database
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=sapari
POSTGRES_SERVER=db  # Use 'localhost' when running outside Docker
POSTGRES_PORT=5432

# AI (at least one provider required)
OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# Auth
SECRET_KEY=insecure-secret-key-change-this-in-production
```
## AI Settings

```bash
OPENAI_API_KEY=sk-...         # Whisper transcription + GPT-5 (judge/fallback)
DEEPSEEK_API_KEY=sk-...       # DeepSeek Reasoner (primary for false starts)
ANTHROPIC_API_KEY=sk-ant-...  # Alternative LLM provider
AI_DEFAULT_MODEL=gpt-5-mini   # Default model for detection
AI_TEMPERATURE=0.7
AI_MAX_TOKENS=4000
```
## Database

```bash
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=sapari
POSTGRES_SERVER=db  # 'db' for Docker, 'localhost' for local
POSTGRES_PORT=5432
POSTGRES_SYNC_PREFIX=postgresql://
POSTGRES_ASYNC_PREFIX=postgresql+asyncpg://
CREATE_TABLES_ON_STARTUP=true  # Must be false in production (use Alembic)
```
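The `*_PREFIX` variables suggest the driver scheme is prepended to the same credential/host parts to form the sync and async DSNs. A minimal sketch of that assembly (how `settings.py` actually composes them is an assumption):

```python
import os

def postgres_dsn(prefix: str, env=None) -> str:
    """Assemble a Postgres DSN from the POSTGRES_* parts (illustrative sketch)."""
    env = os.environ if env is None else env
    user = env.get("POSTGRES_USER", "postgres")
    password = env.get("POSTGRES_PASSWORD", "postgres")
    server = env.get("POSTGRES_SERVER", "db")
    port = env.get("POSTGRES_PORT", "5432")
    db = env.get("POSTGRES_DB", "sapari")
    return f"{prefix}{user}:{password}@{server}:{port}/{db}"

# With the defaults above (empty mapping ignores the real environment):
print(postgres_dsn("postgresql+asyncpg://", env={}))
# → postgresql+asyncpg://postgres:postgres@db:5432/sapari
```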
## Cache (Redis)

```bash
CACHE_ENABLED=true
CACHE_BACKEND=redis  # Options: redis, memcached
DEFAULT_CACHE_EXPIRATION=3600
CACHE_REDIS_HOST=redis  # 'redis' for Docker, 'localhost' for local
CACHE_REDIS_PORT=6379
CACHE_REDIS_DB=0
CACHE_REDIS_PASSWORD=
```
## Rate Limiting

```bash
RATE_LIMITER_ENABLED=true
RATE_LIMITER_BACKEND=redis
DEFAULT_RATE_LIMIT_LIMIT=100
DEFAULT_RATE_LIMIT_PERIOD=60
RATE_LIMITER_REDIS_HOST=redis
RATE_LIMITER_REDIS_PORT=6379
RATE_LIMITER_REDIS_DB=1  # Separate DB from cache
```
## TaskIQ Workers

```bash
TASKIQ_ENABLED=true
TASKIQ_BROKER_TYPE=rabbitmq  # Options: redis, rabbitmq

# RabbitMQ broker (used when TASKIQ_BROKER_TYPE=rabbitmq)
TASKIQ_RABBITMQ_HOST=rabbitmq  # 'rabbitmq' for Docker, 'localhost' for local
TASKIQ_RABBITMQ_PORT=5672
TASKIQ_RABBITMQ_USER=guest
TASKIQ_RABBITMQ_PASSWORD=guest
TASKIQ_RABBITMQ_VHOST=/

# Redis result backend (used even when the broker is RabbitMQ)
TASKIQ_REDIS_HOST=redis
TASKIQ_REDIS_PORT=6379
TASKIQ_REDIS_DB=3

TASKIQ_WORKER_CONCURRENCY=2
TASKIQ_MAX_TASKS_PER_WORKER=1000

# FFmpeg resource control
FFMPEG_THREADS=2  # Max threads per FFmpeg process (limits memory on 4K content)
```
RabbitMQ handles task routing with native priority queues. Redis is still used as the result backend for task outcomes, plus cache, sessions, and SSE pub/sub. Both must be running.
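`FFMPEG_THREADS` caps FFmpeg's thread pool, which is what bounds its memory use on 4K content. How the workers apply it is not shown here, but the usual mechanism is FFmpeg's `-threads` flag, sketched below (file names are made up):

```python
import os

def ffmpeg_cmd(src: str, dst: str) -> list[str]:
    """Build an FFmpeg invocation capped by FFMPEG_THREADS (illustrative sketch)."""
    threads = os.environ.get("FFMPEG_THREADS", "2")
    return [
        "ffmpeg",
        "-threads", threads,  # decoder threads
        "-i", src,
        "-threads", threads,  # encoder threads
        dst,
    ]

print(ffmpeg_cmd("clip-src.mp4", "clip-proxy.mp4"))
```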
## Storage (S3-compatible)
We use MinIO locally and Cloudflare R2 in production.
| Variable | Description |
|---|---|
| `STORAGE_ENDPOINT` | S3 endpoint URL |
| `STORAGE_PUBLIC_ENDPOINT` | Public URL for presigned downloads (if different from internal) |
| `STORAGE_ACCESS_KEY_ID` | Access key (`minioadmin` for local MinIO) |
| `STORAGE_SECRET_ACCESS_KEY` | Secret key (`minioadmin` for local MinIO) |
| `STORAGE_BUCKET_RAW` | Bucket for clips, audio, proxies (`sapari-raw`) |
| `STORAGE_BUCKET_EXPORTS` | Bucket for rendered videos (`sapari-exports`) |
| `STORAGE_BUCKET_ASSETS` | Bucket for user-uploaded assets (`sapari-assets`) |
| `STORAGE_REGION` | Region (default: `auto`) |
| `STORAGE_MAX_UPLOAD_SIZE_MB` | Max single-file upload size for clips/assets via presigned URLs. Default `2048` (2 GB); production overrides to `5120` (5 GB) for high-bitrate 4K source footage. Validates the declared size in `POST /clips/presign` and `POST /assets/presign`. |
| `STORAGE_PRESIGNED_URL_EXPIRY` | Seconds before a presigned upload URL expires (default `3600`) |
`STORAGE_PUBLIC_ENDPOINT` is needed when the internal endpoint (e.g., `http://minio:9000` in Docker) differs from the endpoint browsers can reach (`http://localhost:9000`).
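One subtlety worth noting: SigV4 presigned URLs sign the host header, so an internal URL cannot simply have its hostname rewritten after signing. The public endpoint has to be chosen at signing time, roughly like this (function name and fallback logic are illustrative, not the repo's actual code):

```python
import os

def signing_endpoint(for_browser: bool) -> str:
    """Pick which S3 endpoint to presign against (illustrative sketch).

    Download URLs handed to browsers must be signed against the public
    endpoint, because SigV4 includes the host in the signature.
    """
    internal = os.environ.get("STORAGE_ENDPOINT", "http://minio:9000")
    public = os.environ.get("STORAGE_PUBLIC_ENDPOINT") or internal
    return public if for_browser else internal

# Internal service-to-service calls keep the Docker-network endpoint:
print(signing_endpoint(for_browser=False))
```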
## Media Proxy (Worker-fronted clip playback)

Clip playback URLs are minted as short-lived HS256 JWTs and verified by a Cloudflare Worker at `/media/v1/*` before streaming bytes from R2. The backend signs; the Worker verifies. Upload URLs are still presigned direct-to-R2 (only downloads route through the Worker). See `R2_MEDIA_PROXY_PLAN.md` for the full architecture and `docs/operations/media-token-rotation.md` for the rotation runbook.

```bash
MEDIA_TOKEN_SECRET=<32+ byte random>  # Distinct from SECRET_KEY — independent rotation
MEDIA_TOKEN_KID=v1                    # Key id in JWT header; bumped during rotation
MEDIA_TOKEN_TTL_SECONDS=300           # 5 minutes. Frontend retry handler refetches on 401.
MEDIA_PROXY_BASE_URL=https://staging.sapari.io  # Origin the Worker is bound to
```
Generate the secret with a cryptographically secure random generator (at least 32 bytes).
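For example (either command works; both are generic, not repo-specific):

```shell
# 32 random bytes as 64 hex characters
openssl rand -hex 32

# Or with Python's secrets module (URL-safe encoding of 32 bytes)
python3 -c "import secrets; print(secrets.token_urlsafe(32))"
```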
The Worker must have the same secret set via `wrangler secret put MEDIA_TOKEN_SECRET_V1 --env <staging|production>`. On cold start, both sides log `media_token: active=<kid> registry=[<kid>:<fp>]` at INFO; the fingerprints must match or playback breaks.
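The sign/verify handshake needs nothing beyond HMAC-SHA256. A stdlib sketch of both sides (claim names like `path` are made up; the backend's actual claim schema isn't documented here):

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_media_token(secret: bytes, kid: str, ttl: int, path: str) -> str:
    """Mint a short-lived HS256 JWT with a kid header (illustrative sketch)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT", "kid": kid}).encode())
    payload = b64url(json.dumps({"path": path, "exp": int(time.time()) + ttl}).encode())
    sig = b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str, secret: bytes) -> bool:
    """What the Worker does, in miniature: recompute signature, then check exp."""
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    return claims["exp"] > time.time()

token = mint_media_token(b"shared-secret", kid="v1", ttl=300, path="/media/v1/clip.mp4")
print(verify(token, b"shared-secret"))  # → True
print(verify(token, b"wrong-secret"))   # → False
```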
## Authentication & Security

```bash
SECRET_KEY=insecure-secret-key-change-this-in-production
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7

# Session
SESSION_TIMEOUT_MINUTES=30
SESSION_BACKEND=redis
SESSION_REDIS_DB=2  # Separate DB from cache (DB 0) and rate limiting (DB 1)
MAX_SESSIONS_PER_USER=5

# CSRF
CSRF_ENABLED=true

# Login rate limiting (exponential backoff)
# Lockout duration = LOGIN_LOCKOUT_BASE_SECONDS * 2^round, capped at LOGIN_LOCKOUT_MAX_SECONDS.
# With defaults: round 0 → 1m, round 1 → 2m, then 4m, 8m, 16m, 32m, capped at 1h.
LOGIN_MAX_ATTEMPTS=5
LOGIN_ATTEMPT_WINDOW_SECONDS=60
LOGIN_LOCKOUT_BASE_SECONDS=60
LOGIN_LOCKOUT_MAX_SECONDS=3600
LOGIN_ROUND_RETENTION_SECONDS=3600

# Security headers (X-Frame-Options, X-Content-Type-Options, etc.)
SECURITY_HEADERS_ENABLED=true
```
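The lockout comment above translates directly into code (defaults shown; a one-line sketch, not the repo's implementation):

```python
def lockout_seconds(round_number: int, base: int = 60, cap: int = 3600) -> int:
    """Lockout duration: base * 2^round, capped (mirrors the .env comment)."""
    return min(base * 2 ** round_number, cap)

print([lockout_seconds(r) for r in range(7)])
# → [60, 120, 240, 480, 960, 1920, 3600]
```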
## Application

### OAuth

```bash
# Base URL for OAuth callback redirects. Must point to where the browser
# accesses the API — in local dev that's the Vite proxy (localhost:3000),
# in production it's the app domain (e.g., https://app.sapari.io).
# This ensures session cookies are set on the correct origin.
OAUTH_REDIRECT_BASE_URL=http://localhost:3000

# Google (required for the Google Sign-In button — hidden when not configured)
# Get credentials from Google Cloud Console > APIs & Services > Credentials
# Authorized redirect URI: {OAUTH_REDIRECT_BASE_URL}/api/v1/auth/oauth/callback/google
OAUTH_GOOGLE_CLIENT_ID=
OAUTH_GOOGLE_CLIENT_SECRET=

# GitHub (optional — backend complete, not exposed in frontend yet)
OAUTH_GITHUB_CLIENT_ID=
OAUTH_GITHUB_CLIENT_SECRET=
```
### Email (Postmark)

```bash
POSTMARK_SERVER_TOKEN=your-postmark-server-token-here
EMAIL_SENDER_ADDRESS=hello@sapari.io  # Must be verified in Postmark
EMAIL_SENDER_NAME=Vitoria from Sapari
EMAIL_ENABLED=true

# For local dev without Postmark, save emails to disk instead:
# EMAIL_TEST_MODE=true
# EMAIL_TEST_OUTPUT_DIR=./email_previews
```
### CORS & Web Server

```bash
CORS_ENABLED=true
CORS_ORIGINS=http://localhost:3000  # Default for dev; production with a same-domain proxy needs no override
GZIP_ENABLED=true
GZIP_MINIMUM_SIZE=1000
```
## Observability (Logfire)

```bash
LOGFIRE_ENABLED=true
LOGFIRE_TOKEN=your-logfire-token-here

# Web process uses `sapari-api`; workers override at startup with per-broker names.
LOGFIRE_SERVICE_NAME=sapari-api
LOGFIRE_ENVIRONMENT=development

# Instrumentation toggles (web process)
LOGFIRE_INSTRUMENT_FASTAPI=true
LOGFIRE_INSTRUMENT_SQLALCHEMY=true
LOGFIRE_INSTRUMENT_REDIS=true
LOGFIRE_INSTRUMENT_PYDANTIC_AI=false
LOGFIRE_INSTRUMENT_SYSTEM_METRICS=true

# Worker-process instrumentation kill switches.
# TaskIQ worker startup wires pydantic-ai + sqlalchemy + redis. pydantic-ai is
# always-on (agentic pipeline visibility). sqlalchemy + redis in workers are
# high-volume surfaces — flip either off via `.env` + docker compose restart
# if post-deploy Logfire volume runs hot. The web process is unaffected.
LOGFIRE_INSTRUMENT_SQLALCHEMY_WORKERS=true
LOGFIRE_INSTRUMENT_REDIS_WORKERS=true
```
## Stripe (Billing)

```bash
STRIPE_ENABLED=true
STRIPE_SECRET_KEY=sk_test_...       # From dashboard.stripe.com/test/apikeys
STRIPE_PUBLISHABLE_KEY=pk_test_...  # From dashboard.stripe.com/test/apikeys
STRIPE_WEBHOOK_SECRET=whsec_...     # From `stripe listen --forward-to localhost:8000/api/v1/webhooks/stripe`
STRIPE_TEST_MODE=true               # Default is false; set true for development with test keys
STRIPE_CURRENCY=usd

# Alert email for chargebacks (falls back to ADMIN_EMAIL → CONTACT_EMAIL)
CHARGEBACK_ALERT_EMAIL=
```
Local Stripe setup:

```bash
# 1. Install the Stripe CLI
brew install stripe/stripe-cli/stripe

# 2. Log in
stripe login

# 3. Forward webhooks (keep running in a separate terminal)
stripe listen --forward-to localhost:8000/api/v1/webhooks/stripe

# 4. Stripe products are seeded automatically on `docker compose up`
#    (if STRIPE_SECRET_KEY is configured in .env)
```
## Application Metadata

## All Settings

The complete settings class is in `backend/src/infrastructure/config/settings.py`. Every setting has a default that works for local development.
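The pattern this implies — every field has a dev-safe default, each overridable by an environment variable — can be sketched with the stdlib alone (the repo more likely uses a settings library such as pydantic-settings; field names below are examples from this page, not the real class):

```python
import os
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Settings:
    """Dev-safe defaults, each overridable via the environment (illustrative)."""
    environment: str = field(
        default_factory=lambda: os.environ.get("ENVIRONMENT", "development"))
    postgres_server: str = field(
        default_factory=lambda: os.environ.get("POSTGRES_SERVER", "db"))
    cache_enabled: bool = field(
        default_factory=lambda: os.environ.get("CACHE_ENABLED", "true").lower() == "true")

settings = Settings()
print(settings.environment)  # "development" unless ENVIRONMENT is set
```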