Choose your deployment target based on your infrastructure preferences and scalability needs.
| Platform | Best For | Setup Time | Scaling | Cost |
|---|---|---|---|---|
| Docker | Development, small teams | 30 min | Manual | Low |
| Kubernetes | Enterprise, high traffic | 2-4 hours | Automatic | Medium |
| AWS (ECS/Fargate) | AWS-native, managed | 1-2 hours | Auto-scaling | Medium-High |
| Google Cloud Run | Serverless, event-driven | 30 min | Automatic | Pay-per-use |
| Azure App Service | Microsoft ecosystem | 1-2 hours | Auto-scaling | Medium |
FraiseQL is database-first — your SQL tables, views, and functions are the source of truth. Confiture is the companion tool that manages the database lifecycle across every environment.
| Scenario | Command |
|---|---|
| Fresh dev or CI database | confiture build --env local |
| Apply pending schema changes to production | fraiseql migrate up --env production |
| Roll back last migration | fraiseql migrate down --env production |
| Check what’s pending | fraiseql migrate status --env production |
| Seed local DB with anonymised production data | confiture sync --from prod --to local --anonymize |
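The commands above can be chained in CI to verify the database lifecycle on every push. A sketch for GitHub Actions (job name, runner, and `--env local` choice are illustrative; assumes both CLIs are installed on the runner):

```yaml
# Hypothetical CI job: build a fresh database, then verify migration status
jobs:
  db-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build fresh CI database
        run: confiture build --env local
      - name: Verify migration status
        run: fraiseql migrate status --env local
```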
Every deployment platform below runs `fraiseql migrate up` as a pre-start step (an initContainer in Kubernetes, a one-off task in ECS, a release command in Docker Compose) before FraiseQL starts serving traffic.
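The Kubernetes variant of this pre-start step can be sketched as follows (the image name and secret name are placeholders):

```yaml
# initContainer runs migrations to completion before the main container starts
spec:
  initContainers:
    - name: migrate
      image: your-registry/fraiseql:latest    # placeholder image
      command: ["fraiseql", "migrate", "up", "--env", "production"]
      envFrom:
        - secretRef:
            name: fraiseql-secrets            # provides DATABASE_URL
  containers:
    - name: fraiseql
      image: your-registry/fraiseql:latest    # placeholder image
      ports:
        - containerPort: 8080
```

If the migration fails, the initContainer exits non-zero and the pod never starts serving traffic, which is exactly the ordering guarantee described above.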
Before deploying to production, apply pending schema changes (`fraiseql migrate up --env production`) and set the environment variables below:

```bash
# Database connection
DATABASE_URL=postgresql://user:pass@host:5432/dbname
# or for other databases:
# DATABASE_URL=sqlite:///./data/fraiseql.db
# DATABASE_URL=mssql://user:pass@host:1433/dbname

# Authentication
JWT_SECRET=your-secret-key-min-32-chars
CORS_ORIGINS=https://example.com,https://api.example.com

# Deployment
ENVIRONMENT=production
FRAISEQL_HOST=0.0.0.0
FRAISEQL_PORT=8080

# Optional: Audit logging
AUDIT_LOG_LEVEL=info

# Optional: Rate limiting (auth endpoints)
RATE_LIMIT_AUTH_START=100
RATE_LIMIT_AUTH_CALLBACK=50
RATE_LIMIT_FAILED_LOGIN=5

# Optional: NATS (if using events/subscriptions)
NATS_URL=nats://nats-server:4222
NATS_SUBJECT_PREFIX=fraiseql.events
```

Store these in a `.env` file (kept in `.gitignore`) or a secrets manager. For example, with AWS Secrets Manager, inject secrets as environment variables in your ECS task definition:

```json
{
  "secrets": [
    {
      "name": "DATABASE_URL",
      "valueFrom": "arn:aws:secretsmanager:us-east-1:123:secret:fraiseql/prod:DATABASE_URL::"
    },
    {
      "name": "STATE_ENCRYPTION_KEY",
      "valueFrom": "arn:aws:secretsmanager:us-east-1:123:secret:fraiseql/prod:STATE_ENCRYPTION_KEY::"
    }
  ]
}
```

FraiseQL reads secrets from environment variables at startup. There is no Python runtime to configure.
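A minimal preflight sketch that fails fast when required variables are missing, using the variable names from the list above (the 32-character check mirrors the `JWT_SECRET` guidance; the function name is our own):

```shell
# preflight: verify required environment variables before starting FraiseQL
preflight() {
  fail=0
  if [ -z "$DATABASE_URL" ]; then
    echo "ERROR: DATABASE_URL is not set" >&2
    fail=1
  fi
  if [ "${#JWT_SECRET}" -lt 32 ]; then
    echo "ERROR: JWT_SECRET must be at least 32 characters" >&2
    fail=1
  fi
  return "$fail"
}
```

Run this as the container's entrypoint prelude (`preflight && exec fraiseql run`) so misconfigured deployments fail immediately instead of at first request.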
FraiseQL listens on the host and port set by `FRAISEQL_HOST` and `FRAISEQL_PORT` (default `0.0.0.0:8080`).
FraiseQL provides a single health check endpoint:
```bash
# Health check (service running and database connected)
GET /health

# Response: 200 OK with {"status":"healthy"}
# Response: 503 Service Unavailable with {"status":"unhealthy"} if database unreachable
```

Configure your orchestrator to use `/health` for both liveness and readiness checks.
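In Kubernetes, for example, both probes can point at the same endpoint (the delays and thresholds here are illustrative, not prescribed values):

```yaml
# Both probes hit GET /health on the FraiseQL container
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 5
  failureThreshold: 3
```

A failing readiness probe removes the pod from the load balancer (e.g. when the database is unreachable and `/health` returns 503), while a failing liveness probe restarts it.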
FraiseQL responds to SIGTERM by shutting down gracefully: it stops accepting new requests, drains in-flight requests, and closes database connections before exiting.
For production, configure connection pooling in fraiseql.toml:
```toml
[database]
url = "postgresql://user:pass@host:5432/dbname"
pool_min = 5    # Minimum connections per instance
pool_max = 20   # Maximum connections per instance
```

Size pools so that `pool_max` multiplied by your instance count stays below the database server's connection limit. For high-traffic deployments (10k+ RPS), use an external connection pooler in front of your database.
| Database | Pool min | Pool max | Server connection limit |
|---|---|---|---|
| PostgreSQL | 5 | 20 | 100-200 |
| MySQL | 5 | 20 | 100-200 |
| SQLite | 1 | 5 | 10 |
| SQL Server | 5 | 20 | 100-200 |
For high-traffic applications (10k+ RPS), use an external connection pooler:
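For PostgreSQL, PgBouncer is a common choice. A minimal sketch, assuming FraiseQL's `DATABASE_URL` is repointed at the pooler rather than at PostgreSQL directly (host names and pool sizes are placeholders):

```ini
; pgbouncer.ini - transaction pooling in front of PostgreSQL
[databases]
dbname = host=postgres-primary port=5432 dbname=dbname

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 50
```

Transaction pooling multiplexes many client connections onto a small number of server connections; verify your workload does not depend on session-level state (e.g. session-scoped prepared statements) before enabling it.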
FraiseQL outputs JSON logs (enabled by default in production):
```json
{
  "timestamp": "2024-01-15T10:30:45Z",
  "level": "INFO",
  "message": "Request processed",
  "request_id": "abc123def456",
  "method": "POST",
  "path": "/graphql",
  "duration_ms": 145,
  "status": 200,
  "user_id": "user_123"
}
```

Override the log level via `AUDIT_LOG_LEVEL=debug` (from `fraiseql.toml.example`), or configure it in `fraiseql.toml`:

```toml
[security.enterprise]
log_level = "info"   # Options: "debug", "info", "warn"
```

- `debug`: verbose; all database queries and middleware decisions
- `info`: default; requests, errors, important events
- `warn`: warnings only; slow queries, connection issues

Shipping logs to a cloud provider:

- AWS CloudWatch: use the `awslogs` driver in Docker, or the CloudWatch agent in Kubernetes
- Google Cloud Logging: automatic if running on Cloud Run; manual setup for VMs
- Azure Monitor: automatic if running on App Service; Container Insights for Kubernetes

Key metrics to monitor:

- GraphQL queries (requests/sec)
- Database
- System
- NATS (if enabled)
FraiseQL exports Prometheus metrics on /metrics:
```yaml
# Example scrape configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'fraiseql'
    static_configs:
      - targets: ['localhost:8080']
```

Example alert rules:

```yaml
# Alert on high error rate
- alert: HighErrorRate
  expr: rate(fraiseql_errors_total[5m]) > 0.05
  for: 5m
  annotations:
    summary: "FraiseQL error rate above 5%"

# Alert on high latency
- alert: HighLatency
  expr: fraiseql_request_duration_seconds{quantile="0.95"} > 1
  for: 5m
  annotations:
    summary: "p95 latency above 1 second"

# Alert on database connection pool exhaustion
- alert: ConnectionPoolExhausted
  expr: fraiseql_db_connections_used / fraiseql_db_connections_max > 0.9
  for: 2m
  annotations:
    summary: "Database connection pool >90% full"
```

All external traffic must use HTTPS:
```bash
# Recommended: Reverse proxy with SSL termination
# Use nginx, Caddy, or a cloud load balancer (AWS ALB, GCP GLB, Azure Application Gateway)
# FraiseQL runs on plain HTTP on port 8080 inside the network
```

Restrict CORS to trusted origins:

```bash
# Allow specific origins only
CORS_ORIGINS=https://example.com,https://app.example.com

# Or: Allow any subdomain of example.com
# (set via code, not env var)
```

Protect against abuse via `fraiseql.toml` or environment variable overrides:
```toml
[security.rate_limiting]
enabled = true

# Per-IP limits on auth endpoints
auth_start_max_requests = 100
auth_start_window_secs = 60
auth_callback_max_requests = 50
auth_callback_window_secs = 60

# Failed login attempt limiting
failed_login_max_requests = 5
failed_login_window_secs = 3600
```

Environment variable overrides (from `fraiseql.toml.example`):

```bash
RATE_LIMIT_AUTH_START=100       # Override per-IP auth start limit
RATE_LIMIT_AUTH_CALLBACK=50     # Override per-IP auth callback limit
RATE_LIMIT_FAILED_LOGIN=5       # Override failed login limit
```

Implement a rotation schedule for secrets such as `JWT_SECRET` and `STATE_ENCRYPTION_KEY`.
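When rotating, new secrets can be generated with `openssl` (48 random bytes base64-encode to 64 characters, comfortably above the 32-character minimum for `JWT_SECRET`):

```shell
# Generate a replacement secret; store it in your secrets manager, not in git
NEW_SECRET=$(openssl rand -base64 48 | tr -d '\n')
echo "${#NEW_SECRET}"
```

Deploy the new secret alongside the old one where your setup allows dual validation, then retire the old key once all tokens signed with it have expired.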
FraiseQL is stateless, so it scales horizontally behind a load balancer: with 3 instances, traffic flows from the load balancer to Pod 1, Pod 2, and Pod 3, all connecting to a shared PostgreSQL instance. Read replicas can be added optionally.
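With Docker Compose, the same shape can be sketched via replicas (image and service names are placeholders):

```yaml
# docker-compose.yml fragment: three stateless FraiseQL replicas, one shared DB
services:
  fraiseql:
    image: your-registry/fraiseql:latest   # placeholder image
    deploy:
      replicas: 3
    environment:
      DATABASE_URL: postgresql://user:pass@postgres:5432/dbname
  postgres:
    image: postgres:16
```

Because instances share no local state, any replica can serve any request; only the database connection budget (replicas × `pool_max`) needs to be watched.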
Recommended when:
Limits:
Most platforms support auto-scaling based on standard metrics such as CPU utilization, memory usage, and request rate.
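In Kubernetes, for example, a HorizontalPodAutoscaler can target CPU utilization (names and thresholds here are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fraiseql
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fraiseql
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Cap `maxReplicas` so that `maxReplicas × pool_max` stays within your database server's connection limit.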
PostgreSQL:

```bash
# Logical backup; combine with WAL archiving for point-in-time recovery
pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME > backup.sql
# Or: Use managed PostgreSQL (AWS RDS, Google Cloud SQL) for automated backups
```

MySQL:

```bash
# Logical backup; enable binary logs for point-in-time recovery
mysqldump -h $DB_HOST -u $DB_USER -p"$DB_PASS" $DB_NAME > backup.sql
# Or: Use managed MySQL (AWS RDS, Google Cloud SQL)
```

SQLite:

```bash
# File-based backup
cp /data/fraiseql.db /backups/fraiseql-$(date +%Y%m%d).db
# Keep 7-30 days of rolling backups
```

| Strategy | RTO | RPO | Cost |
|---|---|---|---|
| Manual backups | 4 hours | 1 day | Low |
| Automated daily | 2 hours | 1 day | Low |
| Continuous replication | 5 min | 1 min | High |
| Read replicas | 5 min | 0 min | High |
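The 7-30 day rolling retention mentioned above can be enforced with a small cleanup step run from cron (directory, file pattern, and function name are placeholders):

```shell
# prune_backups: delete backup files older than a retention window (in days)
prune_backups() {
  backup_dir="$1"
  retention_days="$2"
  find "$backup_dir" -name 'fraiseql-*.db' -type f -mtime "+$retention_days" -delete
}
```

For example, `prune_backups /backups 30` after each nightly dump keeps roughly a month of history.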
Symptom: “Too many connections” errors
Solution: Increase `pool_max` in the `[database]` section of `fraiseql.toml`; verify clients are closing connections properly
Symptom: p99 latency > 5 seconds
Solution: Check query complexity, add database indexes, analyze slow query logs

Symptom: Memory usage grows over time
Solution: Check for unclosed connections; verify no circular references in the schema

Symptom: Random “connection refused” errors
Solution: Configure connection retry logic, check database availability, verify network connectivity
See Troubleshooting for more common issues.
`fraiseql run` serves GraphQL, REST, and gRPC from a single binary on a single port; there are no separate processes or containers per transport. Note: gRPC requires building FraiseQL with the `grpc-transport` Cargo feature, which is not included in the official Docker image. See gRPC Transport.
gRPC requires HTTP/2. Ensure your load balancer or reverse proxy supports HTTP/2 pass-through:
- nginx: `listen 443 ssl http2;`
- Kubernetes ingress-nginx: add the annotation `nginx.ingress.kubernetes.io/backend-protocol: "GRPC"`, or use an Envoy/Linkerd service mesh

REST and GraphQL work with any HTTP/1.1 or HTTP/2 configuration.
Docker Deployment
Get started quickly with Docker and Docker Compose for local and small-scale production deployments. Docker Guide
AWS Deployment
Deploy on AWS using ECS/Fargate, RDS, and CloudFormation for a fully managed AWS-native setup. AWS Guide
GCP Deployment
Use Google Cloud Run or GKE for serverless and container-based deployments on Google Cloud. GCP Guide
Scaling & Performance
Learn horizontal scaling, auto-scaling, caching strategies, and database optimization. Scaling Guide