Choose your deployment target based on your infrastructure preferences and scalability needs.
| Platform | Best For | Setup Time | Scaling | Cost |
|---|---|---|---|---|
| Docker | Development, small teams | 30 min | Manual | Low |
| Kubernetes | Enterprise, high traffic | 2-4 hours | Automatic | Medium |
| AWS (ECS/Fargate) | AWS-native, managed | 1-2 hours | Auto-scaling | Medium-High |
| Google Cloud Run | Serverless, event-driven | 30 min | Automatic | Pay-per-use |
| Azure App Service | Microsoft ecosystem | 1-2 hours | Auto-scaling | Medium |
Before deploying to production, set the required environment variables:
```bash
# Database connection
DATABASE_URL=postgresql://user:pass@host:5432/dbname
# or for other databases:
# DATABASE_URL=sqlite:///./data/fraiseql.db
# DATABASE_URL=mssql://user:pass@host:1433/dbname

# Authentication
JWT_SECRET=your-secret-key-min-32-chars
CORS_ORIGINS=https://example.com,https://api.example.com

# Deployment
ENVIRONMENT=production
FRAISEQL_HOST=0.0.0.0
FRAISEQL_PORT=8000

# Optional: Logging
LOG_LEVEL=info
LOG_FORMAT=json

# Optional: Rate limiting
RATE_LIMIT_REQUESTS=1000
RATE_LIMIT_WINDOW_MINUTES=1

# Optional: NATS (if using events/subscriptions)
NATS_URL=nats://nats-server:4222
NATS_SUBJECT_PREFIX=fraiseql.events
```

Store these values in a `.env` file (kept in `.gitignore`) or in a secrets manager. Example with AWS Secrets Manager:
```python
import boto3
import json

def get_secrets(secret_name: str) -> dict:
    client = boto3.client('secretsmanager')
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response['SecretString'])

# In your main.py:
secrets = get_secrets('fraiseql/prod')
app.config.jwt_secret = secrets['JWT_SECRET']
app.config.database_url = secrets['DATABASE_URL']
```

FraiseQL listens on the host and port set by `FRAISEQL_HOST` and `FRAISEQL_PORT` (`0.0.0.0:8000` by default).
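Whichever secret source you use, it helps to fail fast when required settings are missing. A minimal startup check, as a sketch (the required-variable list and the 32-character minimum mirror the configuration above; `validate_env` and `check_or_exit` are illustrative names, not FraiseQL APIs):

```python
import os
import sys

REQUIRED_VARS = ["DATABASE_URL", "JWT_SECRET", "CORS_ORIGINS"]

def validate_env(env: dict) -> list:
    """Return human-readable problems for missing or invalid settings."""
    problems = [f"{name} is not set" for name in REQUIRED_VARS if not env.get(name)]
    secret = env.get("JWT_SECRET", "")
    if secret and len(secret) < 32:  # minimum length noted in the config above
        problems.append("JWT_SECRET is shorter than 32 characters")
    return problems

def check_or_exit() -> None:
    """Call before creating the app: exit with a clear message instead of failing mid-request."""
    issues = validate_env(dict(os.environ))
    if issues:
        print("\n".join(f"config error: {i}" for i in issues), file=sys.stderr)
        sys.exit(1)
```

Exiting at startup turns a misconfiguration into an immediate, visible deploy failure rather than intermittent runtime errors.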
FraiseQL provides health check endpoints:
```bash
# Liveness check (is the service running?)
GET /health/live
# Response: 200 OK

# Readiness check (is it ready to serve traffic?)
GET /health/ready
# Response: 200 OK if database connected, else 503 Service Unavailable
```

Configure your orchestrator to use these endpoints for health checks.
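Deployment scripts can block on the readiness endpoint before cutting traffic over. A standard-library sketch (the URL assumes the default `FRAISEQL_HOST`/`FRAISEQL_PORT` settings):

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(url: str = "http://localhost:8000/health/ready",
                     timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll the readiness endpoint until it returns 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet (connection refused) or still returning 503
        time.sleep(interval)
    return False
```

Usage: `wait_until_ready()` after starting the container, aborting the rollout if it returns `False`.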
FraiseQL responds to the SIGTERM signal and shuts down gracefully.
For production, FraiseQL automatically configures connection pooling:
```bash
# Configuration via environment variables
PGBOUNCER_MIN_POOL_SIZE=5        # Minimum connections per database
PGBOUNCER_MAX_POOL_SIZE=20       # Maximum connections per database
PGBOUNCER_CONNECTION_TIMEOUT=30  # Seconds
```

Recommended pool sizes:

| Database | Min | Max | Per Server |
|---|---|---|---|
| PostgreSQL | 5 | 20 | 100-200 |
| MySQL | 5 | 20 | 100-200 |
| SQLite | 1 | 5 | 10 |
| SQL Server | 5 | 20 | 100-200 |
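When running several instances, their combined pools must stay below the server limit in the last column. A quick sanity check, as a sketch:

```python
def pool_fits(instances: int, max_pool_size: int, server_limit: int,
              headroom: int = 10) -> bool:
    """True if every instance can fill its pool without exhausting the server.

    headroom reserves connections for superuser access, migrations, etc.
    """
    return instances * max_pool_size + headroom <= server_limit

# Example: 3 instances with PGBOUNCER_MAX_POOL_SIZE=20 against a PostgreSQL
# server allowing 100 connections: 3*20 + 10 = 70 <= 100, so it fits.
```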
For high-traffic applications (10k+ RPS), use an external connection pooler.
FraiseQL outputs JSON logs (enabled by default in production):
```json
{
  "timestamp": "2024-01-15T10:30:45Z",
  "level": "INFO",
  "message": "Request processed",
  "request_id": "abc123def456",
  "method": "POST",
  "path": "/graphql",
  "duration_ms": 145,
  "status": 200,
  "user_id": "user_123"
}
```

The log level is set via the `FRAISEQL_LOG_LEVEL` environment variable:

```bash
DEBUG  # Verbose: all database queries, middleware decisions
INFO   # Default: requests, errors, important events
WARN   # Warnings: slow queries, connection issues
ERROR  # Errors only: failures, exceptions
FATAL  # Critical issues only
```

Shipping logs to a cloud provider:

- AWS CloudWatch: use the `awslogs` driver in Docker, or the CloudWatch agent in Kubernetes
- Google Cloud Logging: automatic on Cloud Run; manual setup for VMs
- Azure Monitor: automatic on App Service; Container Insights for Kubernetes

Key metrics to monitor:

- GraphQL queries (requests/sec)
- Database
- System
- NATS (if enabled)
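For ad-hoc analysis, the JSON log format shown above is easy to filter with a short script (field names follow the example record; the threshold is arbitrary):

```python
import json

def slow_requests(lines, threshold_ms: int = 1000):
    """Yield parsed log records whose duration exceeds threshold_ms."""
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (e.g. startup banners)
        if record.get("duration_ms", 0) > threshold_ms:
            yield record

# Usage:
#   with open("app.log") as f:
#       for r in slow_requests(f):
#           print(r["path"], r["duration_ms"])
```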
FraiseQL exports Prometheus metrics on /metrics:
```yaml
# Example scrape configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'fraiseql'
    static_configs:
      - targets: ['localhost:9000']
```

Alerting rules worth starting with:

```yaml
# Alert on high error rate
- alert: HighErrorRate
  expr: rate(fraiseql_errors_total[5m]) > 0.05
  for: 5m
  annotations:
    summary: "FraiseQL error rate above 5%"

# Alert on high latency
- alert: HighLatency
  expr: fraiseql_request_duration_seconds{quantile="0.95"} > 1
  for: 5m
  annotations:
    summary: "p95 latency above 1 second"

# Alert on database connection pool exhaustion
- alert: ConnectionPoolExhausted
  expr: fraiseql_db_connections_used / fraiseql_db_connections_max > 0.9
  for: 2m
  annotations:
    summary: "Database connection pool >90% full"
```

All external traffic must use HTTPS:
```bash
# Option 1: Reverse proxy (recommended)
# Use nginx, Caddy, or a cloud load balancer with SSL termination

# Option 2: Application-level TLS
FRAISEQL_TLS_CERT=/etc/tls/cert.pem
FRAISEQL_TLS_KEY=/etc/tls/key.pem
FRAISEQL_TLS_PORT=8443
```

Restrict cross-origin requests to known origins:

```bash
# Allow specific origins only
CORS_ORIGINS=https://example.com,https://app.example.com

# Or: Allow any subdomain of example.com
# (set via code, not an environment variable)
```

Protect against abuse:
```bash
# Global rate limit
RATE_LIMIT_REQUESTS=10000
RATE_LIMIT_WINDOW_SECONDS=60

# Per-user rate limit (requires auth)
PER_USER_RATE_LIMIT=100
PER_USER_RATE_LIMIT_WINDOW_SECONDS=60

# Per-IP rate limit (unauthenticated)
PER_IP_RATE_LIMIT=50
PER_IP_RATE_LIMIT_WINDOW_SECONDS=60
```

Implement a key rotation schedule.
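The per-user and per-IP limits above follow a standard sliding-window pattern. A minimal in-memory sketch (not FraiseQL's implementation; production setups usually back this with Redis so limits are shared across instances):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window_seconds` per key."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that fell out of the window
        while q and q[0] <= now - self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject (e.g. respond 429)
        q.append(now)
        return True
```

The key would be the authenticated user ID for per-user limits or the client IP for unauthenticated traffic.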
For a stateless FraiseQL deployment:
With three instances, traffic flows from the load balancer to Pod 1, Pod 2, and Pod 3, all connecting to a shared PostgreSQL instance; read replicas can be added as needed.
Recommended when:
Limits:
Most platforms support auto-scaling based on CPU utilization, memory usage, or request rate.
```bash
# PostgreSQL: automated backups with WAL archiving
pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME > backup.sql
# Or: Use managed PostgreSQL (AWS RDS, Google Cloud SQL) for automated backups

# MySQL: binary log backups
mysqldump -h $DB_HOST -u $DB_USER -p"$DB_PASS" $DB_NAME > backup.sql
# Or: Use managed MySQL (AWS RDS, Google Cloud SQL)

# SQLite: file-based backup
cp /data/fraiseql.db /backups/fraiseql-$(date +%Y%m%d).db
# Keep 7-30 days of rolling backups
```

Backup strategies compared (RTO = recovery time objective, RPO = recovery point objective):

| Strategy | RTO | RPO | Cost |
|---|---|---|---|
| Manual backups | 4 hours | 1 day | Low |
| Automated daily | 2 hours | 1 day | Low |
| Continuous replication | 5 min | 1 min | High |
| Read replicas | 5 min | 0 min | High |
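The rolling retention suggested for file-based backups can be automated. A sketch that assumes the `fraiseql-YYYYMMDD.db` naming from the `cp` example above:

```python
import datetime
import pathlib

def backups_to_delete(backup_dir, keep_days: int = 30, today=None) -> list:
    """Return backup files older than keep_days, based on the date in the name."""
    today = today or datetime.date.today()
    cutoff = today - datetime.timedelta(days=keep_days)
    stale = []
    for path in pathlib.Path(backup_dir).glob("fraiseql-*.db"):
        stamp = path.stem.removeprefix("fraiseql-")  # e.g. "20240115"
        try:
            made = datetime.datetime.strptime(stamp, "%Y%m%d").date()
        except ValueError:
            continue  # ignore files that don't match the naming scheme
        if made < cutoff:
            stale.append(path)
    return stale

# Usage (e.g. from a daily cron job):
#   for p in backups_to_delete("/backups"):
#       p.unlink()
```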
Symptom: "Too many connections" errors
Solution: Increase PGBOUNCER_MAX_POOL_SIZE and verify that clients are closing connections properly.

Symptom: p99 latency above 5 seconds
Solution: Check query complexity, add database indexes, and analyze slow query logs.

Symptom: Memory usage grows over time
Solution: Check for unclosed connections and verify there are no circular references in the schema.

Symptom: Random "connection refused" errors
Solution: Configure connection retry logic, check database availability, and verify network connectivity.
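The connection retry logic suggested above can be a small decorator. A sketch with exponential backoff (the exception types are illustrative; catch whatever your database driver actually raises):

```python
import functools
import time

def retry(attempts: int = 3, base_delay: float = 0.5,
          exceptions: tuple = (ConnectionRefusedError, OSError)):
    """Retry a callable with exponential backoff: base_delay, 2x, 4x, ..."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == attempts - 1:
                        raise  # out of retries; surface the error
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

# Usage sketch: wrap whatever opens the database connection
@retry(attempts=3, base_delay=0.5)
def connect():
    ...  # e.g. open a database connection here
```

Adding random jitter to the delay avoids thundering-herd reconnects when many instances retry at once.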
See Troubleshooting for more common issues.
Docker Deployment
Get started quickly with Docker and Docker Compose for local and small-scale production deployments. Docker Guide
AWS Deployment
Deploy on AWS using ECS/Fargate, RDS, and CloudFormation for a fully managed AWS-native setup. AWS Guide
GCP Deployment
Use Google Cloud Run or GKE for serverless and container-based deployments on Google Cloud. GCP Guide
Scaling & Performance
Learn horizontal scaling, auto-scaling, caching strategies, and database optimization. Scaling Guide