Deployment Guides

Choose your deployment target based on your infrastructure preferences and scalability needs.

| Platform          | Best For                 | Setup Time | Scaling      | Cost        |
|-------------------|--------------------------|------------|--------------|-------------|
| Docker            | Development, small teams | 30 min     | Manual       | Low         |
| Kubernetes        | Enterprise, high traffic | 2-4 hours  | Automatic    | Medium      |
| AWS (ECS/Fargate) | AWS-native, managed      | 1-2 hours  | Auto-scaling | Medium-High |
| Google Cloud Run  | Serverless, event-driven | 30 min     | Automatic    | Pay-per-use |
| Azure App Service | Microsoft ecosystem      | 1-2 hours  | Auto-scaling | Medium      |

FraiseQL is database-first — your SQL tables, views, and functions are the source of truth. Confiture is the companion tool that manages the database lifecycle across every environment.

| Scenario                                      | Command                                           |
|-----------------------------------------------|---------------------------------------------------|
| Fresh dev or CI database                      | confiture build --env local                       |
| Apply pending schema changes to production    | fraiseql migrate up --env production              |
| Roll back last migration                      | fraiseql migrate down --env production            |
| Check what's pending                          | fraiseql migrate status --env production          |
| Seed local DB with anonymised production data | confiture sync --from prod --to local --anonymize |

Every deployment platform below runs fraiseql migrate up as a pre-start step (initContainer in Kubernetes, a one-off task in ECS, a release command in Docker Compose) before FraiseQL starts serving traffic.
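This pre-start gate can be sketched as a small wrapper: run the migration command, and only hand off to the server if it succeeds. A minimal sketch; the exact commands depend on your environment, and a real entrypoint would typically use os.execvp for the final hand-off so signals reach the server directly.

```python
import subprocess
import sys

def pre_start(migrate_cmd, serve_cmd):
    """Run migrations; start the server only if they succeed.

    Raises SystemExit with the migration's exit code on failure,
    otherwise returns the server's exit code.
    """
    result = subprocess.run(migrate_cmd)
    if result.returncode != 0:
        # Abort the deploy: serving with a stale schema is worse than downtime.
        sys.exit(result.returncode)
    return subprocess.run(serve_cmd).returncode

# Usage (illustrative):
# pre_start(["fraiseql", "migrate", "up", "--env", "production"],
#           ["fraiseql", "run"])
```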

Before deploying to production:

  • Environment variables configured (DATABASE_URL, JWT_SECRET, etc.)
  • Database schema migrations applied (fraiseql migrate up --env production)
  • Database connection pooling configured
  • Health check endpoints enabled
  • Logging configured (structured JSON logs)
  • Monitoring/alerting setup (errors, latency, CPU)
  • CORS configured properly
  • Rate limiting enabled
  • SSL/TLS certificates valid
  • API keys/secrets stored in secrets manager (not .env)
  • Database backups configured
  • Rollback plan documented
Example production environment configuration:

# Database connection
DATABASE_URL=postgresql://user:pass@host:5432/dbname
# or for other databases:
# DATABASE_URL=mysql://user:pass@host:3306/dbname
# DATABASE_URL=sqlite:///./data/fraiseql.db
# DATABASE_URL=mssql://user:pass@host:1433/dbname
# Authentication
JWT_SECRET=your-secret-key-min-32-chars
CORS_ORIGINS=https://example.com,https://api.example.com
# Deployment
ENVIRONMENT=production
FRAISEQL_HOST=0.0.0.0
FRAISEQL_PORT=8080
# Optional: Audit logging
AUDIT_LOG_LEVEL=info
# Optional: Rate limiting (auth endpoints)
RATE_LIMIT_AUTH_START=100
RATE_LIMIT_AUTH_CALLBACK=50
RATE_LIMIT_FAILED_LOGIN=5
# Optional: NATS (if using events/subscriptions)
NATS_URL=nats://nats-server:4222
NATS_SUBJECT_PREFIX=fraiseql.events
  • Docker/Docker Compose: Use .env file (in .gitignore), or secrets manager
  • Kubernetes: Use Secrets or external secrets operator
  • AWS: AWS Secrets Manager or Parameter Store
  • GCP: Google Secret Manager
  • Azure: Azure Key Vault

Example with AWS Secrets Manager: inject secrets as environment variables in your ECS task definition:

{
  "secrets": [
    { "name": "DATABASE_URL", "valueFrom": "arn:aws:secretsmanager:us-east-1:123:secret:fraiseql/prod:DATABASE_URL::" },
    { "name": "STATE_ENCRYPTION_KEY", "valueFrom": "arn:aws:secretsmanager:us-east-1:123:secret:fraiseql/prod:STATE_ENCRYPTION_KEY::" }
  ]
}

FraiseQL reads secrets from environment variables at startup. There is no Python runtime to configure.
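Because configuration is read from the environment at startup, a fail-fast pre-flight check in your own deploy tooling catches misconfiguration before traffic arrives. A minimal sketch, using the variable names from the example above:

```python
import os

REQUIRED = ["DATABASE_URL", "JWT_SECRET"]

def check_env(environ=os.environ):
    """Fail fast if required settings are missing or obviously weak."""
    missing = [name for name in REQUIRED if not environ.get(name)]
    if missing:
        raise RuntimeError(f"missing required settings: {', '.join(missing)}")
    if len(environ["JWT_SECRET"]) < 32:
        # Mirrors the "min 32 chars" requirement from the example above.
        raise RuntimeError("JWT_SECRET must be at least 32 characters")
```

Run this (or an equivalent shell check) as part of the pre-start step so a bad rollout fails loudly instead of serving errors.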

FraiseQL listens on port 8080 by default (configurable via FRAISEQL_PORT) and serves the GraphQL endpoint there.

FraiseQL provides a single health check endpoint:

# Health check (service running and database connected)
GET /health
# Response: 200 OK with {"status":"healthy"}
# Response: 503 Service Unavailable with {"status":"unhealthy"} if database unreachable

Configure your orchestrator to use /health for both liveness and readiness checks.

On receiving SIGTERM, FraiseQL shuts down gracefully:

  1. Stops accepting new requests
  2. Waits up to 30 seconds for in-flight requests to complete
  3. Closes database connections
  4. Exits
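The same drain sequence can be sketched in a few lines. This is illustrative only, not FraiseQL's actual internals, but it shows the shape your orchestrator relies on (terminationGracePeriodSeconds in Kubernetes must exceed the 30-second drain window):

```python
import signal
import time

class GracefulServer:
    """Sketch of the SIGTERM drain sequence described above."""

    def __init__(self, drain_timeout=30.0):
        self.accepting = True
        self.in_flight = 0
        self.db_closed = False
        self.drain_timeout = drain_timeout
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame):
        self.accepting = False              # 1. stop accepting new requests

    def shutdown(self):
        deadline = time.monotonic() + self.drain_timeout
        while self.in_flight > 0 and time.monotonic() < deadline:
            time.sleep(0.01)                # 2. wait for in-flight requests
        self.db_closed = True               # 3. close database connections
                                            # 4. process exits
```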

For production, configure connection pooling in fraiseql.toml:

[database]
url = "postgresql://user:pass@host:5432/dbname"
pool_min = 5 # Minimum connections per instance
pool_max = 20 # Maximum connections per instance

Suggested pool sizes per instance, and typical server-wide connection limits:

| Database   | Pool Min | Pool Max | Server Limit |
|------------|----------|----------|--------------|
| PostgreSQL | 5        | 20       | 100-200      |
| MySQL      | 5        | 20       | 100-200      |
| SQLite     | 1        | 5        | 10           |
| SQL Server | 5        | 20       | 100-200      |

For high-traffic applications (10k+ RPS), use an external connection pooler in front of your database:

  • PostgreSQL: PgBouncer
  • MySQL: ProxySQL or MaxScale
  • SQL Server: SQL Server connection pooling (built-in)
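The server-wide limit is what caps horizontal scale: every instance holds up to pool_max connections, so instances × pool_max must stay below what the database server allows. A quick sanity check (the `reserved` headroom value is an assumption, not a FraiseQL default):

```python
def max_instances(server_limit, pool_max, reserved=10):
    """How many app instances a database server can support.

    `reserved` leaves headroom for admin tools, migrations, and replicas.
    """
    return (server_limit - reserved) // pool_max

# With pool_max = 20 (as in the TOML example above) and a PostgreSQL
# server allowing 200 connections:
# max_instances(200, 20) -> 9
```

If the answer is smaller than the instance count your traffic needs, lower pool_max or add an external pooler rather than raising the server limit blindly.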

FraiseQL outputs JSON logs (enabled by default in production):

{
  "timestamp": "2024-01-15T10:30:45Z",
  "level": "INFO",
  "message": "Request processed",
  "request_id": "abc123def456",
  "method": "POST",
  "path": "/graphql",
  "duration_ms": 145,
  "status": 200,
  "user_id": "user_123"
}
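One advantage of structured logs is that ad-hoc analysis is trivial. For example, a short script to pull slow requests out of a log stream (field names taken from the example above):

```python
import json

def slow_requests(lines, threshold_ms=500):
    """Return (request_id, duration_ms) for requests slower than the threshold."""
    hits = []
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (startup banners, etc.)
        if entry.get("duration_ms", 0) > threshold_ms:
            hits.append((entry.get("request_id"), entry["duration_ms"]))
    return hits
```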
Log level configuration:

# Override via environment variable (from fraiseql.toml.example):
AUDIT_LOG_LEVEL=debug
# Or configure in fraiseql.toml:
# [security.enterprise]
# log_level = "info"   # Options: "debug", "info", "warn"

Available levels:

debug   # Verbose: all database queries, middleware decisions
info    # Default: requests, errors, important events
warn    # Warnings: slow queries, connection issues
To ship logs to CloudWatch:

  • In Docker: use the awslogs log driver
  • In Kubernetes: use the CloudWatch agent

Monitor these key metrics:

GraphQL Queries

  • Requests per second
  • Success rate
  • Error rate
  • p50, p95, p99 latency
  • Query complexity distribution

Database

  • Connection pool utilization
  • Query execution time
  • Transaction time
  • Slow queries (>500ms)

System

  • CPU usage
  • Memory usage
  • Disk space
  • Network I/O

NATS (if enabled)

  • Messages published/sec
  • Messages consumed/sec
  • Delivery errors
  • Consumer lag

FraiseQL exports Prometheus metrics on /metrics:

# Example scrape configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'fraiseql'
    static_configs:
      - targets: ['localhost:8080']

# Alert on high error rate
- alert: HighErrorRate
  expr: rate(fraiseql_errors_total[5m]) > 0.05
  for: 5m
  annotations:
    summary: "FraiseQL error rate above 5%"

# Alert on high latency
- alert: HighLatency
  expr: fraiseql_request_duration_seconds{quantile="0.95"} > 1
  for: 5m
  annotations:
    summary: "p95 latency above 1 second"

# Alert on database connection pool exhaustion
- alert: ConnectionPoolExhausted
  expr: fraiseql_db_connections_used / fraiseql_db_connections_max > 0.9
  for: 2m
  annotations:
    summary: "Database connection pool >90% full"
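Prometheus's rate() is just the counter delta divided by the observation window, which is easy to replicate when eyeballing raw counter samples by hand:

```python
def counter_rate(earlier, later, window_secs):
    """Per-second rate from two samples of a monotonically increasing counter."""
    return (later - earlier) / window_secs

# 90 new errors over a 5-minute (300 s) window -> 0.3 errors/sec,
# which would trip the HighErrorRate rule above (threshold 0.05/sec).
```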

All external traffic must use HTTPS:

# Recommended: Reverse proxy with SSL termination
# Use nginx, Caddy, or a cloud load balancer (AWS ALB, GCP GLB, Azure Application Gateway)
# FraiseQL runs on plain HTTP on port 8080 inside the network
Restrict CORS to known origins:
# Allow specific origins only
CORS_ORIGINS=https://example.com,https://app.example.com
# Or: Allow any subdomain of example.com
# (set via code, not env var)

Protect against abuse via fraiseql.toml or environment variable overrides:

[security.rate_limiting]
enabled = true
# Per-IP limits on auth endpoints
auth_start_max_requests = 100
auth_start_window_secs = 60
auth_callback_max_requests = 50
auth_callback_window_secs = 60
# Failed login attempt limiting
failed_login_max_requests = 5
failed_login_window_secs = 3600

Environment variable overrides (from fraiseql.toml.example):

RATE_LIMIT_AUTH_START=100 # Override per-IP auth start limit
RATE_LIMIT_AUTH_CALLBACK=50 # Override per-IP auth callback limit
RATE_LIMIT_FAILED_LOGIN=5 # Override failed login limit
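To make the semantics of these settings concrete, here is a fixed-window, per-key limiter matching the failed-login defaults (5 attempts per 3600 s). This is a sketch of one common algorithm; FraiseQL's internal implementation may differ:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Per-key fixed-window rate limiter (e.g. 5 failed logins per hour per IP)."""

    def __init__(self, max_requests, window_secs, clock=time.monotonic):
        self.max_requests = max_requests
        self.window_secs = window_secs
        self.clock = clock
        # key -> (window_start, count)
        self.windows = defaultdict(lambda: (0.0, 0))

    def allow(self, key):
        now = self.clock()
        start, count = self.windows[key]
        if now - start >= self.window_secs:
            start, count = now, 0       # previous window expired; reset
        if count >= self.max_requests:
            return False                # over the limit: reject
        self.windows[key] = (start, count + 1)
        return True
```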

Implement key rotation schedule:

  1. Generate new API key
  2. Update client configuration (staged rollout)
  3. Wait for all clients to switch
  4. Revoke old key
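During steps 2-3 both keys must validate, so the verifier has to accept a set of active keys rather than a single one. A hypothetical helper (not a FraiseQL API) that makes the overlap explicit:

```python
import hmac

class KeyRing:
    """Accepts any currently-active API key, enabling zero-downtime rotation."""

    def __init__(self, keys):
        self.active = set(keys)

    def add(self, key):
        self.active.add(key)        # steps 1-2: new key goes live alongside old

    def revoke(self, key):
        self.active.discard(key)    # step 4: old key removed after rollout

    def verify(self, presented):
        # compare_digest avoids leaking key contents via timing
        return any(hmac.compare_digest(presented, k) for k in self.active)
```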

For stateless FraiseQL deployment:

  1. Deploy multiple instances
  2. Use load balancer (distribute traffic)
  3. All instances connect to same database
  4. Share NATS connection for events

With 3 instances, traffic flows from the Load Balancer to Pod1, Pod2, and Pod3, all connecting to a shared PostgreSQL instance. Read replicas can be added optionally.

Recommended when:

  • Single instance can handle load (< 1000 RPS)
  • Simplifies deployment
  • Reduces operational overhead

Limits:

  • Database connection pool limits
  • Memory limits for request batching
  • Network bandwidth

Most platforms support auto-scaling based on:

  • CPU usage (target: 50-70%)
  • Memory usage (target: 60-80%)
  • Request count (scale on RPS threshold)
  • Custom metrics (database connection pool, query latency)
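Kubernetes' Horizontal Pod Autoscaler, for example, scales on the ratio of current to target utilization: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to a configured range. In code:

```python
import math

def desired_replicas(current_replicas, current_value, target_value,
                     min_replicas=2, max_replicas=10):
    """HPA-style scaling decision, clamped to a configured replica range."""
    desired = math.ceil(current_replicas * current_value / target_value)
    return max(min_replicas, min(max_replicas, desired))

# 3 replicas at 90% CPU with a 60% target -> ceil(3 * 90/60) = 5
```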
Database backups:

# Manual logical backup
pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME > backup.sql
# For point-in-time recovery, combine base backups with WAL archiving
# Or: Use managed PostgreSQL (AWS RDS, Google Cloud SQL) for automated backups
| Strategy               | RTO     | RPO   | Cost |
|------------------------|---------|-------|------|
| Manual backups         | 4 hours | 1 day | Low  |
| Automated daily        | 2 hours | 1 day | Low  |
| Continuous replication | 5 min   | 1 min | High |
| Read replicas          | 5 min   | 0 min | High |

Symptom: “Too many connections” errors.
Solution: Increase pool_max in the [database] TOML config; verify clients close connections properly.

Symptom: p99 latency > 5 seconds.
Solution: Check query complexity, add database indexes, analyze slow query logs.

Symptom: Memory usage grows over time.
Solution: Check for unclosed connections; verify no circular references in the schema.

Symptom: Random “connection refused” errors.
Solution: Configure connection retry logic, check database availability, verify network connectivity.
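The usual fix for transient “connection refused” errors is a retry wrapper with exponential backoff and jitter. A generic sketch (not FraiseQL-specific; wrap your own connect call):

```python
import random
import time

def with_retries(connect, attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call `connect`, retrying on ConnectionError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise               # out of retries: surface the error
            delay = base_delay * (2 ** attempt)
            # Jitter spreads retries out and avoids a thundering herd.
            sleep(delay + random.uniform(0, delay))
```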

See Troubleshooting for more common issues.

fraiseql run serves GraphQL, REST, and gRPC from a single binary on a single port — no separate processes or containers per transport. Note: gRPC requires building FraiseQL with the grpc-transport Cargo feature — not included in the official Docker image. See gRPC Transport.

gRPC requires HTTP/2. Ensure your load balancer or reverse proxy supports HTTP/2 pass-through:

  • nginx: listen 443 ssl http2;
  • Kubernetes: annotate ingress with nginx.ingress.kubernetes.io/backend-protocol: "GRPC" or use an Envoy/Linkerd service mesh
  • AWS ALB: enable HTTP/2 on the target group
  • GCP Cloud Run: HTTP/2 supported natively
  • Azure Application Gateway v2: enable HTTP/2 on the listener

REST and GraphQL work with any HTTP/1.1 or HTTP/2 configuration.

Docker Deployment

Get started quickly with Docker and Docker Compose for local and small-scale production deployments. Docker Guide

AWS Deployment

Deploy on AWS using ECS/Fargate, RDS, and CloudFormation for a fully managed AWS-native setup. AWS Guide

GCP Deployment

Use Google Cloud Run or GKE for serverless and container-based deployments on Google Cloud. GCP Guide

Scaling & Performance

Learn horizontal scaling, auto-scaling, caching strategies, and database optimization. Scaling Guide