
TOML Configuration

FraiseQL uses a single fraiseql.toml file for all project configuration. Secrets (database passwords, JWT keys) stay in environment variables — the TOML file references them by name.

| Section | Status |
| --- | --- |
| `[project]` | Stable |
| `[database]` | Stable |
| `[server]` | Stable |
| `[server.cors]` | Stable |
| `[server.tls]` | Stable |
| `[fraiseql]` | Stable |
| `[fraiseql.apq]` | Stable |
| `[fraiseql.caching]` | Stable |
| `[fraiseql.mcp]` | Stable |
| `[query_defaults]` | Stable |
| `[security]` | Stable |
| `[security.enterprise]` | Stable |
| `[security.error_sanitization]` | Stable |
| `[security.rate_limiting]` | Stable |
| `[security.state_encryption]` | Stable |
| `[security.pkce]` | Stable |
| `[security.api_keys]` | Stable |
| `[security.token_revocation]` | Stable |
| `[security.trusted_documents]` | Stable |
| `[validation]` | Stable |
| `[subscriptions]` | Stable |
| `[debug]` | Stable |
| `[federation]` | Stable |
| `[observers]` | Stable |
```
my-project/
├── fraiseql.toml
├── schema.json
└── db/
```

Generated by `fraiseql init`. The only required fields are the database URL reference and the schema path.

```toml
[project]
name = "my-api"
version = "0.1.0"
database_target = "postgresql"

[database]
url = "${DATABASE_URL}"

[fraiseql]
schema_file = "schema.json"
output_file = "schema.compiled.json"
```

Run with:

```sh
DATABASE_URL="postgresql://localhost:5432/mydb" fraiseql run
```

Every TOML value can be overridden at runtime with a CLI flag or environment variable. CLI flags take precedence over environment variables, which take precedence over TOML.

```sh
# Override port at runtime without changing fraiseql.toml
fraiseql run --port 9000

# Environment variable override (FRAISEQL_ prefix, __ for nesting)
FRAISEQL_SERVER__PORT=9000 fraiseql run
```
| TOML key | Environment variable |
| --- | --- |
| `server.port` | `FRAISEQL_SERVER__PORT` |
| `server.host` | `FRAISEQL_SERVER__HOST` |
| `database.url` | `DATABASE_URL` or `FRAISEQL_DATABASE__URL` |
| `database.pool_max` | `FRAISEQL_DATABASE__POOL_MAX` |
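The `__` nesting convention maps mechanically onto dotted TOML keys: strip the prefix, split on double underscores, lowercase each segment. A minimal sketch of that mapping (illustrative only, not FraiseQL's actual resolver):

```python
def env_to_toml_key(name: str) -> str:
    """Map a FRAISEQL_-prefixed env var to its dotted TOML key."""
    prefix = "FRAISEQL_"
    if not name.startswith(prefix):
        raise ValueError(f"{name} does not use the {prefix} prefix")
    # Double underscores separate nesting levels; single underscores
    # stay inside a segment (POOL_MAX -> pool_max).
    return ".".join(part.lower() for part in name[len(prefix):].split("__"))

print(env_to_toml_key("FRAISEQL_SERVER__PORT"))        # server.port
print(env_to_toml_key("FRAISEQL_DATABASE__POOL_MAX"))  # database.pool_max
```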

Project metadata used during compilation.

```toml
[project]
name = "my-api"
version = "1.0.0"
description = "My GraphQL API"
database_target = "postgresql"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `name` | string | `"my-fraiseql-app"` | Project identifier |
| `version` | string | `"1.0.0"` | Semantic version |
| `description` | string | — | Human-readable description |
| `database_target` | string | — | `postgresql`, `mysql`, `sqlite`, `sqlserver` |

Database connection and connection pool settings. Never put credentials directly in this file — use ${ENV_VAR} references.

```toml
[database]
url = "${DATABASE_URL}"
pool_min = 2
pool_max = 20
connect_timeout_ms = 5000
idle_timeout_ms = 600000
ssl_mode = "prefer"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `url` | string | — | Connection URL or env var reference |
| `pool_min` | integer | `2` | Minimum pool connections |
| `pool_max` | integer | `20` | Maximum pool connections |
| `connect_timeout_ms` | integer | `5000` | Connection acquisition timeout in ms |
| `idle_timeout_ms` | integer | `600000` | Idle connection lifetime in ms |
| `ssl_mode` | string | `"prefer"` | `disable`, `allow`, `prefer`, or `require` |

Env var reference syntax:

```toml
url = "${DATABASE_URL}"                              # Required env var
url = "${DATABASE_URL:-postgresql://localhost/mydb}" # With fallback
```
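The `${VAR}` and `${VAR:-fallback}` forms behave like shell parameter expansion: a bare reference fails fast when the variable is unset, while the fallback form substitutes its default. A rough sketch of that behavior (not FraiseQL's parser):

```python
import re

_REF = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def expand(value: str, env: dict) -> str:
    """Expand ${VAR} and ${VAR:-fallback} references against env."""
    def sub(match: re.Match) -> str:
        name, fallback = match.group(1), match.group(2)
        if name in env:
            return env[name]
        if fallback is not None:
            return fallback
        raise KeyError(f"required environment variable {name} is not set")
    return _REF.sub(sub, value)

env = {"DATABASE_URL": "postgresql://db.internal/prod"}
print(expand("${DATABASE_URL}", env))                          # postgresql://db.internal/prod
print(expand("${MISSING:-postgresql://localhost/mydb}", env))  # postgresql://localhost/mydb
```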

HTTP server binding and request handling.

```toml
[server]
host = "0.0.0.0"
port = 8080
request_timeout_ms = 30000
keep_alive_secs = 75
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `host` | string | `"0.0.0.0"` | Bind address |
| `port` | integer | `8080` | Listen port |
| `request_timeout_ms` | integer | `30000` | Request timeout in ms |
| `keep_alive_secs` | integer | `75` | TCP keep-alive in seconds |

Cross-Origin Resource Sharing. Required when your frontend is served from a different origin.

```toml
[server.cors]
origins = ["https://app.example.com", "http://localhost:3000"]
credentials = true
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `origins` | array | `[]` | Allowed origins (empty = all origins) |
| `credentials` | bool | `false` | Allow cookies and auth headers |

TLS termination. For most deployments, terminate TLS at the load balancer and run fraiseql on plain HTTP internally.

```toml
[server.tls]
enabled = true
cert_file = "/etc/ssl/certs/server.pem"
key_file = "/etc/ssl/private/server.key"
min_version = "1.2"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable TLS |
| `cert_file` | string | — | Path to certificate file (PEM) |
| `key_file` | string | — | Path to private key file (PEM) |
| `min_version` | string | `"1.2"` | Minimum TLS version: `"1.2"` or `"1.3"` |

Compilation settings.

```toml
[fraiseql]
schema_file = "schema.json"
output_file = "schema.compiled.json"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `schema_file` | string | `"schema.json"` | Path to the schema definition |
| `output_file` | string | `"schema.compiled.json"` | Path for the compiled output |

Automatic Persisted Queries — cache query documents by hash on the server, reducing payload size for repeated operations.

```toml
[fraiseql.apq]
enabled = true
cache_size = 1000 # Maximum number of persisted queries to cache (LRU)
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable APQ support |
| `cache_size` | integer | `1000` | LRU cache capacity for persisted query documents |

Response caching for query results. Reduces database load for read-heavy workloads.

```toml
[fraiseql.caching]
enabled = true
backend = "redis"          # "memory" or "redis"
redis_url = "${REDIS_URL}" # required when backend = "redis"
max_memory_entries = 1000  # capacity when backend = "memory"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable response caching |
| `backend` | string | `"memory"` | Cache backend: `"memory"` (single-instance) or `"redis"` (distributed) |
| `redis_url` | string | — | Redis connection URL (required when `backend = "redis"`) |
| `max_memory_entries` | integer | `1000` | LRU capacity for the in-memory backend |

Per-query TTL is set via `cache_ttl_seconds=` in the `@fraiseql.query` decorator. See Caching.

`[fraiseql.trusted_documents]` is an alias for `[security.trusted_documents]` — both namespaces are accepted. Prefer `[security.trusted_documents]` for consistency. See Security.

Model Context Protocol server — exposes FraiseQL queries and mutations as MCP tools for AI agents (Claude Desktop, etc.).

```toml
[fraiseql.mcp]
enabled = true
transport = "http"  # "http" | "stdio" | "both"
path = "/mcp"       # HTTP endpoint path
require_auth = true # Require JWT/API key for MCP requests

# Restrict which operations are exposed
include = [] # Whitelist by operation name (empty = all)
exclude = [] # Blacklist by operation name
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable the MCP server |
| `transport` | string | `"http"` | Transport: `"http"` (in-process endpoint), `"stdio"` (for CLI agents), or `"both"` |
| `path` | string | `"/mcp"` | HTTP path for the MCP endpoint |
| `require_auth` | bool | `true` | Require the same JWT/API key as the GraphQL endpoint |
| `include` | array | `[]` | Whitelist of operation names to expose (empty = all) |
| `exclude` | array | `[]` | Blacklist of operation names to hide from MCP clients |

See MCP Server for setup with Claude Desktop.


Project-wide defaults for which auto-parameters (`where`, `order_by`, `limit`, `offset`) are enabled on list queries. Per-query `auto_params=` overrides remain possible and are partial — specify only the flags that differ from the project default.

Priority (lowest → highest):

  1. Built-in default: all four params enabled
  2. `[query_defaults]` in `fraiseql.toml`
  3. Per-query `auto_params={"limit": True}` in the Python decorator
```toml
[query_defaults]
where = true    # default: true
order_by = true # default: true
limit = true    # default: true
offset = true   # default: true
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `where` | bool | `true` | Enable `where` filter argument on list queries |
| `order_by` | bool | `true` | Enable `orderBy` sort argument |
| `limit` | bool | `true` | Enable `limit` argument |
| `offset` | bool | `true` | Enable `offset` argument |

Example — Relay-first project (disable offset pagination globally):

```toml
[query_defaults]
limit = false
offset = false
```

```python
# Only this admin query re-enables limit/offset
@fraiseql.query(sql_source="v_admin_log", auto_params={"limit": True, "offset": True})
def admin_logs() -> list[AdminLog]: ...
```

Compile-time warnings:

  • limit = false on a non-relay list query → unbounded table scan warning
  • limit = true + order_by = false → non-deterministic pagination warning

Authorization policies enforced at runtime. JWT verification is configured via environment variables — there is no [auth] TOML section.

```toml
[security]
default_policy = "authenticated" # or "public"

[[security.rules]]
name = "owner_only"
rule = "user.id == object.owner_id"
description = "User can only access their own data"
cacheable = true
cache_ttl_seconds = 300

[[security.policies]]
name = "admin_only"
type = "rbac"    # rbac | abac | custom | hybrid
roles = ["admin"]
strategy = "any" # any | all | exactly
cache_ttl_seconds = 600

[[security.field_auth]]
type_name = "User"
field_name = "email"
policy = "admin_only"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `default_policy` | string | `"authenticated"` | Default access: `"authenticated"` or `"public"` |

Enterprise feature flags for audit logging and legacy rate limiting.

```toml
[security.enterprise]
rate_limiting_enabled = false
auth_endpoint_max_requests = 100
auth_endpoint_window_seconds = 60
audit_logging_enabled = false
audit_log_level = "info"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `rate_limiting_enabled` | bool | `false` | Enable built-in rate limiting |
| `auth_endpoint_max_requests` | integer | `100` | Max auth endpoint requests per window |
| `auth_endpoint_window_seconds` | integer | `60` | Auth rate limit window in seconds |
| `audit_logging_enabled` | bool | `false` | Enable audit logging |
| `audit_log_level` | string | `"info"` | Audit log verbosity: `"debug"`, `"info"`, `"warn"` |

Controls what error detail is returned to clients. Disabled by default — enable in production to prevent SQL errors and stack traces from reaching clients.

```toml
[security.error_sanitization]
enabled = false                     # opt-in
hide_implementation_details = true  # maximally safe when enabled
sanitize_database_errors = true
custom_error_message = "An internal error occurred"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable error sanitization (opt-in) |
| `hide_implementation_details` | bool | `true` | Strip stack traces and implementation paths |
| `sanitize_database_errors` | bool | `true` | Replace DB error messages with generic text |
| `custom_error_message` | string | `"An internal error occurred"` | Message sent to clients when sanitizing |

ValidationError, Forbidden, and NotFound codes pass through unchanged — clients need these to act on errors. Only InternalServerError and DatabaseError codes are sanitized.

Token bucket rate limiting applied globally, plus per-endpoint limits for auth routes. In-memory per instance (no cross-instance coordination).

```toml
[security.rate_limiting]
enabled = false # opt-in
requests_per_second = 100
requests_per_second_per_user = 1000 # default: 10× requests_per_second
burst_size = 200
trust_proxy_headers = false # set true only when behind a trusted reverse proxy
auth_start_max_requests = 5
auth_start_window_secs = 60
auth_callback_max_requests = 10
auth_callback_window_secs = 60
auth_refresh_max_requests = 20
auth_refresh_window_secs = 300
auth_logout_max_requests = 30
auth_logout_window_secs = 60
failed_login_max_attempts = 10
failed_login_lockout_secs = 900
# redis_url = "${REDIS_URL}" # requires redis-rate-limiting Cargo feature
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable rate limiting (opt-in) |
| `requests_per_second` | integer | `100` | Global token bucket refill rate |
| `requests_per_second_per_user` | integer | `requests_per_second` × 10 | Per-authenticated-user limit |
| `burst_size` | integer | `200` | Maximum burst above steady-state rate |
| `trust_proxy_headers` | bool | `false` | Read `X-Real-IP` / `X-Forwarded-For` for the client IP. Enable only when behind a trusted reverse proxy — spoofable otherwise |
| `redis_url` | string | — | Redis connection URL for distributed rate limiting (requires `redis-rate-limiting` feature) |
| `auth_start_max_requests` | integer | `5` | Max `/auth/start` requests per window |
| `auth_start_window_secs` | integer | `60` | Window for `/auth/start` limit |
| `auth_callback_max_requests` | integer | `10` | Max `/auth/callback` requests per window |
| `auth_callback_window_secs` | integer | `60` | Window for `/auth/callback` limit |
| `auth_refresh_max_requests` | integer | `20` | Max `/auth/refresh` requests per window |
| `auth_refresh_window_secs` | integer | `300` | Window for `/auth/refresh` limit |
| `auth_logout_max_requests` | integer | `30` | Max `/auth/logout` requests per window |
| `auth_logout_window_secs` | integer | `60` | Window for `/auth/logout` limit |
| `failed_login_max_attempts` | integer | `10` | Failed auth attempts before lockout |
| `failed_login_lockout_secs` | integer | `900` | Lockout duration in seconds |
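The global limiter is a token bucket: tokens refill at `requests_per_second` and accumulate up to `burst_size`, so short bursts above the steady-state rate are tolerated. A compact sketch of those semantics (illustrative, not FraiseQL's limiter):

```python
class TokenBucket:
    """Token bucket: refill at `rate` tokens/sec, cap at `burst` tokens."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)  # start full: an initial burst is allowed
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, then spend one token if possible.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=100, burst=200)
# A burst of 200 requests at t=0 is allowed; the 201st is rejected...
print(sum(bucket.allow(0.0) for _ in range(201)))  # 200
# ...but after 10 ms one token has refilled (100/s * 0.01 s).
print(bucket.allow(0.01))  # True
```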

AEAD encryption for OAuth state blobs stored between /auth/start and /auth/callback. Requires a 32-byte key as a 64-character hex string.

```toml
[security.state_encryption]
enabled = false                 # opt-in
algorithm = "chacha20-poly1305" # or "aes-256-gcm"
key_source = "env"
key_env = "STATE_ENCRYPTION_KEY"
```

Generate a key: `openssl rand -hex 32`

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable state encryption (opt-in) |
| `algorithm` | string | `"chacha20-poly1305"` | AEAD cipher: `"chacha20-poly1305"` or `"aes-256-gcm"` |
| `key_source` | string | `"env"` | Key source — only `"env"` supported |
| `key_env` | string | `"STATE_ENCRYPTION_KEY"` | Environment variable with the 64-char hex key |
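The key the config expects is 32 random bytes encoded as 64 hex characters. A Python equivalent of the `openssl rand -hex 32` step, plus the kind of validation a server would perform on load (the exact error handling here is an assumption):

```python
import secrets

def generate_key_hex() -> str:
    """Python equivalent of `openssl rand -hex 32`."""
    return secrets.token_hex(32)  # 32 random bytes -> 64 hex characters

def load_key(hex_key: str) -> bytes:
    """Decode the 64-character hex key and check its length (assumed check)."""
    key = bytes.fromhex(hex_key)
    if len(key) != 32:
        raise ValueError(f"expected a 32-byte key, got {len(key)} bytes")
    return key

hex_key = generate_key_hex()
print(len(hex_key))            # 64
print(len(load_key(hex_key)))  # 32
```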

PKCE (Proof Key for Code Exchange) for the OAuth `/auth/start` → `/auth/callback` flow.

```toml
[security.pkce]
enabled = false # opt-in; requires state_encryption.enabled = true
code_challenge_method = "S256" # or "plain" (warns at startup in non-dev environments)
state_ttl_secs = 600
# redis_url = "${REDIS_URL}" # requires redis-pkce Cargo feature
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable PKCE (opt-in) |
| `code_challenge_method` | string | `"S256"` | Challenge method: `"S256"` (recommended) or `"plain"` |
| `state_ttl_secs` | integer | `600` | OAuth state token lifetime in seconds |
| `redis_url` | string | — | Redis connection URL for distributed PKCE state (requires `redis-pkce` feature) |

API key authentication for service-to-service and CI/CD access. Keys are stored hashed — the plaintext is returned once at creation and never stored.

```toml
[security.api_keys]
enabled = true
header = "X-API-Key"      # HTTP header name
hash_algorithm = "sha256" # "sha256" (fast) or "argon2" (production)
storage = "postgres"      # "postgres" or "env" (static keys, CI only)

# Static keys — testing and CI only. Never in production.
[[security.api_keys.static]]
key_hash = "sha256:abc123..." # echo -n "secret" | sha256sum
scopes = ["read:*"]
name = "ci-readonly"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable API key authentication |
| `header` | string | `"X-API-Key"` | HTTP header to read the key from |
| `hash_algorithm` | string | `"sha256"` | Hashing algorithm: `"sha256"` or `"argon2"` |
| `storage` | string | `"postgres"` | Key storage backend: `"postgres"` (production) or `"env"` (static, CI only) |

For production, store hashed keys in the fraiseql_api_keys table. See Authentication for the table schema and mutation patterns.
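For the static-key path, the `key_hash` value is `sha256:` plus the hex digest of the plaintext. A sketch of producing and verifying such an entry (hypothetical helper names; a constant-time comparison is good practice when checking presented keys):

```python
import hashlib
import hmac

def hash_api_key(plaintext: str) -> str:
    """Produce the sha256:<hex> form used by static key entries
    (same digest as: echo -n "plaintext" | sha256sum)."""
    return "sha256:" + hashlib.sha256(plaintext.encode("utf-8")).hexdigest()

def verify_api_key(presented: str, stored_hash: str) -> bool:
    """Constant-time comparison of the presented key against the stored hash."""
    return hmac.compare_digest(hash_api_key(presented), stored_hash)

stored = hash_api_key("ci-secret")
print(verify_api_key("ci-secret", stored))  # True
print(verify_api_key("wrong-key", stored))  # False
```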

Revoke JWTs before their exp claim expires — for logout, key rotation, or security incidents.

```toml
[security.token_revocation]
enabled = true
backend = "redis"  # "redis" or "postgres"
require_jti = true # reject JWTs without a jti claim
fail_open = false  # if store unreachable, deny (not allow) the request
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable token revocation |
| `backend` | string | — | Storage backend: `"redis"` (recommended) or `"postgres"` |
| `require_jti` | bool | `false` | Reject tokens that have no `jti` claim |
| `fail_open` | bool | `false` | When `false` (default), deny requests if the revocation store is unreachable. Set `true` to allow through on store failure |

When enabled, two endpoints become available:

  • POST /auth/revoke — revoke the caller’s own token (self-logout)
  • POST /auth/revoke-all — revoke all tokens for a user (requires admin:revoke scope)

Revoked JTIs are stored until the token’s exp expires — no manual cleanup needed.

Query allowlisting — only permit GraphQL operations present in a pre-approved manifest. Useful for hardening production APIs against arbitrary query execution.

```toml
[security.trusted_documents]
enabled = true
mode = "strict" # "strict" | "permissive"
manifest_path = "./trusted-documents.json"
reload_interval_secs = 300 # 0 = no hot-reload
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable trusted document enforcement |
| `mode` | string | `"permissive"` | `"strict"` rejects unknown operations; `"permissive"` logs them but allows through |
| `manifest_path` | string | — | Path to local JSON manifest (hash → query) |
| `manifest_url` | string | — | URL to fetch the manifest from at startup |
| `reload_interval_secs` | integer | `0` | How often to poll for manifest changes (0 = no reload) |

Query depth and complexity limits enforced at parse time, before execution. Queries exceeding either limit receive a validation error — no database round-trip occurs.

```toml
[validation]
max_query_depth = 10
max_query_complexity = 100
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `max_query_depth` | integer | `10` | Maximum allowed field nesting depth |
| `max_query_complexity` | integer | `100` | Maximum allowed query complexity score |

Complexity is calculated by summing field weights (lists count more than scalars). Adjust thresholds based on your schema — a schema with many list fields may need a higher limit for legitimate queries.
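As a mental model, the scoring can be pictured as a recursive sum in which list fields multiply the cost of their children. The weight below is invented purely for illustration; FraiseQL's actual scoring rules may differ:

```python
def complexity(field: dict, list_weight: int = 10) -> int:
    """Toy complexity score: leaves cost 1, a parent costs the sum of its
    children, and list fields multiply that by an assumed weight."""
    base = sum(complexity(c, list_weight) for c in field.get("selections", [])) or 1
    return base * (list_weight if field.get("is_list") else 1)

# Roughly: { users { id name } } where `users` is a list field
query = {"is_list": True, "selections": [{"name": "id"}, {"name": "name"}]}
print(complexity(query))  # 20: two scalar children, times 10 for the list
```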


WebSocket subscription settings. FraiseQL supports both graphql-transport-ws (modern) and graphql-ws (Apollo legacy) — protocol is negotiated from the Sec-WebSocket-Protocol header automatically.

```toml
[subscriptions]
max_subscriptions_per_connection = 10

[subscriptions.hooks]
on_connect = "https://auth.example.com/ws/connect"
on_subscribe = "https://auth.example.com/ws/subscribe"
on_disconnect = "https://auth.example.com/ws/disconnect"
timeout_ms = 500
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `max_subscriptions_per_connection` | integer | unlimited | Maximum concurrent subscriptions per WebSocket connection |

`[subscriptions.hooks]` — webhook URLs invoked during the connection lifecycle:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `on_connect` | string | — | URL to POST on `connection_init`. Fail-closed: connection rejected if hook returns non-2xx |
| `on_subscribe` | string | — | URL to POST before a subscription is registered. Fail-closed |
| `on_disconnect` | string | — | URL to POST on WebSocket close. Fire-and-forget (errors ignored) |
| `timeout_ms` | integer | `500` | Timeout for fail-closed hooks (`on_connect`, `on_subscribe`) |

Development and diagnostics features. All debug capabilities are disabled by default and gated behind a master enabled switch. Never enable in production.

```toml
[debug]
enabled = true          # master switch — required for all other flags
database_explain = true # include EXPLAIN output in /api/v1/query/explain responses
expose_sql = true       # include generated SQL in explain responses (default when enabled)
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Master switch. All debug features require this to be `true` |
| `database_explain` | bool | `false` | Run EXPLAIN against the database and include the query plan in `/api/v1/query/explain` responses |
| `expose_sql` | bool | `true` | Include generated SQL in explain responses |

Explain endpoint:

```http
POST /api/v1/query/explain
Content-Type: application/json

{"query": "{ users { id name } }"}
```

Response includes the generated SQL and (when database_explain = true) the Postgres query plan. Use this to diagnose slow queries without enabling general query logging.


Apollo Federation v2 support and multi-database federation settings.

```toml
[federation]
enabled = true
apollo_version = 2

[[federation.entities]]
name = "User"
key_fields = ["id"]
```

Per-entity circuit breaker for Apollo Federation subgraph calls. Automatically opens after consecutive errors, returning 503 Service Unavailable with a Retry-After header instead of cascading to a gateway timeout.

```toml
[federation.circuit_breaker]
enabled = true
failure_threshold = 5      # open after N consecutive errors
recovery_timeout_secs = 30 # half-open probe after this many seconds
success_threshold = 2      # successes in HalfOpen needed to close

# Per-entity override
[[federation.circuit_breaker.per_database]]
database = "Order"
failure_threshold = 3
recovery_timeout_secs = 60
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `true` | Enable the circuit breaker |
| `failure_threshold` | integer | `5` | Consecutive errors before opening |
| `recovery_timeout_secs` | integer | `30` | Seconds before attempting a recovery probe |
| `success_threshold` | integer | `2` | Successes in HalfOpen state before closing |

[[federation.circuit_breaker.per_database]] accepts an array of per-entity overrides. Each entry requires a database key matching the federation entity name. All three threshold fields are optional (inherit from the global config if omitted).

Prometheus gauge: fraiseql_federation_circuit_breaker_state{entity="..."} (0 = Closed, 1 = Open, 2 = HalfOpen).
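The Closed / Open / HalfOpen cycle can be sketched as a small state machine driven by the three thresholds (illustrative only, not FraiseQL's implementation):

```python
class CircuitBreaker:
    """Closed -> Open after failure_threshold consecutive errors;
    Open -> HalfOpen after recovery_timeout_secs; HalfOpen -> Closed
    after success_threshold successes (any HalfOpen failure reopens)."""

    def __init__(self, failure_threshold=5, recovery_timeout_secs=30, success_threshold=2):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout_secs
        self.success_threshold = success_threshold
        self.state = "Closed"
        self.failures = 0
        self.successes = 0
        self.opened_at = 0.0

    def allow(self, now: float) -> bool:
        if self.state == "Open" and now - self.opened_at >= self.recovery_timeout:
            self.state, self.successes = "HalfOpen", 0  # let a probe through
        return self.state != "Open"                     # Open -> 503 + Retry-After

    def record(self, ok: bool, now: float) -> None:
        if ok:
            self.failures = 0
            if self.state == "HalfOpen":
                self.successes += 1
                if self.successes >= self.success_threshold:
                    self.state = "Closed"
        else:
            self.failures += 1
            if self.state == "HalfOpen" or self.failures >= self.failure_threshold:
                self.state, self.opened_at = "Open", now

cb = CircuitBreaker(failure_threshold=3, recovery_timeout_secs=60)
for t in range(3):
    cb.record(ok=False, now=float(t))
print(cb.state)            # Open
print(cb.allow(now=10.0))  # False: still within the recovery timeout
print(cb.allow(now=65.0))  # True: HalfOpen probe allowed
cb.record(ok=True, now=66.0)
cb.record(ok=True, now=67.0)
print(cb.state)            # Closed
```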


Event observer system. Supports in-memory, Redis, NATS, PostgreSQL, and MySQL backends.

```toml
[observers]
enabled = true
backend = "nats" # "in-memory" | "redis" | "nats" | "postgresql" | "mysql"
nats_url = "${FRAISEQL_NATS_URL}" # required when backend = "nats"
# redis_url = "${REDIS_URL}"      # required when backend = "redis"

[[observers.handlers]]
event = "user.created"
target = "v_notify_new_user"
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable the observer system |
| `backend` | string | `"in-memory"` | Event backend: `in-memory`, `redis`, `nats`, `postgresql`, `mysql` |
| `nats_url` | string | — | NATS server URL (required when `backend = "nats"`). Overridable via `FRAISEQL_NATS_URL` env var |
| `redis_url` | string | — | Redis connection URL (required when `backend = "redis"`) |

Complete example:

```toml
[project]
name = "blog-api"
version = "1.0.0"
description = "Blog GraphQL API"
database_target = "postgresql"

[database]
url = "${DATABASE_URL}"
pool_min = 5
pool_max = 50
connect_timeout_ms = 5000
ssl_mode = "prefer"

[server]
host = "0.0.0.0"
port = 8080
request_timeout_ms = 30000

[server.cors]
origins = ["https://blog.example.com", "http://localhost:3000"]
credentials = true

[fraiseql]
schema_file = "schema.json"
output_file = "schema.compiled.json"

[security]
default_policy = "authenticated"

[[security.policies]]
name = "admin_only"
type = "rbac"
roles = ["admin"]
strategy = "any"

[security.enterprise]
audit_logging_enabled = true
audit_log_level = "info"

[security.error_sanitization]
enabled = true

[security.rate_limiting]
enabled = true
requests_per_second = 100
burst_size = 200
auth_start_max_requests = 5
failed_login_max_attempts = 5
failed_login_lockout_secs = 900

[security.api_keys]
enabled = true
hash_algorithm = "argon2"
storage = "postgres"

[security.token_revocation]
enabled = true
backend = "redis"
require_jti = true
fail_open = false

[security.trusted_documents]
enabled = true
mode = "strict"
manifest_path = "./trusted-documents.json"

[validation]
max_query_depth = 10
max_query_complexity = 100

[subscriptions]
max_subscriptions_per_connection = 10
```

Run in production:

```sh
DATABASE_URL="postgresql://..." fraiseql run
```

See Deployment for production configuration.