Security
Security — File access control and JWT scopes
FraiseQL handles file uploads through a dedicated multipart HTTP endpoint managed by the Rust runtime. Uploaded files are stored to a configured backend and referenced in your GraphQL schema as ordinary `str` URL fields — no special scalar type is needed in your Python schema definition.
File uploads in FraiseQL work as follows:

1. The client sends a `multipart/form-data` POST to a dedicated upload endpoint (not to `/graphql`).
2. The runtime stores the file and responds with its `url` and metadata.
3. Optionally, an `on_upload` callback invokes a `fn_` function to record the file in the database.
4. The Python SDK has no `Upload` type. File URL fields in your schema are plain `str`.
The following backends are available:
| Backend | Provider | Feature flag |
|---|---|---|
| `local` | Local filesystem | Always enabled |
| `s3` | Amazon S3 | `aws-s3` feature |
| `gcs` | Google Cloud Storage | `gcs` feature |
| `azure` | Azure Blob Storage | `azure-blob` feature |
| `s3` (compatible) | Cloudflare R2, MinIO | `aws-s3` feature |
| `s3` (compatible) | Scaleway Object Storage | `aws-s3` feature |
| `s3` (compatible) | OVHcloud Object Storage | `aws-s3` feature |
| `s3` (compatible) | Clever Cloud Cellar | `aws-s3` feature |
| `s3` (compatible) | Exoscale SOS | `aws-s3` feature |
| `s3` (compatible) | Infomaniak Swiss Backup | `aws-s3` feature |
S3-compatible providers (Cloudflare R2, Scaleway, OVH, Clever Cloud, Exoscale, Infomaniak) use the `s3` backend with a custom `endpoint_env`. See the S3-compatible configuration section below.
File storage is configured under `[storage.*]` (named storage backends) and `[files.*]` (named upload endpoints). Each upload endpoint references a storage backend by name.
```toml
# Define a named storage backend
[storage.default]
backend = "local"
base_path = "./uploads"
serve_path = "/files"
```
```toml
# Define a named upload endpoint that uses it
[files.avatars]
storage = "default"
max_size = "5MB"
allowed_types = ["image/jpeg", "image/png", "image/webp"]
validate_magic_bytes = true
public = true
```

Credentials are referenced by environment variable name, not interpolated directly into the TOML file.
```toml
# Define an S3 storage backend
[storage.primary]
backend = "s3"
region = "us-east-1"
bucket_env = "S3_BUCKET"
access_key_env = "AWS_ACCESS_KEY_ID"
secret_key_env = "AWS_SECRET_ACCESS_KEY"
public_url = "https://my-bucket.s3.amazonaws.com"
```
```toml
# Define upload endpoints
[files.avatars]
storage = "primary"
max_size = "5MB"
allowed_types = ["image/jpeg", "image/png", "image/webp"]
validate_magic_bytes = true
public = true
```
```toml
[files.documents]
storage = "primary"
max_size = "50MB"
allowed_types = ["application/pdf", "text/csv"]
validate_magic_bytes = true
public = false
url_expiry = "1h"
```

For Google Cloud Storage:

```toml
[storage.primary]
backend = "gcs"
bucket_env = "GCS_BUCKET"
credentials_file_env = "GOOGLE_APPLICATION_CREDENTIALS"
public_url = "https://storage.googleapis.com/my-bucket"
```

For Azure Blob Storage:

```toml
[storage.primary]
backend = "azure"
container_env = "AZURE_CONTAINER"
connection_string_env = "AZURE_STORAGE_CONNECTION_STRING"
public_url = "https://myaccount.blob.core.windows.net/my-container"
```

For S3-compatible services (MinIO, Cloudflare R2, Scaleway, OVH, Clever Cloud, Exoscale, Infomaniak), use the `s3` backend with a custom endpoint:
MinIO:

```toml
[storage.minio]
backend = "s3"
bucket_env = "MINIO_BUCKET"
access_key_env = "MINIO_ACCESS_KEY"
secret_key_env = "MINIO_SECRET_KEY"
endpoint_env = "MINIO_ENDPOINT"
```

```bash
export MINIO_BUCKET=uploads
export MINIO_ACCESS_KEY=minioadmin
export MINIO_SECRET_KEY=minioadmin
export MINIO_ENDPOINT=http://minio:9000
```

Cloudflare R2:

```toml
[storage.r2]
backend = "s3"
bucket_env = "R2_BUCKET"
access_key_env = "R2_ACCESS_KEY_ID"
secret_key_env = "R2_SECRET_ACCESS_KEY"
endpoint_env = "R2_ENDPOINT"
public_url = "https://pub-xxxx.r2.dev"
```

```bash
export R2_BUCKET=uploads
export R2_ACCESS_KEY_ID=your-access-key
export R2_SECRET_ACCESS_KEY=your-secret-key
export R2_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
```

Scaleway:

```toml
[storage.scaleway]
backend = "s3"
region = "fr-par"
bucket_env = "SCW_BUCKET"
access_key_env = "SCW_ACCESS_KEY"
secret_key_env = "SCW_SECRET_KEY"
endpoint_env = "SCW_ENDPOINT"
```

```bash
export SCW_BUCKET=uploads
export SCW_ACCESS_KEY=your-access-key
export SCW_SECRET_KEY=your-secret-key
export SCW_ENDPOINT=https://s3.fr-par.scw.cloud
```

OVHcloud:

```toml
[storage.ovh]
backend = "s3"
region = "gra"
bucket_env = "OVH_BUCKET"
access_key_env = "OVH_ACCESS_KEY"
secret_key_env = "OVH_SECRET_KEY"
endpoint_env = "OVH_ENDPOINT"
```

```bash
export OVH_BUCKET=uploads
export OVH_ACCESS_KEY=your-access-key
export OVH_SECRET_KEY=your-secret-key
export OVH_ENDPOINT=https://s3.gra.io.cloud.ovh.net
```

The following fields are valid on a `[files.*]` section:
| Key | Type | Default | Description |
|---|---|---|---|
| `storage` | string | `"default"` | Name of the `[storage.*]` backend to use |
| `path` | string | `/files/{name}` | Upload endpoint path override |
| `max_size` | string | `"10MB"` | Maximum file size (`"5MB"`, `"100KB"`, `"1GB"`) |
| `allowed_types` | array | see below | Allowed MIME types |
| `validate_magic_bytes` | bool | `true` | Verify file content matches declared MIME type |
| `public` | bool | `true` | Whether files are publicly accessible |
| `cache` | string | — | Cache duration for public files (e.g., `"7d"`) |
| `url_expiry` | string | — | Expiry for private file signed URLs (e.g., `"1h"`) |
| `scan_malware` | bool | `false` | Enable malware scanning (requires external scanner) |
| `processing` | section | — | Image processing configuration |
| `on_upload` | section | — | Database callback after upload |
Default allowed MIME types (when `allowed_types` is not set): `image/jpeg`, `image/png`, `image/webp`, `image/gif`, `application/pdf`.
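Values such as `max_size = "10MB"` use human-readable size strings. As an illustration of how such strings might map to byte counts (a sketch assuming binary, 1024-based units; this is not FraiseQL's actual parser):

```python
import re

# Illustrative subset of units, assuming binary (1024-based) multipliers.
_UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}

def parse_size(value: str) -> int:
    """Convert a size string like '5MB' or '100KB' into a byte count."""
    match = re.fullmatch(r"(\d+)\s*([KMG]?B)", value.strip().upper())
    if match is None:
        raise ValueError(f"invalid size string: {value!r}")
    number, unit = match.groups()
    return int(number) * _UNITS[unit]

print(parse_size("5MB"))    # 5242880
print(parse_size("100KB"))  # 102400
```

Whether the runtime interprets `MB` as 1024² or 1000² bytes is not specified here; treat the multipliers above as an assumption.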
When uploading images, the Rust runtime can strip EXIF metadata, convert the output format, and generate named size variants.
```toml
[files.avatars]
storage = "primary"
max_size = "10MB"
allowed_types = ["image/jpeg", "image/png", "image/webp"]
validate_magic_bytes = true
```
```toml
[files.avatars.processing]
strip_exif = true
output_format = "webp"
quality = 85
```
```toml
[[files.avatars.processing.variants]]
name = "small"
width = 150
height = 150
mode = "fit"

[[files.avatars.processing.variants]]
name = "medium"
width = 400
height = 400
mode = "fit"
```

Valid `mode` values: `fit`, `fill`, `crop`.
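To illustrate what `fit` conventionally means (scale the image to fit inside the target box while preserving aspect ratio, as opposed to `fill`, which covers the box), here is a sketch of the dimension computation. The exact rounding behavior of the runtime is an assumption:

```python
def fit_dimensions(src_w: int, src_h: int, box_w: int, box_h: int) -> tuple[int, int]:
    """Scale (src_w, src_h) to fit inside (box_w, box_h), preserving aspect ratio.

    Illustrates the conventional semantics of mode = "fit"; the runtime's
    actual rounding may differ.
    """
    scale = min(box_w / src_w, box_h / src_h)
    return round(src_w * scale), round(src_h * scale)

print(fit_dimensions(1200, 800, 150, 150))  # (150, 100)
```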
The upload response includes a `variants` map with URLs for each generated size.
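A client can select a named variant from that map, falling back to the original `url` when the variant was not generated. A sketch (the URLs below are hypothetical; the response shape follows the example response shown later in this section):

```python
def pick_variant(upload_response: dict, name: str) -> str:
    """Return the URL of a named size variant, or the original URL as fallback."""
    return upload_response.get("variants", {}).get(name, upload_response["url"])

# Hypothetical upload response for illustration.
response = {
    "url": "https://cdn.example.com/avatars/abc.webp",
    "variants": {"small": "https://cdn.example.com/avatars/abc_small.webp"},
}
print(pick_variant(response, "small"))  # the small variant URL
print(pick_variant(response, "large"))  # falls back to the original URL
```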
Store file metadata in your database using the trinity pattern. The `url` column holds the string URL returned by the upload endpoint.
```sql
CREATE TABLE tb_file (
    pk_file BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    id UUID DEFAULT gen_random_uuid() UNIQUE NOT NULL,
    identifier TEXT UNIQUE NOT NULL,  -- e.g. storage key: "avatars/uuid.webp"
    original_filename TEXT NOT NULL,
    mime_type TEXT NOT NULL,
    size_bytes BIGINT NOT NULL,
    url TEXT NOT NULL,
    storage_backend TEXT NOT NULL DEFAULT 'local',
    fk_user BIGINT NOT NULL REFERENCES tb_user(pk_user),
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE UNIQUE INDEX idx_tb_file_id ON tb_file(id);
CREATE INDEX idx_tb_file_fk_user ON tb_file(fk_user);
CREATE INDEX idx_tb_file_mime_type ON tb_file(mime_type);

CREATE VIEW v_file AS
SELECT
    f.id,
    jsonb_build_object(
        'id', f.id::text,
        'identifier', f.identifier,
        'original_filename', f.original_filename,
        'mime_type', f.mime_type,
        'size_bytes', f.size_bytes,
        'url', f.url,
        'storage_backend', f.storage_backend,
        'created_at', f.created_at
    ) AS data
FROM tb_file f;

CREATE OR REPLACE FUNCTION fn_create_file(
    p_identifier TEXT,
    p_original_filename TEXT,
    p_mime_type TEXT,
    p_size_bytes BIGINT,
    p_url TEXT,
    p_storage_backend TEXT,
    p_user_id UUID
)
RETURNS mutation_response
LANGUAGE plpgsql
AS $$
DECLARE
    v_pk_user BIGINT;
    v_file_id UUID := gen_random_uuid();
    v_result mutation_response;
BEGIN
    SELECT pk_user INTO v_pk_user FROM tb_user WHERE id = p_user_id;
    IF v_pk_user IS NULL THEN
        v_result.status := 'failed:not_found';
        v_result.message := 'User not found';
        RETURN v_result;
    END IF;

    INSERT INTO tb_file (
        id, identifier, original_filename, mime_type,
        size_bytes, url, storage_backend, fk_user
    ) VALUES (
        v_file_id, p_identifier, p_original_filename, p_mime_type,
        p_size_bytes, p_url, p_storage_backend, v_pk_user
    );

    v_result.status := 'success';
    v_result.entity_id := v_file_id;
    v_result.entity_type := 'File';
    v_result.entity := (SELECT data FROM v_file WHERE id = v_file_id);
    RETURN v_result;
END;
$$;
```

File URL fields are plain `str` in Python. There is no `Upload` type in the FraiseQL SDK.
```python
import fraiseql
from fraiseql.scalars import ID, DateTime

@fraiseql.type
class File:
    """A stored file record."""
    id: ID
    identifier: str
    original_filename: str
    mime_type: str
    size_bytes: int
    url: str  # The URL returned by the upload endpoint
    storage_backend: str
    created_at: DateTime

@fraiseql.error
class FileError:
    message: str
    code: str

@fraiseql.input
class CreateFileInput:
    identifier: str
    original_filename: str
    mime_type: str
    size_bytes: int
    url: str
    storage_backend: str

@fraiseql.query
def files(limit: int = 20, offset: int = 0) -> list[File]:
    return fraiseql.config(sql_source="v_file")

@fraiseql.query
def file(id: ID) -> File | None:
    return fraiseql.config(sql_source="v_file")

@fraiseql.mutation(
    sql_source="fn_create_file",
    operation="CREATE",
    inject={"user_id": "jwt:sub"},
)
def create_file(input: CreateFileInput) -> File: ...
```

The client uploads the file first, then calls the GraphQL mutation to record it.
Send a multipart/form-data POST to the upload endpoint:
JavaScript:

```javascript
async function uploadFile(file) {
  const formData = new FormData();
  formData.append('file', file);

  const response = await fetch('/files/avatars', {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${token}` },
    body: formData,
  });

  // Returns: { id, name, url, content_type, size, variants, created_at }
  return response.json();
}
```

Python:

```python
import httpx

def upload_file(file_path: str, token: str) -> dict:
    with open(file_path, 'rb') as f:
        response = httpx.post(
            'http://localhost:8080/files/avatars',
            files={'file': (file_path, f, 'image/jpeg')},
            headers={'Authorization': f'Bearer {token}'},
        )
    # Returns: { "id": "...", "url": "https://...", "content_type": "image/jpeg", ... }
    return response.json()
```

cURL:

```bash
curl -X POST http://localhost:8080/files/avatars \
  -H "Authorization: Bearer $TOKEN" \
  -F "file=@photo.jpg"
```

The response:

```json
{
  "id": "018e1234-...",
  "url": "https://my-bucket.s3.amazonaws.com/avatars/018e1234-....webp",
  "content_type": "image/jpeg",
  "size": 102400,
  "variants": { "small": "https://.../avatars/018e1234-..._small.webp" },
  "created_at": "2026-03-02T10:00:00Z"
}
```

After upload succeeds, call the mutation with the returned URL:
```graphql
mutation RecordFile($input: CreateFileInput!) {
  createFile(input: $input) {
    id
    url
    originalFilename
    createdAt
  }
}
```

```json
{
  "input": {
    "identifier": "avatars/018e1234-....webp",
    "originalFilename": "photo.jpg",
    "mimeType": "image/jpeg",
    "sizeBytes": 102400,
    "url": "https://my-bucket.s3.amazonaws.com/avatars/018e1234-....webp",
    "storageBackend": "s3"
  }
}
```

Instead of a two-step flow, you can configure an automatic database callback that runs after each successful upload. The runtime calls the specified SQL function with the upload result.
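Conceptually, the callback passes fields of the upload result to the SQL function's parameters according to a mapping table. That remapping can be sketched in Python (illustrative only; the actual call happens inside the Rust runtime, and the field names below follow the upload-response example above):

```python
def apply_mapping(upload_result: dict, mapping: dict) -> dict:
    """Build the SQL function's arguments from an upload result.

    Keys of `mapping` are function parameter names; values are fields of the
    upload result, mirroring the TOML `mapping` table.
    """
    return {param: upload_result[field] for param, field in mapping.items()}

mapping = {"identifier": "key", "mime_type": "content_type", "size_bytes": "size"}
result = {"key": "avatars/abc.webp", "content_type": "image/jpeg", "size": 102400}
print(apply_mapping(result, mapping))
# {'identifier': 'avatars/abc.webp', 'mime_type': 'image/jpeg', 'size_bytes': 102400}
```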
```toml
[files.avatars.on_upload]
function = "fn_create_file"
mapping = { identifier = "key", original_filename = "original_filename", mime_type = "content_type", size_bytes = "size", url = "url" }
```

For private files (`public = false`), the runtime generates signed URLs with a configurable expiry:
```toml
[files.documents]
storage = "primary"
max_size = "50MB"
allowed_types = ["application/pdf"]
public = false
url_expiry = "1h"
```

Request a signed URL via the upload endpoint:
```
GET /files/documents/{key}/signed-url
Authorization: Bearer <token>
```

The response:
```json
{
  "url": "https://bucket.s3.amazonaws.com/documents/...?X-Amz-Signature=...",
  "expires_at": "2026-03-02T11:00:00Z"
}
```

By default (`validate_magic_bytes = true`), FraiseQL reads the actual file content and verifies that the magic bytes match the declared MIME type. A file uploaded with `Content-Type: image/jpeg` that contains a ZIP header will be rejected.
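The idea behind the magic-byte check can be sketched as follows (a simplified illustration with a few well-known signatures; the runtime performs this check in Rust with a fuller signature table):

```python
# A few well-known file signatures (magic bytes). Illustrative subset only.
MAGIC = {
    "image/jpeg": b"\xff\xd8\xff",
    "image/png": b"\x89PNG\r\n\x1a\n",
    "application/zip": b"PK\x03\x04",
}

def matches_declared_type(content: bytes, declared: str) -> bool:
    """Check whether the file content starts with the declared type's signature."""
    signature = MAGIC.get(declared)
    return signature is not None and content.startswith(signature)

# A ZIP payload declared as JPEG is rejected, as described above.
print(matches_declared_type(b"PK\x03\x04archive-bytes", "image/jpeg"))  # False
print(matches_declared_type(b"\xff\xd8\xff\xe0jpeg-bytes", "image/jpeg"))  # True
```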
The `scan_malware = true` flag enables integration with an external malware scanner via the `MalwareScanner` trait. No built-in scanner implementation is bundled — you must provide a custom implementation. The TOML flag activates the scanning hook; the scanner itself is registered programmatically.
```toml
[files.documents]
scan_malware = true
```

**Use a cloud backend in production.** The `local` backend stores files on disk alongside your application. It does not support real signed URLs and is not suitable for multi-instance deployments. Use S3, GCS, Azure Blob, or an S3-compatible European provider (Scaleway, OVH, Clever Cloud, Exoscale, Infomaniak) depending on your region and compliance requirements.
**Keep `validate_magic_bytes = true`.** This is the default and should not be disabled. Accepting files based only on the declared MIME type allows malicious uploads.

**Set `url_expiry` for sensitive documents.** If files should not be publicly accessible forever, set `public = false` and `url_expiry` to a duration appropriate for your use case.

**Store the storage key as `identifier`.** Use the storage key (e.g., `"avatars/018e1234-....webp"`) as the `identifier` column in `tb_file`. This makes the file uniquely addressable without relying solely on the UUID.
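When working with `url_expiry`, a client should refresh signed URLs before they lapse. A sketch of such a check against the `expires_at` field of the signed-URL response (the margin value is an arbitrary illustration):

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expires_at: str, margin_seconds: int = 60) -> bool:
    """True if a signed URL expires within `margin_seconds` (or already has).

    `expires_at` is the RFC 3339 timestamp from the signed-URL response.
    """
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    return expiry - datetime.now(timezone.utc) <= timedelta(seconds=margin_seconds)

print(needs_refresh("2000-01-01T00:00:00Z"))  # True (long expired)
```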
Common reasons an upload is rejected:

- The MIME type is not in `allowed_types`. Either add the type to the list or check that the client is sending the correct `Content-Type` header.
- The file exceeds `max_size`. Increase the limit or compress the file before uploading.
- The file extension or declared content type does not match the actual file content. Ensure the file is not corrupted and that the correct MIME type is declared.
If uploads to S3 fail, verify that:

- `access_key_env` and `secret_key_env` are set
- the credentials grant `s3:PutObject`, `s3:GetObject`, and `s3:DeleteObject` on the bucket
- the `bucket_env` variable is set

File upload via REST is possible using `multipart/form-data` if the `rest-transport` feature includes multipart support. Otherwise, file operations remain GraphQL-only. Check release notes for current multipart status.