🍓 Explicit GraphQL Framework
The Explicit GraphQL Framework for the LLM Era
Simple patterns. High performance. Database at the center.
No N+1 queries. No ORM. Fast responses. Patterns you—and your AI assistant—actually understand.
```graphql
query GetUser($id: ID!) {
  user(id: $id) {
    id
    identifier
    name
    posts {
      id
      identifier
      title
    }
  }
}
```

```sql
SELECT data FROM v_user
WHERE id = $1
-- Pre-computed nested data
-- Sub-millisecond response
```

Why FraiseQL?
Zero N+1 by Design
Your database composes nested data once. Views contain all relations. One query regardless of GraphQL complexity. No DataLoader needed.
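To make the idea concrete, here is a minimal runnable sketch of a pre-composed view. It uses SQLite and its built-in JSON functions purely for illustration; the table and view names (`users`, `posts`, `v_user`) are hypothetical and this is not FraiseQL's generated DDL. The view folds each user's posts into one JSON document, so a nested GraphQL query still costs exactly one database query.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);

-- The view composes each user's posts into a single JSON document,
-- so reads need no JOIN on the hot path and no DataLoader.
-- (Requires SQLite built with the JSON functions, the default in modern builds.)
CREATE VIEW v_user AS
SELECT u.id,
       json_object(
           'id', u.id,
           'name', u.name,
           'posts', (SELECT json_group_array(json_object('id', p.id, 'title', p.title))
                     FROM posts p WHERE p.user_id = u.id)
       ) AS data
FROM users u;

INSERT INTO users VALUES (1, 'Ada');
INSERT INTO posts VALUES (10, 1, 'Hello'), (11, 1, 'Views');
""")

# One query per request, regardless of how deeply the GraphQL query nests.
row = conn.execute("SELECT data FROM v_user WHERE id = ?", (1,)).fetchone()
user = json.loads(row[0])
print(user["name"])  # Ada
print([p["title"] for p in user["posts"]])
```

The resolver's job shrinks to parameter binding and JSON pass-through; the database did the composition work once, at write time.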
One Query Per Request
Simple patterns that LLMs understand. No resolver chains to trace. Validation, business logic, and data access live in SQL functions — one place, in the database.
Security by Architecture
Views define exactly what's exposed. Impossible to accidentally leak sensitive fields. What's not in the view cannot be queried.
Simplified Infrastructure
Table views eliminate Redis. Cascade eliminates refetch queries. Built-in audit trails. Fewer services, simpler ops.
AI-Native Development
Mutations map to SQL functions. Validation, business logic, and side effects live in the database — not scattered across resolver chains. LLMs generate less, understand more.
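A runnable sketch of database-side mutations, under stated assumptions: FraiseQL targets SQL functions (e.g. in PostgreSQL), but SQLite, used here only so the example runs anywhere, has no stored procedures, so a CHECK constraint and a trigger stand in for the same principle. The names (`trg_user_audit`, `audit`) are illustrative, not FraiseQL's.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL CHECK (length(name) > 0)  -- validation lives in the schema
);
CREATE TABLE audit (event TEXT, at TEXT DEFAULT CURRENT_TIMESTAMP);

-- Side effects (here, an audit trail) also live in the database,
-- not scattered across resolver code.
CREATE TRIGGER trg_user_audit AFTER INSERT ON users
BEGIN
    INSERT INTO audit (event) VALUES ('user:' || NEW.id || ':created');
END;
""")

# The mutation is a single statement; the database enforces the rules
# and records the side effect in the same transaction.
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
print(conn.execute("SELECT event FROM audit").fetchone())  # ('user:1:created',)

rejected = False
try:
    conn.execute("INSERT INTO users (name) VALUES (?)", ("",))
except sqlite3.IntegrityError:
    rejected = True
print("rejected:", rejected)  # rejected: True
```

Because validation, the write, and the audit entry are one database-side unit, there is a single place for an LLM (or a human) to read and modify.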
Database Agnostic
PostgreSQL, MySQL, SQLite, SQL Server. Write once, deploy anywhere. A single compiled schema runs unchanged across all four databases.
What Experts Say
"That's a clever concept — let the database do the hard stuff and the API layer can be very lightweight and carefree. And the SQL is straightforward. Indeed a clever and robust concept!"
Principal Business Engineer & Co-Founder at 45level
BARC Fellow & Senior Analyst (Database Architecture)
"Looks delicious 🍓😋!"
Android Engineer at Apollo GraphQL
GraphQL Technical Steering Committee
"10x faster than our current solution and faster than the REST frameworks we tested"
Co-Founder & CTO at Wiremind
Data Platform Leadership
The Trade-Off We Made
Storage Over Compute
Pre-computed views cost 2-4× storage for typical schemas. But storage is cheap; CPU cycles on every read are not.
In exchange: single fast query per read, no JOINs on hot path, no ORM, no object hydration.
Views can be fully rebuilt from base tables anytime—they don't need separate backups.
When NOT to Use FraiseQL
- Read-light, write-heavy — view sync adds overhead if you rarely query
- Massive data, few users — if storage cost dominates your bill, reconsider
- Simple CRUD — if your queries are already fast, you don't need this
✓ Best for: Read-heavy APIs, real-time dashboards, multi-tenant SaaS
Quick Start
Install

```shell
# Schema authoring (Python)
pip install fraiseql
# Runtime & CLI (Rust binary)
cargo install fraiseql-cli
```

Define Schema
```python
import fraiseql

@fraiseql.type
class User:
    id: ID
    name: str

@fraiseql.query(sql_source="v_user")
def users(limit: int = 10) -> list[User]:
    pass

fraiseql.export_schema("schema.json")
```

Run
```shell
# Export schema.json from Python
python schema.py
# Compile + serve (hot-reload with --watch)
fraiseql-cli run \
  --database postgresql://localhost/db
```

Honest Performance Comparison
What Are We Comparing Against?
Naive ORM
N+1 queries, full object hydration, no caching. The "it works" first pass.
Basic Python
Single JOINed query, json.dumps serialization. Competent code, no optimization.
Optimized Stack
DataLoaders, Redis, DTOs, fast serialization. Production-tuned by senior devs.
FraiseQL
Pre-computed views, single query per read. Database-first architecture.
| Scenario | Response Time | Queries | vs FraiseQL |
|---|---|---|---|
| Naive ORM (N+1 queries) | ~300ms | ~100 | 100-300x slower |
| Basic Python (JOINs + serialization) | ~30ms | ~1 | 20-60x slower |
| Optimized Stack (DataLoaders, Redis, DTOs) | ~3ms (cached) | ~1 | 2-3x slower |
| FraiseQL (views + single query) | <5ms cold / sub-ms cache | 1 | baseline |
The 2-3x advantage vs optimized stacks comes with simpler ops: no Redis, no cache invalidation logic, no DataLoader boilerplate. A single database query you understand completely.
Independent benchmarks via VelocityBench — publication in progress.