🍓 Python GraphQL Framework
Simple patterns. High performance. PostgreSQL at the center.
No N+1 queries. No ORM. Rust-fast responses. Patterns you—and your AI assistant—actually understand.
```graphql
{
  users {
    id
    name
    posts {
      title
      comments {
        text
      }
    }
  }
}
```
```sql
SELECT data FROM tv_user
-- Pre-computed JSONB with nested posts
-- 0.5-1ms response time
```
Rust performs field selection, snake_case → camelCase transformation, and streams JSON to clients. No ORM, no Python in the read path, no object hydration.
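As a rough illustration of what the Rust layer does conceptually, here is a pure-Python sketch of field selection plus snake_case to camelCase renaming (the function names are hypothetical, not FraiseQL API):

```python
# Conceptual sketch (not the actual Rust pipeline): keep only the
# requested fields and rename keys from snake_case to camelCase.
def to_camel(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

def project(row: dict, fields: set[str]) -> dict:
    """Filter a row down to the selected fields, renaming keys."""
    return {to_camel(k): v for k, v in row.items() if k in fields}

row = {"id": 1, "display_name": "Ada", "created_at": "2024-01-01"}
print(project(row, {"id", "display_name"}))
# {'id': 1, 'displayName': 'Ada'}
```

In FraiseQL this transformation happens in Rust on the already-composed JSONB, so no Python objects are ever hydrated on the read path.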
PostgreSQL composes nested data once in JSONB. Views contain all relations. One query regardless of GraphQL complexity. DataLoader is unnecessary.
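A sketch of why DataLoader becomes unnecessary: each `tv_user` row (hypothetical shape shown here with plain Python dicts) already carries its nested relations in the `data` JSONB, so any nesting depth is answered by one lookup:

```python
import json

# Hypothetical shape of one tv_user row: `data` already contains
# every nested relation, composed once inside PostgreSQL.
tv_user_row = {
    "data": json.dumps({
        "id": 1,
        "name": "Ada",
        "posts": [
            {"title": "Hello", "comments": [{"text": "Nice!"}]},
        ],
    })
}

# Resolving the nested GraphQL query is one read, not N+1 queries.
user = json.loads(tv_user_row["data"])
print(user["posts"][0]["comments"][0]["text"])  # Nice!
```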
Mutations are self-contained PostgreSQL functions — LLMs see validation, business logic, and sync in one place. No resolver chains to trace. tb_entity_change_log captures every side effect explicitly.
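The pattern can be sketched in pure Python (all names here are hypothetical stand-ins for a PostgreSQL function and table): validation, business logic, and the change-log write sit in one function body, with no resolver chain to trace:

```python
# In-memory stand-ins for a base table and tb_entity_change_log.
users: dict[int, dict] = {}
tb_entity_change_log: list[dict] = []

def create_user(user_id: int, name: str) -> dict:
    if not name:                                    # validation
        raise ValueError("name required")
    users[user_id] = {"id": user_id, "name": name}  # business logic
    tb_entity_change_log.append(                    # explicit side effect
        {"entity": "user", "entity_id": user_id, "op": "create"}
    )
    return users[user_id]

record = create_user(1, "Ada")
print(tb_entity_change_log)  # one explicit audit entry
```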
JSONB views define exactly what's exposed. Impossible to accidentally leak sensitive fields. No ORM over-fetching. What's not in the view cannot be queried.
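The whitelist property can be shown with a toy example (field names hypothetical): the view copies only the exposed columns into `data`, so anything else, such as a password hash, simply does not exist in the API surface:

```python
# A base-table row contains sensitive columns...
base_row = {"id": 1, "email": "ada@example.com", "password_hash": "..."}

# ...but the (hypothetical) view definition copies only a whitelist.
EXPOSED = ("id", "email")
view_data = {k: base_row[k] for k in EXPOSED}

print("password_hash" in view_data)  # False: nothing to leak
```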
Table views eliminate Redis. Cascade eliminates refetch queries. Built-in audit trails (tb_entity_change_log). Fewer services, simpler ops.
Native pgvector integration with 9+ distance operators. LangChain and LlamaIndex support built-in. Build semantic search and RAG systems with GraphQL.
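For intuition, here is the cosine distance that pgvector's `<=>` operator computes, sketched in pure Python; in FraiseQL the ranking runs inside PostgreSQL, and the document vectors below are made up for illustration:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity, as pgvector's <=> operator defines it."""
    dot = sum(x * y for x, y in zip(a, b))
    return 1 - dot / (math.hypot(*a) * math.hypot(*b))

docs = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "car": [0.0, 1.0]}
query = [1.0, 0.0]
ranked = sorted(docs, key=lambda d: cosine_distance(query, docs[d]))
print(ranked)  # nearest first: ['cat', 'dog', 'car']
```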
> "That's a clever concept — let the database do the hard stuff and the API layer can be very lightweight and carefree. And the SQL is straightforward. Indeed a clever and robust concept!"
>
> — Principal Business Engineer & Co-Founder at 45level

> "I've reviewed your Rust and Python code, and it's clear you're operating at a very high level. Great craftsmanship and attention to detail."
>
> — Fellow & Senior Analyst at BARC
Projection tables pre-compute and store denormalized JSONB data. This costs 2-4× storage for typical schemas. Projection tables can be fully rebuilt from base tables—they don't need separate backups.
In exchange, reads become a single SELECT data FROM tv_{entity} query with no JOINs on the hot path. Compute savings come from one simple query per read, no ORM, no Python.
Storage is cheap; CPU cycles on every read are not.
✓ Best for: Read-heavy APIs, real-time dashboards, multi-tenant SaaS
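The rebuild property above can be sketched in a few lines of Python (the in-memory tables and field names are hypothetical): projection rows are a pure function of the base tables, which is why they need no separate backup:

```python
# In-memory stand-ins for base tables.
tb_user = [{"id": 1, "name": "Ada"}]
tb_post = [{"id": 10, "user_id": 1, "title": "Hello"}]

def rebuild_tv_user() -> list[dict]:
    """Recompute every projection row from the base tables."""
    return [
        {
            "id": u["id"],
            "data": {
                **u,
                "posts": [p for p in tb_post if p["user_id"] == u["id"]],
            },
        }
        for u in tb_user
    ]

tv_user = rebuild_tv_user()
print(tv_user[0]["data"]["posts"][0]["title"])  # Hello
```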
```bash
pip install fraiseql
```

```python
import fraiseql

@fraiseql.type
class User:
    id: int
    name: str

@fraiseql.query
async def users(info) -> list[User]:
    return await info.context.repo.fetch()
```

```python
from fraiseql.fastapi import FraiseQLConfig, create_fraiseql_app

config = FraiseQLConfig(database_url="postgresql://localhost/db")
app = create_fraiseql_app(types=[User, Post], config=config)
```
- **Naive ORM:** N+1 queries, full object hydration, no caching. The "it works" first pass.
- **Basic Python:** Single JOINed query, `json.dumps` serialization. Competent code, no optimization.
- **Optimized Stack:** DataLoaders, Redis, Pydantic DTOs, orjson. Production-tuned by senior devs.
- **FraiseQL:** Table views, Rust pipeline, single query per read. Database-first architecture.
| Scenario | Response Time | QPS/vCPU | vs FraiseQL |
|---|---|---|---|
| Naive ORM (N+1 queries, no optimization) | ~300ms | ~3 | 100-300x slower |
| Basic Python (JOINs, standard serialization) | ~30ms | ~30 | 20-60x slower |
| Optimized Stack (DataLoaders, Redis, DTOs) | ~3ms (cached) | ~250 | 2-3x slower |
| FraiseQL (Table views, Rust pipeline) | ~0.5-1ms | ~500-600 | baseline |
Performance varies by baseline. Against naive ORM patterns (N+1 queries), expect 100-300x improvement. Against basic Python with JOINs, expect 20-60x improvement. Against fully optimized stacks with Redis and DataLoaders, expect 2-3x improvement—but with simpler architecture.
FraiseQL's advantage is primarily in reads. Write performance (~3.5-5.5ms) is similar to optimized stacks due to table view sync overhead. For write-heavy workloads (>50% writes), evaluate carefully.
Read performance based on nested query (1 user + 10 posts + 50 comments). Benchmarks run on
a 10-year-old dual-core laptop—expect higher throughput on modern hardware.
Key insight: The 2-3x advantage vs optimized stacks comes with simpler ops—no Redis, no cache invalidation logic, no DataLoader boilerplate.
Reproduce these benchmarks →
Replace caching infrastructure with pre-computed table views. Fewer moving parts, lower costs.
| Traditional Stack | FraiseQL Stack | Annual Savings |
|---|---|---|
| PostgreSQL: $50/mo | PostgreSQL: $50-80/mo (more storage) | -$360/yr |
| Redis Cloud: $50-500/mo | ✅ Not needed | $600-6,000/yr |
| Extra compute (4 instances) | ✅ 1-2 instances | $1,200-2,500/yr |
| Total: $350-1,200/mo | Total: $120-250/mo | $2,800-11,400/yr |
Savings depend on your current optimization level. If you're already running Redis and multiple API instances, expect significant savings. Simpler stacks see smaller absolute savings but benefit from reduced complexity.
We believe in honest engineering. FraiseQL isn't for everyone: it trades extra storage (2-4× for projection tables) and write-path sync overhead for fast, simple reads, and it is built around PostgreSQL specifically. If those trade-offs are deal-breakers, check out Hasura or PostGraphile.