🍓 Python GraphQL Framework

The Explicit GraphQL Framework
for the LLM Era

Simple patterns. High performance. PostgreSQL at the center.

No N+1 queries. No ORM. Rust-fast responses. Patterns you—and your AI assistant—actually understand.

Requires: Python 3.13+ · PostgreSQL 13+
GraphQL Query
{
  users {
    id
    name
    posts {
      title
      comments {
        text
      }
    }
  }
}
One PostgreSQL Query
SELECT data FROM tv_user
-- Pre-computed JSONB with nested posts
-- 0.5-1ms response time

Why FraiseQL?

⚡ Rust-Accelerated Pipeline

Rust performs field selection, snake_case → camelCase transformation, and streams JSON to clients. No ORM, no Python in the read path, no object hydration.
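The key-renaming step can be pictured with a pure-Python sketch (the real work happens in Rust; this is only an illustration of the transformation, not FraiseQL code):

```python
# Recursively rename snake_case JSONB keys to camelCase,
# the same shape transformation the Rust pipeline applies.
def to_camel(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.title() for part in rest)

def camelize(value):
    if isinstance(value, dict):
        return {to_camel(k): camelize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [camelize(v) for v in value]
    return value

row = {"user_name": "Ada", "posts": [{"post_title": "Hi", "comment_count": 2}]}
print(camelize(row))
# {'userName': 'Ada', 'posts': [{'postTitle': 'Hi', 'commentCount': 2}]}
```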


🎯 Zero N+1 by Design

PostgreSQL composes nested data once in JSONB. Views contain all relations. One query regardless of GraphQL complexity. DataLoader is unnecessary.
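The arithmetic behind the N+1 claim, for an illustrative request shape of 1 user, 10 posts, and 5 comments per post (fan-out numbers assumed for the example):

```python
# Round trips for one nested GraphQL request, under assumed fan-outs.
users, posts_per_user, comments_per_post = 1, 10, 5

# Naive ORM: one query for users, one per user for posts,
# one per post for comments.
naive_orm = 1 + users + users * posts_per_user
print(naive_orm)  # 12 round trips

# Pre-composed JSONB view: the nesting is already materialized,
# so the count stays 1 no matter how deep the selection goes.
single_view_query = 1
print(single_view_query)  # 1
```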


🤖 AI-Native Development

Mutations are self-contained PostgreSQL functions — LLMs see validation, business logic, and sync in one place. No resolver chains to trace. tb_entity_change_log captures every side effect explicitly.
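As a sketch of what "self-contained" means here, an illustrative plpgsql function (all names invented except tb_entity_change_log, which the text mentions; the SQL is held in a Python string so the whole unit is visible at once):

```python
# Hypothetical mutation function: validation, business logic, and the
# audit-log write are all readable in one plpgsql body -- no resolver
# chain to trace across files.
CREATE_USER_FN = """
CREATE FUNCTION fn_create_user(input jsonb) RETURNS jsonb AS $$
BEGIN
    -- validation
    IF input->>'email' IS NULL THEN
        RAISE EXCEPTION 'email is required';
    END IF;

    -- business logic
    INSERT INTO tb_user (email, name)
    VALUES (input->>'email', input->>'name');

    -- explicit side-effect capture
    INSERT INTO tb_entity_change_log (entity, payload)
    VALUES ('user', input);

    RETURN jsonb_build_object('status', 'created');
END;
$$ LANGUAGE plpgsql;
"""

print(len(CREATE_USER_FN.splitlines()), "lines, one unit")
```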


🔒 Security by Architecture

JSONB views define exactly what's exposed. Impossible to accidentally leak sensitive fields. No ORM over-fetching. What's not in the view cannot be queried.
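A toy Python illustration of the property (hypothetical field names; not FraiseQL internals):

```python
# The view's JSONB is the entire queryable surface: a field the view
# never materialized has nothing to resolve against.
view_row = {"id": 1, "name": "Ada"}          # what tv_user exposes
requested = ["id", "name", "password_hash"]  # password_hash stayed in the base table
visible = {f: view_row[f] for f in requested if f in view_row}
print(visible)  # {'id': 1, 'name': 'Ada'} -- the sensitive field simply is not there
```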


💰 Simplified Infrastructure

Table views eliminate Redis. Cascade eliminates refetch queries. Built-in audit trails (tb_entity_change_log). Fewer services, simpler ops.


🔍 RAG & Vector Search

Native pgvector integration with 9+ distance operators. LangChain and LlamaIndex support built-in. Build semantic search and RAG systems with GraphQL.
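For intuition about what those operators compute, pgvector's `<=>` operator returns cosine distance (1 minus cosine similarity); a pure-Python equivalent:

```python
import math

def cosine_distance(a, b):
    # The quantity pgvector's <=> operator returns: 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norms

print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 -> orthogonal
print(cosine_distance([1.0, 2.0], [2.0, 4.0]))  # ~0.0 -> same direction
```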


What Experts Say

"That's a clever concept — let the database do the hard stuff and the API layer can be very lightweight and carefree. And the SQL is straightforward. Indeed a clever and robust concept!"
"I've reviewed your Rust and Python code, and it's clear you're operating at a very high level. Great craftsmanship and attention to detail."

Thomas Zeutschler

Principal Business Engineer & Co-Founder at 45level
Fellow & Senior Analyst at BARC

The Trade-Off We Made

Storage Over Compute

Projection tables pre-compute and store denormalized JSONB data. This costs 2-4× storage for typical schemas. Projection tables can be fully rebuilt from base tables—they don't need separate backups.

In exchange, reads become a single SELECT data FROM tv_{entity} query with no JOINs on the hot path. Compute savings come from one simple query per read, no ORM, no Python.

Storage is cheap; CPU cycles on every read are not.
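Back-of-envelope arithmetic for that claim (all dollar figures and sizes are assumptions for illustration, not from this page):

```python
# Storage side: projections at 3x base size (within the 2-4x range above).
base_gb = 10                 # assumed size of normalized base tables
projection_factor = 3
storage_price = 0.10         # assumed $/GB-month cloud block storage

extra_storage = base_gb * projection_factor * storage_price
print(f"extra storage: ${extra_storage:.2f}/month")

# Compute side: ~30ms of ORM/serialization work avoided per read
# (assumed), at 1M reads/day.
reads_per_day = 1_000_000
saved_ms_per_read = 30
vcpu_hours_saved = reads_per_day * saved_ms_per_read / 1000 / 3600
print(f"compute avoided: {vcpu_hours_saved:.1f} vCPU-hours/day")
```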

When NOT to Use FraiseQL

  • Read-light, write-heavy — table view sync adds overhead if you rarely query
  • Massive data, few users — if storage cost dominates your bill, reconsider
  • Simple CRUD — if your queries are already fast, you don't need this

✓ Best for: Read-heavy APIs, real-time dashboards, multi-tenant SaaS

Quick Start

1. Install

pip install fraiseql

2. Define Schema

import fraiseql

@fraiseql.type
class User:
    id: int
    name: str

@fraiseql.query
async def users(info) -> list[User]:
    return await info.context.repo.fetch()

3. Run

from fraiseql.fastapi import FraiseQLConfig, create_fraiseql_app

config = FraiseQLConfig(database_url="postgresql://localhost/db")
app = create_fraiseql_app(types=[User], config=config)

Honest Performance Comparison

What Are We Comparing Against?

Naive ORM

N+1 queries, full object hydration, no caching. The "it works" first pass.

Basic Python

Single JOINed query, json.dumps serialization. Competent code, no optimization.

Optimized Stack

DataLoaders, Redis, Pydantic DTOs, orjson. Production-tuned by senior devs.

FraiseQL

Table views, Rust pipeline, single query per read. Database-first architecture.

Scenario                                        Response Time   QPS/vCPU   vs FraiseQL
Naive ORM (N+1 queries, no optimization)        ~300ms          ~3         100-300x slower
Basic Python (JOINs, standard serialization)    ~30ms           ~30        20-60x slower
Optimized Stack (DataLoaders, Redis, DTOs)      ~3ms (cached)   ~250       2-3x slower
FraiseQL (table views, Rust pipeline)           ~0.5-1ms        ~500-600   baseline

Performance varies by baseline. Against naive ORM patterns (N+1 queries), expect 100-300x improvement. Against basic Python with JOINs, expect 20-60x improvement. Against fully optimized stacks with Redis and DataLoaders, expect 2-3x improvement—but with simpler architecture.

Honest About Writes

FraiseQL's advantage is primarily in reads. Write performance (~3.5-5.5ms) is similar to optimized stacks due to table view sync overhead. For write-heavy workloads (>50% writes), evaluate carefully.

Read performance based on nested query (1 user + 10 posts + 50 comments). Benchmarks run on a 10-year-old dual-core laptop—expect higher throughput on modern hardware.

Key insight: The 2-3x advantage vs optimized stacks comes with simpler ops—no Redis, no cache invalidation logic, no DataLoader boilerplate.

Reproduce these benchmarks →

💰 Eliminate Redis, Simplify Ops

Replace caching infrastructure with pre-computed table views. Fewer moving parts, lower costs.

Line item        Traditional Stack   FraiseQL Stack              Annual Savings
PostgreSQL       $50/mo              $50-80/mo (more storage)    -$360/yr
Redis Cloud      $50-500/mo          ✅ Not needed                $600-6,000/yr
Extra compute    4 instances         ✅ 1-2 instances             $1,200-2,500/yr
Total            $350-1,200/mo       $120-250/mo                 $2,800-11,400/yr

Savings depend on your current optimization level. If you're already running Redis and multiple API instances, expect significant savings. Simpler stacks see smaller absolute savings but benefit from reduced complexity.

When NOT to Use FraiseQL

We believe in honest engineering. FraiseQL isn't for everyone:

  • Multiple database support — PostgreSQL 13+ only, by design
  • Existing ORM patterns — Large codebases with different patterns
  • GraphQL-layer business logic — Use PostgreSQL Functions instead
  • Simple CRUD-only APIs — REST might be simpler for basic needs

If these are deal-breakers, check out Hasura or PostGraphile.