For AI Engineers

Framework optimized for LLM-assisted development

The Problem with LLM Code Generation

AI code assistants excel at patterns but struggle with complexity. Most frameworks work against them:

  • High token cost: Resolvers, ORMs, middleware = lots of scaffolding code to generate
  • Ambiguous patterns: "Which way should I write this resolver?" → confusion, hallucinations
  • Runtime surprises: "Does this query have an N+1 problem?" → models can't verify
  • Large API surface: More surface area = more ways to go wrong
  • Context bloat: To generate correctly, LLMs need extensive examples and rules

FraiseQL: Optimized for AI Code Generation

1. Crystal Clear Patterns

Only 4 patterns to memorize. LLMs produce consistent, correct code.

  • Tables: `tb_entity`
  • Views: `v_entity`
  • IDs: UUID (never string, never int)
  • Composition: JSONB with `jsonb_build_object()`

Impact: Models don't need 50 examples. 4 patterns = consistent generation.
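The patterns are mechanical enough to check programmatically. A minimal sketch in Python (`follows_patterns` and its regexes are illustrative helpers, not part of FraiseQL's API; the fourth pattern, JSONB composition, lives in the view body and isn't checked here):

```python
import re
import uuid

# Illustrative checks mirroring three of the four conventions above.
# Hypothetical helper -- not a FraiseQL API.
def follows_patterns(table: str, view: str, id_value) -> bool:
    return (
        re.fullmatch(r"tb_[a-z_]+", table) is not None    # Tables: tb_entity
        and re.fullmatch(r"v_[a-z_]+", view) is not None  # Views: v_entity
        and isinstance(id_value, uuid.UUID)               # IDs: always UUID
    )

print(follows_patterns("tb_user", "v_user", uuid.uuid4()))  # True
print(follows_patterns("users", "user_view", "42"))         # False
```

Fixed conventions like these are exactly what a compile step can enforce, which is what makes the feedback loop below terminate.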

2. Minimal Scaffolding

Compare code generation size:

Traditional GraphQL
# Generate resolvers + ORMs + types + validation
@resolver
async def user(info, id):
    user = db.query(...)
    return user

@resolver
async def user_posts(parent, info):
    posts = db.query(...)
    return posts

# + DataLoader setup
# + Cache invalidation
# + Error handling

~80 lines of generated code per type
FraiseQL
-- SQL view (human-written once)
CREATE VIEW v_user_with_posts AS ...

# Python type definition
@fraiseql.type
class User:
    id: ID
    name: str
    posts: list[Post]

# Query resolver
@fraiseql.query(sql_source="v_user_with_posts")
def user(id: ID) -> User:
    pass

~15 lines of generated code per type

Impact: 5x less code to generate = 5x fewer tokens = faster, cheaper generations.
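The FraiseQL resolver above is pure declaration: its body is empty because the framework executes the named view. A stand-in sketch of how such a decorator can record the mapping (`query` here imitates `fraiseql.query`; the real implementation differs):

```python
# Stand-in for fraiseql.query: records which view backs which resolver.
SQL_SOURCES: dict[str, str] = {}

def query(sql_source: str):
    def wrap(fn):
        SQL_SOURCES[fn.__name__] = sql_source
        return fn
    return wrap

@query(sql_source="v_user_with_posts")
def user(id):
    pass  # intentionally empty: the framework runs the view, not this body

print(SQL_SOURCES["user"])  # v_user_with_posts
```

Because the resolver carries no logic, there is nothing for an LLM to get wrong inside it: the only degrees of freedom are the view name and the type signature.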

3. Deterministic Verification

LLMs can verify correctness through compilation:

# LLM generates code
# System compiles with: fraiseql build

# If any patterns wrong:
ERROR: Table 'tb_user' has id field as STRING
Expected: UUID

# LLM sees error, fixes, regenerates
# ✓ Loop terminates quickly with correct code

Impact: No guessing. Compile-time feedback = faster iteration.
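The error-driven loop can be sketched framework-agnostically. `generate` and `build` below are stand-ins (a real `build` would shell out to `fraiseql build` and inspect the exit code); the simulated run is illustrative:

```python
def generate_until_valid(generate, build, prompt: str, max_iters: int = 5) -> str:
    """Regenerate code until the build step reports no pattern errors."""
    for _ in range(max_iters):
        code = generate(prompt)
        ok, error = build(code)  # e.g. run `fraiseql build`, parse the result
        if ok:
            return code
        prompt += f"\nFix this error:\n{error}"  # feed the error back to the LLM
    raise RuntimeError("generation did not converge")

# Simulated run: the first attempt uses a str id, the retry uses UUID.
attempts = iter(["id: str", "id: UUID"])
code = generate_until_valid(
    generate=lambda p: next(attempts),
    build=lambda c: (True, "") if "UUID" in c else (False, "id must be UUID, not str"),
    prompt="Generate a User type.",
)
print(code)  # id: UUID
```

The loop terminates quickly because the compiler's errors name the exact pattern violated, so each retry is a small, targeted fix rather than a blind regeneration.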

4. Predictable Performance

LLMs don't need to worry about N+1 queries:

  • ✅ One view = one query (always)
  • ✅ No hidden performance footguns
  • ✅ No need to explain DataLoader patterns
  • ✅ Composition happens at database level (reliable)

Impact: Generated code is production-ready without performance audits.
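The "one view = one query" guarantee is why no performance audit is needed. A toy comparison of query counts (numbers are illustrative):

```python
# N+1: a resolver-per-field design issues 1 query for the users list,
# then one more per user for posts. A composed view returns nested JSON
# in a single round trip, so the count stays at 1 regardless of N.
def n_plus_one_queries(n_users: int) -> int:
    return 1 + n_users

def composed_view_queries(n_users: int) -> int:
    return 1  # one view = one query, independent of n_users

print(n_plus_one_queries(100))     # 101
print(composed_view_queries(100))  # 1
```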

5. Small API Surface

FraiseQL does exactly one thing: map database views to GraphQL types.

Everything else is standard tools (auth, caching, databases).

# What LLMs need to know:
✓ SQL: Create views with JSONB composition
✓ Types: Define schema types matching views
✓ Queries: Declare sql_source="v_name"
✓ Mutations: Declare sql_source="fn_name", operation="CREATE|UPDATE|DELETE"

# What LLMs DON'T need to know:
✗ Resolver patterns (there are none)
✗ DataLoader configuration (not needed)
✗ N+1 prevention strategies (built-in)
✗ Cache invalidation (optional, standard Redis)
✗ Query batching (happens at DB level)

Impact: Smaller knowledge base = fewer tokens + fewer hallucinations.

Token Efficiency Analysis

| Metric | Traditional GraphQL | FraiseQL | Savings |
|---|---|---|---|
| System Prompt Tokens | 2,000+ (patterns, examples, rules) | 400 (4 patterns + examples) | 80% |
| Per-Type Generation Tokens | 600-800 | 150-200 | 75% |
| Context Window Usage | ~30% (for medium API) | ~5% (for same API) | 83% |
| Regeneration Rate (errors) | ~40% (hallucinations) | ~5% (compile feedback) | 87.5% |
| Total Cost per API | $50-100 | $5-10 | 80-90% |

Estimates based on typical 50-type GraphQL API generated via Claude API or similar.
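The token figures compose into a rough cost model. A back-of-envelope sketch using the document's own estimates (not measurements; per-type figures are the midpoints of the ranges above):

```python
# Per-API token budget for a 50-type schema, using the estimates above.
TYPES = 50
traditional = 2_000 + TYPES * 700  # system prompt + ~700 tokens per type
fraiseql = 400 + TYPES * 175       # system prompt + ~175 tokens per type

savings = 1 - fraiseql / traditional
print(f"{savings:.0%}")  # prints "75%"
```

The remaining gap to the 80-90% total-cost row comes from the regeneration rate: retries multiply the traditional budget far more than the compile-checked one.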

AI-Powered Workflow

1. Describe Your Data

Tell Claude your data model:

"I have users with posts. Each user has many posts.
Generate SQL views and Python types for a GraphQL API."
2. LLM Generates Code

Claude knows FraiseQL patterns. Generates:

  • SQL: CREATE TABLE, CREATE VIEW
  • Python: @fraiseql.type definitions
  • Resolvers: @fraiseql.query/@fraiseql.mutation

Cost: ~200 tokens

3. Compile & Verify

Run build pipeline:

fraiseql build

# Either:
✓ All patterns valid → code is ready
✗ Pattern mismatch → clear error message
4. LLM Fixes & Iterates

If error, show Claude the error:

"ERROR: id field must be UUID, not str"

Claude regenerates that section (50 tokens).

Repeat until compile succeeds.

5. Deploy with Confidence

Compiled code = production-ready.

No manual audits for N+1, security issues, or performance problems.

Comparison: AI-Friendly Frameworks

| Aspect | FraiseQL | Traditional GraphQL | Django/FastAPI |
|---|---|---|---|
| Pattern Clarity | 4 patterns | 20+ patterns | 50+ patterns |
| System Prompt Tokens | ~400 | ~2,000 | ~3,000 |
| Code per Type (lines) | 15 | 80 | 100 |
| Error Rate | ~5% | ~40% | ~60% |
| Self-Correcting | ✅ Via compilation | ❌ Manual review | ❌ Manual testing |
| Cost per API | $5-10 | $50-100 | $100-200 |

Use Cases for AI Developers

1. Rapid Prototyping

Build full APIs in minutes, not hours. LLMs typically generate working code on the first try.

2. Multi-Service Architecture

Each microservice can be generated independently. Same patterns = consistent across services.

3. Data Model Experiments

Try new schema designs. LLM regenerates all SQL/types in seconds. No manual refactoring.

4. Type-Safe Codegen

Generated code passes TypeScript/Python type checks immediately. No post-generation fixes.

5. Pattern Learning

LLMs learn FraiseQL patterns from examples. After 2-3 APIs, they generate correct code consistently.

6. Automated Testing

LLMs can generate test cases for views and resolvers. Patterns are predictable = testable.
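Because the patterns are fixed, the tests an LLM generates are equally predictable. A sketch of such a generated check (the view's row shape is hypothetical, following the composition pattern above):

```python
import uuid

# The kind of test an LLM could emit for a composed view's output row.
def check_user_row(row: dict) -> None:
    uuid.UUID(row["id"])                   # IDs must parse as UUIDs, never str/int
    assert isinstance(row["posts"], list)  # composition nests posts inline
    for post in row["posts"]:
        uuid.UUID(post["id"])

check_user_row({
    "id": "00000000-0000-0000-0000-000000000001",
    "posts": [{"id": "00000000-0000-0000-0000-000000000002"}],
})
print("ok")
```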

Integration with AI Code Assistants

Claude / ChatGPT

Provide FraiseQL patterns in system prompt. Models generate correct code consistently.

System Prompt:
"Use FraiseQL for database APIs.
Tables: tb_entity
Views: v_entity with JSONB composition
IDs: UUID type always
..."

GitHub Copilot

Once FraiseQL patterns appear in your codebase, autocomplete suggests matching SQL views and type definitions.

IDE Plugins

Validate FraiseQL patterns as you type. Error feedback in real-time.

CI/CD Pipelines

Build step validates all generated code. Fails fast if patterns violated.

Why FraiseQL is Built for AI

"Patterns + constraints = what AI does best"

AI excels at:

  • Following clear, consistent patterns
  • Generating boilerplate code
  • Iterating based on error feedback
  • Scaling solutions to multiple examples

FraiseQL provides exactly that:

  • ✅ 4 core patterns (clear, consistent)
  • ✅ Minimal scaffolding (less boilerplate to hallucinate)
  • ✅ Compile-time feedback (errors are clear)
  • ✅ Scalable to many APIs (patterns don't change)

Result: AI-generated code that's production-ready, not toy code.

Ready to Generate Better APIs?