
Your First API

Build a complete blog API with users, posts, and comments. This guide shows the full workflow: write tables, write views, build the database with 🍯 Confiture, compile the GraphQL mapping, and serve.

By the end you'll have a GraphQL API with:

  • Users with profiles
  • Blog posts with authors
  • Comments on posts
  • Queries and mutations

This guide requires a running database.

FraiseQL supports PostgreSQL, MySQL, SQLite, and SQL Server.
For this tutorial, we recommend PostgreSQL for the best JSON support and the smoothest experience.

Option A — Docker (Fastest Setup)

docker run --name blog-postgres \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=db_blog \
  -p 5432:5432 \
  -d postgres:16

Verify the connection:

psql postgresql://user:password@localhost:5432/db_blog

Enable UUID support (gen_random_uuid() is built into PostgreSQL 13 and later; enabling pgcrypto keeps it working on older versions):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

Connection String

Point FraiseQL at the database in fraiseql.toml:

[database]
type = "postgresql"
url = "postgresql://user:password@localhost:5432/db_blog"

Documentation:
PostgreSQL Docker image
pgcrypto extension

Once your database is running and configured, continue to the next step.

mkdir blog-api && cd blog-api
fraiseql init .

This creates the project structure including the db/schema/ directory where your SQL lives.

Create your tables in db/schema/01_write/. The schema uses the Trinity pattern: a surrogate primary key (pk_*), an external UUID identifier (id), and a human-readable text identifier (identifier). This pattern works across all supported databases:

db/schema/01_write/tb_user.sql
CREATE TABLE tb_user (
  /*keys*/
  pk_user BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  id UUID NOT NULL UNIQUE DEFAULT gen_random_uuid(),
  identifier TEXT UNIQUE NOT NULL,
  /*business fields*/
  username VARCHAR(255) NOT NULL UNIQUE,
  email VARCHAR(255) NOT NULL UNIQUE,
  bio TEXT,
  /*audit fields*/
  created_at TIMESTAMPTZ DEFAULT NOW()
);
db/schema/01_write/tb_post.sql
CREATE TABLE tb_post (
  /*keys*/
  pk_post BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  id UUID NOT NULL UNIQUE DEFAULT gen_random_uuid(),
  identifier TEXT UNIQUE NOT NULL,
  fk_user BIGINT NOT NULL REFERENCES tb_user(pk_user),
  /*business fields*/
  title TEXT NOT NULL,
  slug TEXT UNIQUE NOT NULL,
  content TEXT NOT NULL,
  is_published BOOLEAN DEFAULT false,
  /*audit fields*/
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);

/*indexes*/
CREATE INDEX ix_tb_post_fk_user ON tb_post(fk_user);
db/schema/01_write/tb_comment.sql
CREATE TABLE tb_comment (
  /*keys*/
  pk_comment BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  id UUID NOT NULL UNIQUE DEFAULT gen_random_uuid(),
  identifier TEXT UNIQUE NOT NULL,
  fk_user BIGINT NOT NULL REFERENCES tb_user(pk_user),
  fk_post BIGINT NOT NULL REFERENCES tb_post(pk_post),
  /*business fields*/
  content TEXT NOT NULL,
  /*audit fields*/
  created_at TIMESTAMPTZ DEFAULT NOW()
);

/*indexes*/
CREATE INDEX ix_tb_comment_fk_user ON tb_comment(fk_user);
CREATE INDEX ix_tb_comment_fk_post ON tb_comment(fk_post);

Note on updated_at: Only MySQL auto-updates this column via ON UPDATE CURRENT_TIMESTAMP. On PostgreSQL, SQLite, and SQL Server, your database functions (called via mutations) must set updated_at explicitly — the column records what your function passes, not a database-managed timestamp.
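On PostgreSQL you can instead let the database stamp the column itself with a trigger. This is an optional sketch, not part of the schema above:

```sql
-- Optional PostgreSQL alternative: a trigger that stamps updated_at on every
-- UPDATE, so mutation functions don't have to remember to set it.
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
  NEW.updated_at := NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_tb_post_updated_at
  BEFORE UPDATE ON tb_post
  FOR EACH ROW
  EXECUTE FUNCTION set_updated_at();
```

If you add the trigger, your mutation functions can leave updated_at alone; either approach works, but pick one so the column has a single source of truth.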

Create views in db/schema/02_read/ following the .data JSONB pattern:

db/schema/02_read/v_user.sql
CREATE VIEW v_user AS
SELECT
  u.id,
  jsonb_build_object(
    /*keys*/
    'id', u.id::text,
    /*scalar fields*/
    'username', u.username,
    'email', u.email,
    'bio', u.bio,
    'created_at', u.created_at
  ) AS data
FROM tb_user u;
db/schema/02_read/v_comment.sql
CREATE VIEW v_comment AS
SELECT
  c.id,
  c.fk_post,
  jsonb_build_object(
    /*keys*/
    'id', c.id::text,
    /*scalar fields*/
    'content', c.content,
    'created_at', c.created_at,
    /*relationships*/
    'author', vu.data
  ) AS data
FROM tb_comment c
JOIN tb_user u ON u.pk_user = c.fk_user
JOIN v_user vu ON vu.id = u.id;
db/schema/02_read/v_post.sql
CREATE VIEW v_post AS
SELECT
  p.id,
  jsonb_build_object(
    /*keys*/
    'id', p.id::text,
    /*scalar fields*/
    'title', p.title,
    'slug', p.slug,
    'content', p.content,
    'is_published', p.is_published,
    /*relationships*/
    'author', vu.data,
    'comments', COALESCE(
      jsonb_agg(vc.data) FILTER (WHERE vc.id IS NOT NULL),
      '[]'::jsonb
    )
  ) AS data
FROM tb_post p
JOIN tb_user u ON u.pk_user = p.fk_user
JOIN v_user vu ON vu.id = u.id
LEFT JOIN v_comment vc ON vc.fk_post = p.pk_post
GROUP BY p.pk_post, vu.data;

Key insight: Views compose other views. v_post uses vu.data and jsonb_agg(vc.data) — no duplication of field definitions. Add a field to v_user once, and it appears everywhere v_user.data is embedded.
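The composition can be sketched in plain Python. This is an analogy only, not FraiseQL code: each function plays the role of a view, shaping a row the way jsonb_build_object does:

```python
# Plain-Python analogy of the view composition (illustrative, not FraiseQL code).
# Each "view" is a function that shapes a row like jsonb_build_object does.

def v_user(u):
    # Mirrors v_user: one JSON-like object per user row.
    return {"id": u["id"], "username": u["username"], "email": u["email"]}

def v_comment(c, users):
    # Mirrors v_comment: embeds the author's v_user data wholesale.
    return {"id": c["id"], "content": c["content"],
            "author": v_user(users[c["fk_user"]])}

def v_post(p, comments, users):
    # Mirrors v_post: embeds the author and aggregates comment data,
    # like jsonb_agg(...) with COALESCE to an empty list.
    return {
        "id": p["id"],
        "title": p["title"],
        "author": v_user(users[p["fk_user"]]),
        "comments": [v_comment(c, users)
                     for c in comments if c["fk_post"] == p["pk"]],
    }

users = {1: {"id": "u1", "username": "alice", "email": "alice@example.com"}}
post = {"pk": 10, "id": "p1", "title": "Hello World", "fk_user": 1}
comments = [{"id": "c1", "content": "Nice!", "fk_user": 1, "fk_post": 10}]

data = v_post(post, comments, users)
# data["author"] and data["comments"][0]["author"] both come from v_user,
# so adding a field to v_user changes both places at once.
```

Change v_user to return one more field and it appears both under author and inside every comment's author, which is exactly why adding a field to the SQL v_user propagates everywhere its data is embedded.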

Replace schema.py with:

schema.py
import fraiseql
from fraiseql.scalars import ID, Email, Slug, DateTime


@fraiseql.type
class User:
    """A blog user."""
    id: ID
    username: str
    email: Email
    bio: str | None
    created_at: DateTime
    posts: list['Post']


@fraiseql.type
class Post:
    """A blog post."""
    id: ID
    title: str
    slug: Slug
    content: str
    published: bool
    created_at: DateTime
    updated_at: DateTime
    author: User
    comments: list['Comment']


@fraiseql.type
class Comment:
    """A comment on a post."""
    id: ID
    content: str
    created_at: DateTime
    author: User
    post: Post


# Input types for mutations
@fraiseql.input
class CreateUserInput:
    username: str
    email: Email
    bio: str | None = None


@fraiseql.input
class CreatePostInput:
    title: str
    content: str
    author_id: ID
    published: bool = False


@fraiseql.input
class CreateCommentInput:
    content: str
    post_id: ID
    author_id: ID
Next, configure the project, database connection, and server:

fraiseql.toml
[project]
name = "blog-api"
version = "1.0.0"
[database]
url = "postgresql://user:password@localhost:5432/db_blog"
[server]
port = 8080
host = "127.0.0.1"
[server.cors]
origins = ["http://localhost:3000"]

Use 🍯 Confiture to create the database from your DDL files:

# Build fresh database from all SQL files
confiture build --env local
✓ Schema built successfully in 0.4s
→ Created 3 tables, 3 views

This concatenates and executes all your SQL files in order: tables first, then views.
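The ordering falls out of the directory and file names. Here is a hypothetical sketch of the idea; the real Confiture implementation may differ:

```python
# Hypothetical sketch of build ordering (not Confiture's actual code):
# a lexicographic sort of the schema paths puts 01_write/ (tables) before
# 02_read/ (views), and numeric prefixes order files within a directory.
files = [
    "db/schema/02_read/v_post.sql",
    "db/schema/01_write/tb_user.sql",
    "db/schema/02_read/v_user.sql",
    "db/schema/01_write/tb_post.sql",
]
ordered = sorted(files)  # tables first, then views
```

This is why the directories carry the 01_/02_ prefixes: dependencies (views reference tables, and views reference earlier views) are satisfied purely by naming.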

# Compile the GraphQL mapping
fraiseql compile
# Start the server
fraiseql run

Your API is now running at http://localhost:8080/graphql.
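The Playground is not the only client: the endpoint speaks standard GraphQL over HTTP, so any HTTP client can POST a JSON body with a query field. A minimal sketch of such a request body (the URL comes from the [server] config above):

```python
import json

# Standard GraphQL-over-HTTP request body: a JSON object with a "query"
# string and optional "variables". Any HTTP client can POST this.
url = "http://localhost:8080/graphql"  # host/port from fraiseql.toml
payload = {
    "query": "query($id: ID!) { user(id: $id) { username email } }",
    "variables": {"id": "usr_01HZ3K"},
}
body = json.dumps(payload)

# To actually send it (requires the server to be running):
#   import urllib.request
#   req = urllib.request.Request(url, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```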

Open the GraphQL Playground at http://localhost:8080/graphql and try these queries:

Create a user:

mutation {
  createUser(input: {
    username: "alice"
    email: "alice@example.com"
    bio: "Software developer"
  }) {
    id
    username
  }
}

Response:

{
  "data": {
    "createUser": {
      "id": "usr_01HZ3K",
      "username": "alice"
    }
  }
}
Create a post:

mutation {
  createPost(input: {
    title: "Hello World"
    content: "This is my first post!"
    authorId: "usr_01HZ3K"
    published: true
  }) {
    id
    title
    slug
  }
}

Response:

{
  "data": {
    "createPost": {
      "id": "pst_01HZ4M",
      "title": "Hello World",
      "slug": "hello-world"
    }
  }
}
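Note the slug in the response: the input had no slug field, so the server derived hello-world from the title. The exact rule FraiseQL applies isn't shown here; a typical slugification looks like this (illustrative helper, not FraiseQL's code):

```python
import re

def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into one hyphen,
    # trim leading/trailing hyphens. Illustrative only; FraiseQL's
    # actual slug rule may differ.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

slugify("Hello World")  # "hello-world"
```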
List posts with nested author and comments:

query {
  posts {
    id
    title
    content
    author {
      username
    }
    comments {
      content
      author {
        username
      }
    }
  }
}

Response:

{
  "data": {
    "posts": [
      {
        "id": "pst_01HZ4M",
        "title": "Hello World",
        "content": "This is my first post!",
        "author": {
          "username": "alice"
        },
        "comments": []
      }
    ]
  }
}
Fetch a single user:

query {
  user(id: "usr_01HZ3K") {
    username
    email
    posts {
      title
      published
    }
  }
}

Response:

{
  "data": {
    "user": {
      "username": "alice",
      "email": "alice@example.com",
      "posts": [
        {
          "title": "Hello World",
          "published": true
        }
      ]
    }
  }
}
  1. You wrote the SQL — Tables (tb_*) for writes, views (v_*) for reads
  2. 🍯 Confiture built the database — DDL files concatenated and executed in under 1s
  3. FraiseQL compiled the mapping — GraphQL types mapped to SQL views
  4. Queries resolve to SQL — posts { author { ... } } becomes SELECT data FROM v_post

The nested GraphQL response comes from JSON composition in your views. v_post embeds v_user.data for the author and aggregates v_comment.data for comments. One query per GraphQL request, no application-level N+1.

Sometimes you need a query that cannot be expressed automatically. The v_user view exposes scalar fields only — it does not embed posts. v_post embeds its author (v_user.data), but the reverse — a user together with all their posts — requires explicit SQL composition. This classic reverse relationship is handled cleanly by a custom view.

Define an AuthorProfile type and map it to a dedicated view:

schema.py
@fraiseql.type
class AuthorProfile:
    id: ID
    username: str
    email: Email
    bio: str | None
    created_at: DateTime
    posts: list[Post]


@fraiseql.query(sql_source="v_author_profile")
def author_profile(id: ID) -> AuthorProfile | None:
    """Get an author with all their posts."""
    pass

Then write the view that shapes the data at the SQL level:

db/schema/02_read/v_author_profile.sql
CREATE VIEW v_author_profile AS
SELECT
  u.id,
  jsonb_build_object(
    /*keys*/
    'id', u.id::text,
    /*scalar fields*/
    'username', u.username,
    'email', u.email,
    'bio', u.bio,
    'created_at', u.created_at,
    /*relationships*/
    'posts', COALESCE(
      jsonb_agg(vp.data) FILTER (WHERE vp.id IS NOT NULL),
      '[]'::jsonb
    )
  ) AS data
FROM tb_user u
LEFT JOIN tb_post tp ON tp.fk_user = u.pk_user
LEFT JOIN v_post vp ON vp.id = tp.id
GROUP BY u.pk_user;

This is the power of the database-first approach: shape data for the UI entirely at the SQL level. v_author_profile composes v_post the same way v_post composes v_user — no custom resolvers, no N+1, one query.
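Assuming FraiseQL exposes the query under the camelCase name authorProfile — the usual GraphQL convention for a field named author_profile, though the exact naming rule isn't confirmed above — you could then ask for a user and all their posts in one request:

```graphql
query {
  authorProfile(id: "usr_01HZ3K") {
    username
    posts {
      title
      slug
    }
  }
}
```

Because posts entries come from v_post.data, each one also carries its author and comments, so deeper selections work without any extra views.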

For business logic and side effects, use observers to trigger actions automatically.