Confiture Overview
All 4 Mediums and when to use each one.
The Migrate medium applies incremental schema changes to existing databases. When you can’t drop and rebuild (production, staging with real data), migrations let you evolve the schema while preserving data.
Each migration is a pair of SQL files — one to apply the change (up), one to reverse it (down):
```bash
# Generate a new migration
confiture migrate generate add_bio_to_users
```

This creates two files:

```sql
-- up: apply the change
ALTER TABLE tb_user ADD COLUMN bio TEXT;
```

```sql
-- down: reverse the change
ALTER TABLE tb_user DROP COLUMN bio;
```

```bash
# Apply all pending migrations
confiture migrate up --env production
```
```bash
# Roll back the last migration
confiture migrate down --env production
```
```bash
# Check migration status
confiture migrate status --env production
```
```bash
# Dry run — show what would be applied
confiture migrate up --env production --dry-run
```
```bash
# Apply a specific number of migrations
confiture migrate up --env production --steps 3
```

```bash
confiture migrate status --env production
```

```
Migration               Status     Applied At
001_add_bio_to_users    ✓ applied  2026-01-15 10:30:00
002_add_avatar_url      ✓ applied  2026-01-20 14:22:00
003_create_tb_comment   ✗ pending  —
004_add_comment_views   ✗ pending  —
```

Add `--format json` for machine-readable output:
```bash
confiture migrate status --env production --format json
```

```json
{
  "tracking_table": "public.tb_confiture",
  "applied": ["001", "002"],
  "pending": ["003", "004"],
  "migrations": [
    {"version": "001", "name": "add_bio_to_users", "status": "applied", "applied_at": "2026-01-15T10:30:00+00:00"},
    {"version": "003", "name": "create_tb_comment", "status": "pending", "applied_at": null}
  ],
  "summary": {"applied": 2, "pending": 2, "total": 4}
}
```

`migrate status` exits with a semantic exit code for use in deployment scripts:
| Exit code | Meaning |
|---|---|
| 0 | All migrations applied — database is up to date |
| 1 | Pending migrations exist |
| 2 | Migrations table not initialised (`migrate up` needed) |
| 3 | Connection error — check config |
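The same gate can be expressed in deployment tooling written in Python. This is an illustrative sketch, not part of Confiture: `migration_gate` is a hypothetical helper name, and the exit-code meanings are exactly those documented in the table above.

```python
def migration_gate(exit_code: int) -> str:
    """Map a `confiture migrate status` exit code to a deploy action.

    Exit-code meanings follow the table above; the action strings are
    illustrative and would be interpreted by your own deploy script.
    """
    actions = {
        0: "up-to-date",                    # nothing to do
        1: "run-migrate-up",                # pending migrations exist
        2: "initialise-tracking-table",     # tracking table missing
        3: "check-connection-config",       # could not connect
    }
    try:
        return actions[exit_code]
    except KeyError:
        raise ValueError(f"unexpected exit code: {exit_code}")
```

A CI job would call `confiture migrate status`, capture the process exit code, and branch on the returned action.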
```bash
confiture migrate status -c prod.yaml
case $? in
  0) echo "Up to date" ;;
  1) confiture migrate up -c prod.yaml ;;
  2) echo "Run migrate up to initialise tracking table" ;;
  3) echo "Connection error — check your config" ;;
esac
```

More example up/down pairs:

```sql
-- up
ALTER TABLE tb_user ADD COLUMN avatar_url TEXT;

-- down
ALTER TABLE tb_user DROP COLUMN avatar_url;
```

```sql
-- up
CREATE TABLE tb_comment (
    pk_comment INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    id UUID DEFAULT gen_random_uuid() UNIQUE NOT NULL,
    identifier TEXT UNIQUE NOT NULL,
    fk_user INTEGER NOT NULL REFERENCES tb_user(pk_user),
    fk_post INTEGER NOT NULL REFERENCES tb_post(pk_post),
    content TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- down
DROP TABLE tb_comment;
```

```sql
-- up
CREATE INDEX idx_tb_post_fk_user ON tb_post(fk_user);
CREATE INDEX idx_tb_post_slug ON tb_post(slug);

-- down
DROP INDEX idx_tb_post_fk_user;
DROP INDEX idx_tb_post_slug;
```

When you add a column to a table, you typically also update the corresponding view:
```sql
-- Add column to table
ALTER TABLE tb_user ADD COLUMN avatar_url TEXT;

-- Recreate view to include new field
CREATE OR REPLACE VIEW v_user AS
SELECT
    u.id,
    jsonb_build_object(
        'id', u.id::text,
        'name', u.name,
        'email', u.email,
        'avatar_url', u.avatar_url  -- New field
    ) AS data
FROM tb_user u;
```

The down migration reverses both steps:

```sql
-- Recreate view without new field
CREATE OR REPLACE VIEW v_user AS
SELECT
    u.id,
    jsonb_build_object(
        'id', u.id::text,
        'name', u.name,
        'email', u.email
    ) AS data
FROM tb_user u;

-- Drop column
ALTER TABLE tb_user DROP COLUMN avatar_url;
```

For migrations that transform existing data:
```sql
-- Add column
ALTER TABLE tb_user ADD COLUMN display_name TEXT;

-- Populate from existing data
UPDATE tb_user SET display_name = name WHERE display_name IS NULL;

-- Make non-nullable after population
ALTER TABLE tb_user ALTER COLUMN display_name SET NOT NULL;
```

Migrations are auto-numbered with a 3-digit prefix:
```
db/migrations/
├── 001_initial_schema.up.sql
├── 001_initial_schema.down.sql
├── 002_add_bio_to_users.up.sql
├── 002_add_bio_to_users.down.sql
├── 003_create_tb_comment.up.sql
└── 003_create_tb_comment.down.sql
```

Confiture auto-increments the version number when you run `confiture migrate generate`.
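The auto-numbering scheme can be reproduced in a few lines. This is an illustrative sketch, not Confiture's actual implementation: `next_version` is a hypothetical helper that scans the migrations directory for the highest 3-digit prefix and zero-pads the next one.

```python
import re
from pathlib import Path


def next_version(migrations_dir: str = "db/migrations") -> str:
    """Return the next zero-padded 3-digit version prefix.

    Only `*.up.sql` files are scanned, so each migration pair is
    counted once. An empty directory yields "001".
    """
    versions = [
        int(m.group(1))
        for p in Path(migrations_dir).glob("*.up.sql")
        if (m := re.match(r"(\d{3})_", p.name))
    ]
    return f"{max(versions, default=0) + 1:03d}"
```

With the layout shown above (highest prefix `003`), this would produce `004` for the next generated migration.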
Confiture can be used as a Python library — no CLI required:
```python
from confiture import Migrator

with Migrator.from_config("db/environments/prod.yaml") as m:
    result = m.status()
    print(f"{result.summary['pending']} pending migrations")
    if result.has_pending:
        up_result = m.up()
        print(f"Applied {len(up_result.applied)} migrations")
```

Key classes and methods:
| Symbol | Description |
|---|---|
| `Migrator.from_config(config)` | Accepts an `Environment`, `Path`, or `str` path to a config file |
| `MigratorSession` | Context manager — connection is always closed on exit |
| `.status()` → `StatusResult` | Check migration state |
| `.up(**kwargs)` → `MigrateUpResult` | Apply pending migrations |
| `.down(steps, dry_run)` → `MigrateDownResult` | Roll back migrations |
| `.reinit(through, dry_run)` → `MigrateReinitResult` | Re-initialise tracking |
`migrate up` JSON output (v0.6.2+):
```json
{
  "applied": ["003", "004"],
  "skipped": [],
  "errors": [],
  "total_duration_ms": 412
}
```

Note: the keys `migrations_applied`, `total_execution_time_ms`, and `execution_time_ms` (per migration) were renamed to `applied`, `total_duration_ms`, and `duration_ms` in v0.6.2. Update any scripts that parse this output.
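Scripts that must work across the v0.6.2 rename can normalise the payload before reading it. A minimal sketch, assuming only the key names documented above; `normalize_up_output` is an illustrative helper, not part of Confiture:

```python
import json

# Keys renamed in Confiture v0.6.2 (see the note above); old name -> new name.
RENAMES = {
    "migrations_applied": "applied",
    "total_execution_time_ms": "total_duration_ms",
    "execution_time_ms": "duration_ms",  # per-migration timing
}


def normalize_up_output(raw: str) -> dict:
    """Parse `migrate up` JSON, translating pre-v0.6.2 keys to current names."""
    def fix(obj):
        if isinstance(obj, dict):
            return {RENAMES.get(k, k): fix(v) for k, v in obj.items()}
        if isinstance(obj, list):
            return [fix(v) for v in obj]
        return obj

    return fix(json.loads(raw))
```

Output that already uses the new key names passes through unchanged, so the shim is safe to keep after upgrading.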
In practice, you maintain both DDL files and migrations:
- `db/schema/` — the current state of your schema (used by `confiture build`)
- `db/migrations/` — the changes applied to existing databases (used by `confiture migrate`)
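Laid out side by side, a project might look like this. The `environments/` directory is inferred from the `db/environments/prod.yaml` path used earlier; the comments are illustrative:

```
db/
├── schema/        # current schema state (confiture build)
├── migrations/    # incremental changes (confiture migrate)
└── environments/  # per-environment configs, e.g. prod.yaml
```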
Migrate is one of Confiture's four mediums. The others, and when to use each one:
- **Build from DDL**: for fresh databases; faster than replaying migrations.
- **Schema-to-Schema**: for zero-downtime refactoring of large production tables.
- **Production Data Sync**: copy production data locally with built-in anonymization.