Confiture Overview
All 4 Mediums and when to use each one.
The Schema-to-Schema medium handles major schema refactoring on production databases with zero downtime. It runs old and new schemas side-by-side using PostgreSQL Foreign Data Wrapper (FDW), migrates data in the background, and cuts over when ready.
Incremental migrations work for simple changes (add column, create index). But some changes, such as renaming columns, changing column types, or restructuring tables, are too disruptive.
These operations lock tables, block writes, and can cause minutes of downtime on large tables. Schema-to-Schema avoids this entirely.
The process runs in 5 steps:
Setup — Create a new database with the new schema alongside the existing one:
```bash
confiture migrate schema-to-schema setup --source prod --target prod_new
```

This creates prod_new, applies your updated DDL files to it, and sets up a PostgreSQL FDW connection from prod to prod_new. Your application continues running against prod throughout.
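For context, an FDW link of this shape is ordinary postgres_fdw plumbing. The sketch below lists the kind of statements involved; every name in it (server, host, user, schema) is an illustrative assumption, not what Confiture actually emits:

```python
# Rough sketch of the postgres_fdw plumbing behind a setup like this.
# All names (server, host, user, schema) are illustrative assumptions.
fdw_setup = [
    "CREATE EXTENSION IF NOT EXISTS postgres_fdw;",
    # A foreign server in prod pointing at the new database
    "CREATE SERVER prod_new_srv FOREIGN DATA WRAPPER postgres_fdw "
    "OPTIONS (host 'localhost', dbname 'prod_new');",
    "CREATE USER MAPPING FOR CURRENT_USER SERVER prod_new_srv "
    "OPTIONS (user 'confiture');",
    # A local schema holding foreign-table stubs for prod_new's tables
    "CREATE SCHEMA IF NOT EXISTS prod_new;",
    "IMPORT FOREIGN SCHEMA public FROM SERVER prod_new_srv INTO prod_new;",
]
print("\n".join(fdw_setup))
```

Once the foreign tables exist, queries in prod can read from and write to prod_new as if its tables were local, which is what makes the background migration possible.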
Migrate Data — Copy data from the old schema to the new one in the background:
```bash
confiture migrate schema-to-schema migrate --target prod_new
```

Confiture automatically selects the best strategy per table:
| Strategy | Throughput | Best For |
|---|---|---|
| FDW | ~500K rows/sec | Tables under 10M rows, complex SQL transformations |
| COPY | ~6M rows/sec | Tables over 10M rows, bulk data |
Your application stays on the old schema while this runs.
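The table above suggests a simple per-table decision rule. A minimal sketch, assuming a 10M-row cutoff and that SQL transformations force the FDW path (both assumptions, not Confiture's documented internals):

```python
# Minimal sketch of a per-table strategy choice based on the table above.
# The 10M-row cutoff and the "transforms force FDW" rule are assumptions.
ROW_CUTOFF = 10_000_000

def pick_strategy(row_count: int, needs_sql_transform: bool) -> str:
    """FDW supports complex SQL transformations; COPY wins on raw throughput."""
    if needs_sql_transform or row_count < ROW_CUTOFF:
        return "FDW"
    return "COPY"

print(pick_strategy(12_450, needs_sql_transform=False))      # FDW
print(pick_strategy(50_000_000, needs_sql_transform=False))  # COPY
```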
Verify — Check data integrity before cutting over:
```bash
confiture migrate schema-to-schema verify --target prod_new
```

```
✓ Row counts match
  → tb_user: 12,450 (source) = 12,450 (target)
  → tb_post: 45,230 (source) = 45,230 (target)
✓ Checksum verification passed
✓ Foreign key integrity verified
```

Cutover — Update your application to point to the new database:
```bash
# Update environment variable or config
DATABASE_URL=postgresql://prod_new_host:5432/prod_new

# Restart application (0-5s downtime)
```

The cutover is a config change — the only downtime is the application restart.
Cleanup — After monitoring the new database for a period:
```bash
confiture migrate schema-to-schema cleanup --target prod_new
```

This removes the FDW connection and (optionally) the old database.
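The row-count comparison that the verify step reports (step 3 above) boils down to per-table equality. A toy sketch, with hard-coded counts standing in for real queries against both databases:

```python
# Toy sketch of the verify step's row-count check. Counts are hard-coded here;
# in reality they would come from queries against the source and target.
def row_count_mismatches(source: dict, target: dict) -> list:
    """Return one message per table whose source and target counts differ."""
    return [
        f"{table}: {n} (source) != {target.get(table)} (target)"
        for table, n in source.items()
        if target.get(table) != n
    ]

source = {"tb_user": 12_450, "tb_post": 45_230}
target = {"tb_user": 12_450, "tb_post": 45_230}
print(row_count_mismatches(source, target))  # [] -> row counts match
```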
When renaming or transforming columns, provide a mapping file:
```yaml
tb_user:
  # Simple rename
  username: login_name

  # Type transformation
  metadata:
    source_column: settings_json
    transform: "settings_json::jsonb"

  # Computed column
  display_name:
    transform: "COALESCE(display_name, name)"
```

Then pass the mapping to the migrate command:

```bash
confiture migrate schema-to-schema migrate \
  --target prod_new \
  --mapping column_mapping.yaml
```

Renaming tb_user.username to tb_user.login_name on a table with 50M rows:
```bash
# 1. Update DDL files with new column name
#    db/schema/01_write/tb_user.sql now uses login_name

# 2. Setup new database
confiture migrate schema-to-schema setup --source prod --target prod_new

# 3. Migrate with column mapping
confiture migrate schema-to-schema migrate \
  --target prod_new \
  --mapping column_mapping.yaml

# 4. Verify
confiture migrate schema-to-schema verify --target prod_new

# 5. Update app config, restart

# 6. Monitor, then cleanup
confiture migrate schema-to-schema cleanup --target prod_new
```
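Each mapping entry ultimately describes one expression in a column-level SELECT. A sketch of that translation under assumed rules (a string value is the rename target; source_column/transform give the expression for the keyed target column), not Confiture's actual internals:

```python
# Sketch: turning mapping entries like the YAML above into SELECT expressions.
# The translation rules here are assumptions, not Confiture's internals.
mapping = {
    "username": "login_name",                        # old_name: new_name
    "metadata": {"source_column": "settings_json",
                 "transform": "settings_json::jsonb"},
    "display_name": {"transform": "COALESCE(display_name, name)"},
}

def select_expr(key: str, rule) -> str:
    if isinstance(rule, str):                        # simple rename
        return f"{key} AS {rule}"
    expr = rule.get("transform", rule.get("source_column", key))
    return f"{expr} AS {key}"

select_list = ", ".join(select_expr(k, r) for k, r in mapping.items())
print(f"SELECT {select_list} FROM tb_user")
```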
The other three mediums:
Incremental Migrations — For simpler changes that don’t require zero-downtime migration.

Build from DDL — For fresh databases: fast setup from DDL files.

Production Data Sync — Copy production data locally with built-in anonymization.