# Advanced Patterns

Master advanced FraiseQL patterns for production applications.
## Soft Delete Pattern

Implement undo/restore functionality without losing data.
### Basic Soft Delete

```python
from datetime import datetime

from fraiseql import FraiseQL

@fraiseql.type
class Post:
    id: ID
    title: str
    content: str
    created_at: datetime
    deleted_at: datetime | None  # NULL = not deleted

@fraiseql.query
def posts() -> list[Post]:
    """Get all non-deleted posts."""
    # FraiseQL automatically filters WHERE deleted_at IS NULL
    pass

@fraiseql.query
def posts_including_deleted() -> list[Post]:
    """Get all posts including deleted ones (admin only)."""
    pass

@fraiseql.mutation
def delete_post(id: ID) -> Post:
    """Soft delete: set the deleted_at timestamp."""
    # UPDATE posts SET deleted_at = NOW() WHERE id = ?
    pass

@fraiseql.mutation
def restore_post(id: ID) -> Post:
    """Restore a deleted post."""
    # UPDATE posts SET deleted_at = NULL WHERE id = ?
    pass
```
### Database Implementation

PostgreSQL:

```sql
CREATE TABLE posts (
  id BIGSERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW(),
  deleted_at TIMESTAMP  -- NULL = not deleted
);

-- Partial index: only active posts are indexed
CREATE INDEX idx_posts_deleted_at ON posts(deleted_at)
WHERE deleted_at IS NULL;

-- View for non-deleted posts
CREATE VIEW v_posts_active AS
SELECT * FROM posts WHERE deleted_at IS NULL;
```

MySQL:

```sql
CREATE TABLE posts (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  content LONGTEXT NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  deleted_at TIMESTAMP NULL
);

-- Index for active posts
CREATE INDEX idx_posts_deleted_at ON posts(deleted_at);

-- View for non-deleted posts
CREATE VIEW v_posts_active AS
SELECT * FROM posts WHERE deleted_at IS NULL;
```

SQLite:

```sql
CREATE TABLE posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  deleted_at DATETIME
);

-- Index for active posts
CREATE INDEX idx_posts_deleted_at ON posts(deleted_at);

-- View for non-deleted posts
CREATE VIEW v_posts_active AS
SELECT * FROM posts WHERE deleted_at IS NULL;
```

SQL Server:

```sql
CREATE TABLE dbo.posts (
  id BIGINT IDENTITY(1,1) PRIMARY KEY,
  title NVARCHAR(MAX) NOT NULL,
  content NVARCHAR(MAX) NOT NULL,
  created_at DATETIMEOFFSET DEFAULT SYSDATETIMEOFFSET(),
  deleted_at DATETIMEOFFSET NULL
);

-- Filtered index for active posts
CREATE INDEX idx_posts_deleted_at ON dbo.posts(deleted_at)
WHERE deleted_at IS NULL;

-- View for non-deleted posts
CREATE VIEW dbo.v_posts_active AS
SELECT * FROM dbo.posts WHERE deleted_at IS NULL;
```

### Cascading Soft Deletes

Soft-delete a parent together with its children:
```python
@fraiseql.mutation
def delete_post_with_comments(post_id: ID) -> Post:
    """Delete a post and all its comments."""
    # In the database, via triggers:
    # 1. UPDATE posts SET deleted_at = NOW() WHERE id = ?
    # 2. UPDATE comments SET deleted_at = NOW() WHERE post_id = ?
    pass
```

Database trigger (PostgreSQL; note that PostgreSQL row triggers must call a trigger function):

```sql
CREATE OR REPLACE FUNCTION cascade_post_delete()
RETURNS TRIGGER AS $$
BEGIN
  UPDATE comments
  SET deleted_at = NOW()
  WHERE post_id = NEW.id AND deleted_at IS NULL;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cascade_post_delete
AFTER UPDATE ON posts
FOR EACH ROW
WHEN (NEW.deleted_at IS NOT NULL AND OLD.deleted_at IS NULL)
EXECUTE FUNCTION cascade_post_delete();
```

## Audit Trail Pattern

Track all changes for compliance and debugging.
### Simple Audit Log

```python
from datetime import datetime
from enum import Enum

class AuditAction(str, Enum):
    CREATE = "CREATE"
    UPDATE = "UPDATE"
    DELETE = "DELETE"
    RESTORE = "RESTORE"

@fraiseql.type
class AuditLog:
    id: ID
    entity_type: str  # "Post", "User", etc.
    entity_id: ID
    action: AuditAction
    old_values: dict  # JSON of previous values
    new_values: dict  # JSON of current values
    user_id: ID
    timestamp: datetime

@fraiseql.query
def audit_logs(entity_type: str, entity_id: ID) -> list[AuditLog]:
    """Get the audit trail for an entity."""
    pass

@fraiseql.mutation
def update_post(id: ID, title: str, content: str) -> Post:
    """Update a post with audit logging."""
    # 1. Get old values
    # 2. Update the post
    # 3. Log the change:
    #    { entity_type: "Post", entity_id: id, action: "UPDATE",
    #      old_values: {...}, new_values: {...} }
    pass
```
### Database Implementation

PostgreSQL:

```sql
CREATE TABLE audit_logs (
  id BIGSERIAL PRIMARY KEY,
  entity_type VARCHAR(50) NOT NULL,
  entity_id BIGINT NOT NULL,
  action VARCHAR(20) NOT NULL,
  old_values JSONB,
  new_values JSONB,
  user_id BIGINT NOT NULL,
  timestamp TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_audit_logs_entity ON audit_logs(entity_type, entity_id);
CREATE INDEX idx_audit_logs_user ON audit_logs(user_id);
CREATE INDEX idx_audit_logs_timestamp ON audit_logs(timestamp DESC);

-- View for the audit timeline
CREATE VIEW v_audit_timeline AS
SELECT id, entity_type, entity_id, action, old_values, new_values, user_id, timestamp
FROM audit_logs
ORDER BY timestamp DESC;
```

MySQL:

```sql
CREATE TABLE audit_logs (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  entity_type VARCHAR(50) NOT NULL,
  entity_id BIGINT NOT NULL,
  action VARCHAR(20) NOT NULL,
  old_values JSON,
  new_values JSON,
  user_id BIGINT NOT NULL,
  timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_audit_logs_entity ON audit_logs(entity_type, entity_id);
CREATE INDEX idx_audit_logs_user ON audit_logs(user_id);
CREATE INDEX idx_audit_logs_timestamp ON audit_logs(timestamp DESC);

-- View for the audit timeline
CREATE VIEW v_audit_timeline AS
SELECT id, entity_type, entity_id, action, old_values, new_values, user_id, timestamp
FROM audit_logs
ORDER BY timestamp DESC;
```

SQLite:

```sql
CREATE TABLE audit_logs (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  entity_type VARCHAR(50) NOT NULL,
  entity_id INTEGER NOT NULL,
  action VARCHAR(20) NOT NULL,
  old_values TEXT,
  new_values TEXT,
  user_id INTEGER NOT NULL,
  timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_audit_logs_entity ON audit_logs(entity_type, entity_id);
CREATE INDEX idx_audit_logs_user ON audit_logs(user_id);
CREATE INDEX idx_audit_logs_timestamp ON audit_logs(timestamp DESC);

-- View for the audit timeline
CREATE VIEW v_audit_timeline AS
SELECT id, entity_type, entity_id, action, old_values, new_values, user_id, timestamp
FROM audit_logs
ORDER BY timestamp DESC;
```

SQL Server:

```sql
CREATE TABLE dbo.audit_logs (
  id BIGINT IDENTITY(1,1) PRIMARY KEY,
  entity_type VARCHAR(50) NOT NULL,
  entity_id BIGINT NOT NULL,
  action VARCHAR(20) NOT NULL,
  old_values NVARCHAR(MAX),
  new_values NVARCHAR(MAX),
  user_id BIGINT NOT NULL,
  timestamp DATETIMEOFFSET DEFAULT SYSDATETIMEOFFSET()
);

CREATE INDEX idx_audit_logs_entity ON dbo.audit_logs(entity_type, entity_id);
CREATE INDEX idx_audit_logs_user ON dbo.audit_logs(user_id);
CREATE INDEX idx_audit_logs_timestamp ON dbo.audit_logs(timestamp DESC);

-- View for the audit timeline
-- (SQL Server does not honor ORDER BY inside a view; sort at query time)
CREATE VIEW dbo.v_audit_timeline AS
SELECT id, entity_type, entity_id, action, old_values, new_values, user_id, timestamp
FROM dbo.audit_logs;
```

### Auto-Trigger Audit Logging
PostgreSQL:

```sql
-- Function to log post changes
CREATE OR REPLACE FUNCTION log_post_changes()
RETURNS TRIGGER AS $$
BEGIN
  INSERT INTO audit_logs
    (entity_type, entity_id, action, old_values, new_values, user_id, timestamp)
  VALUES (
    'Post',
    COALESCE(NEW.id, OLD.id),  -- NEW is not assigned on DELETE
    CASE
      WHEN TG_OP = 'INSERT' THEN 'CREATE'
      WHEN TG_OP = 'UPDATE' THEN 'UPDATE'
      WHEN TG_OP = 'DELETE' THEN 'DELETE'
    END,
    CASE WHEN TG_OP = 'INSERT' THEN NULL ELSE row_to_json(OLD) END,
    CASE WHEN TG_OP = 'DELETE' THEN NULL ELSE row_to_json(NEW) END,
    current_setting('app.user_id', true)::BIGINT,  -- missing_ok: NULL if unset
    NOW()
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
```sql
-- Trigger on the posts table
CREATE TRIGGER posts_audit_trigger
AFTER INSERT OR UPDATE OR DELETE ON posts
FOR EACH ROW
EXECUTE FUNCTION log_post_changes();
```

MySQL:

```sql
-- Trigger for INSERT
CREATE TRIGGER posts_audit_insert
AFTER INSERT ON posts
FOR EACH ROW
BEGIN
  INSERT INTO audit_logs (entity_type, entity_id, action, old_values, new_values, user_id, timestamp)
  VALUES ('Post', NEW.id, 'CREATE', NULL,
    JSON_OBJECT(
      'id', NEW.id,
      'title', NEW.title,
      'content', NEW.content,
      'created_at', NEW.created_at,
      'deleted_at', NEW.deleted_at
    ),
    @user_id, NOW());
END;

-- Trigger for UPDATE
CREATE TRIGGER posts_audit_update
AFTER UPDATE ON posts
FOR EACH ROW
BEGIN
  INSERT INTO audit_logs (entity_type, entity_id, action, old_values, new_values, user_id, timestamp)
  VALUES ('Post', NEW.id, 'UPDATE',
    JSON_OBJECT(
      'id', OLD.id,
      'title', OLD.title,
      'content', OLD.content,
      'created_at', OLD.created_at,
      'deleted_at', OLD.deleted_at
    ),
    JSON_OBJECT(
      'id', NEW.id,
      'title', NEW.title,
      'content', NEW.content,
      'created_at', NEW.created_at,
      'deleted_at', NEW.deleted_at
    ),
    @user_id, NOW());
END;

-- Trigger for DELETE
CREATE TRIGGER posts_audit_delete
AFTER DELETE ON posts
FOR EACH ROW
BEGIN
  INSERT INTO audit_logs (entity_type, entity_id, action, old_values, new_values, user_id, timestamp)
  VALUES ('Post', OLD.id, 'DELETE',
    JSON_OBJECT(
      'id', OLD.id,
      'title', OLD.title,
      'content', OLD.content,
      'created_at', OLD.created_at,
      'deleted_at', OLD.deleted_at
    ),
    NULL,
    @user_id, NOW());
END;
```

SQLite:

```sql
-- Trigger for INSERT
CREATE TRIGGER posts_audit_insert
AFTER INSERT ON posts
BEGIN
  INSERT INTO audit_logs (entity_type, entity_id, action, old_values, new_values, user_id, timestamp)
  VALUES ('Post', NEW.id, 'CREATE', NULL,
    json_object(
      'id', NEW.id,
      'title', NEW.title,
      'content', NEW.content,
      'created_at', NEW.created_at,
      'deleted_at', NEW.deleted_at
    ),
    (SELECT user_id FROM user_context LIMIT 1), datetime('now'));
END;

-- Trigger for UPDATE
CREATE TRIGGER posts_audit_update
AFTER UPDATE ON posts
BEGIN
  INSERT INTO audit_logs (entity_type, entity_id, action, old_values, new_values, user_id, timestamp)
  VALUES ('Post', NEW.id, 'UPDATE',
    json_object(
      'id', OLD.id,
      'title', OLD.title,
      'content', OLD.content,
      'created_at', OLD.created_at,
      'deleted_at', OLD.deleted_at
    ),
    json_object(
      'id', NEW.id,
      'title', NEW.title,
      'content', NEW.content,
      'created_at', NEW.created_at,
      'deleted_at', NEW.deleted_at
    ),
    (SELECT user_id FROM user_context LIMIT 1), datetime('now'));
END;

-- Trigger for DELETE
CREATE TRIGGER posts_audit_delete
AFTER DELETE ON posts
BEGIN
  INSERT INTO audit_logs (entity_type, entity_id, action, old_values, new_values, user_id, timestamp)
  VALUES ('Post', OLD.id, 'DELETE',
    json_object(
      'id', OLD.id,
      'title', OLD.title,
      'content', OLD.content,
      'created_at', OLD.created_at,
      'deleted_at', OLD.deleted_at
    ),
    NULL,
    (SELECT user_id FROM user_context LIMIT 1), datetime('now'));
END;
```

SQL Server:

```sql
-- Trigger for all operations
CREATE TRIGGER posts_audit_trigger
ON dbo.posts
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
  SET NOCOUNT ON;

  INSERT INTO dbo.audit_logs (entity_type, entity_id, action, old_values, new_values, user_id, timestamp)
  SELECT
    'Post',
    COALESCE(INSERTED.id, DELETED.id),
    CASE
      WHEN INSERTED.id IS NOT NULL AND DELETED.id IS NULL THEN 'CREATE'
      WHEN INSERTED.id IS NOT NULL AND DELETED.id IS NOT NULL THEN 'UPDATE'
      ELSE 'DELETE'
    END,
    CASE WHEN INSERTED.id IS NOT NULL AND DELETED.id IS NULL THEN NULL
         ELSE (SELECT * FROM DELETED AS d FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) END,
    CASE WHEN DELETED.id IS NOT NULL AND INSERTED.id IS NULL THEN NULL
         ELSE (SELECT * FROM INSERTED AS i FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) END,
    CAST(SESSION_CONTEXT(N'user_id') AS BIGINT),
    SYSDATETIMEOFFSET()
  FROM INSERTED
  FULL JOIN DELETED ON INSERTED.id = DELETED.id;
END;
```

Pass the user ID from the JWT into your mutation functions using inject=, then set the PostgreSQL session variable inside the SQL function:
```python
@fraiseql.mutation(
    sql_source="fn_update_post",
    operation="UPDATE",
    inject={"user_id": "jwt:sub"}  # JWT 'sub' is passed as a SQL parameter
)
def update_post(id: ID, title: str, content: str) -> Post:
    """Update post. fn_update_post sets app.user_id for audit triggers."""
    pass
```

## Versioning Pattern

Track content versions and allow rollback.
### Versioned Content

```python
@fraiseql.type
class PostVersion:
    id: ID
    post_id: ID
    version_number: int
    title: str
    content: str
    created_by: ID
    created_at: datetime

@fraiseql.type
class Post:
    id: ID
    title: str
    content: str
    current_version: int
    versions: list[PostVersion]
    created_at: datetime
    updated_at: datetime

@fraiseql.query
def post_versions(post_id: ID) -> list[PostVersion]:
    """Get all versions of a post."""
    pass

@fraiseql.mutation
def update_post(id: ID, title: str, content: str) -> Post:
    """Update a post (creates a new version)."""
    # 1. Get the current version_number
    # 2. Create a new version record
    # 3. Update the post with the new content
    pass

@fraiseql.mutation
def revert_to_version(post_id: ID, version_number: int) -> Post:
    """Revert to a specific version."""
    # 1. Get the version's content
    # 2. Create a new version with the old content
    # 3. Update the post
    pass
```

### Database Schema (PostgreSQL)
```sql
CREATE TABLE posts (
  id BIGSERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  current_version INT DEFAULT 1,
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE post_versions (
  id BIGSERIAL PRIMARY KEY,
  post_id BIGINT NOT NULL REFERENCES posts(id) ON DELETE CASCADE,
  version_number INT NOT NULL,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  created_by BIGINT NOT NULL,
  created_at TIMESTAMP DEFAULT NOW(),
  UNIQUE(post_id, version_number)
);

CREATE INDEX idx_post_versions_post_id ON post_versions(post_id);
CREATE INDEX idx_post_versions_created_at ON post_versions(created_at DESC);
```
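The bookkeeping behind update_post and revert_to_version can be sketched in plain Python. This is an illustrative in-memory stand-in for the posts and post_versions tables above, not FraiseQL API; the key point is that a revert creates a *new* version rather than rewriting history:

```python
from dataclasses import dataclass, field

@dataclass
class VersionedPost:
    """In-memory stand-in for posts + post_versions."""
    title: str
    content: str
    current_version: int = 1
    versions: list[dict] = field(default_factory=list)

    def __post_init__(self):
        # Version 1 is the initial content
        self.versions.append(
            {"version_number": 1, "title": self.title, "content": self.content}
        )

    def update(self, title: str, content: str) -> None:
        """Every update appends a new version instead of overwriting history."""
        self.current_version += 1
        self.title, self.content = title, content
        self.versions.append(
            {"version_number": self.current_version, "title": title, "content": content}
        )

    def revert_to(self, version_number: int) -> None:
        """Revert by creating a NEW version holding the old content."""
        old = next(v for v in self.versions if v["version_number"] == version_number)
        self.update(old["title"], old["content"])

post = VersionedPost("V1", "first draft")
post.update("V2", "second draft")
post.revert_to(1)
assert post.title == "V1" and post.current_version == 3  # revert produced version 3
```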
## Pagination at Scale

Efficient pagination for large datasets.
### Cursor-Based Pagination (Recommended)

```python
@fraiseql.type
class PageInfo:
    has_next_page: bool
    has_prev_page: bool
    start_cursor: str | None
    end_cursor: str | None

@fraiseql.type
class PostEdge:
    cursor: str
    node: Post

@fraiseql.type
class PostConnection:
    edges: list[PostEdge]
    pageInfo: PageInfo
    total_count: int

@fraiseql.query
def posts(
    first: int = 10,
    after: str | None = None
) -> PostConnection:
    """Cursor-based pagination."""
    # Cursor = base64(id=123)
    # Query: SELECT * FROM posts WHERE id > 123 LIMIT 11
    # If 11 rows come back, has_next_page is true
    pass
```

### Efficient Database Queries
```sql
-- GOOD: keyset pagination (efficient; seeks directly to the cursor row)
SELECT * FROM posts
WHERE (created_at, id) < (?, ?)   -- values taken from the cursor
ORDER BY created_at DESC, id DESC
LIMIT 10;

-- BAD: avoid OFFSET (scans and discards all skipped rows)
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 10 OFFSET 1000000;  -- scans 1,000,010 rows!
```

### Handling Cursor Encoding
```python
import base64
from datetime import datetime

def encode_cursor(post_id: int, created_at: datetime) -> str:
    """Encode an opaque pagination cursor."""
    data = f"{post_id}:{created_at.isoformat()}"
    return base64.b64encode(data.encode()).decode()

def decode_cursor(cursor: str) -> tuple[int, datetime]:
    """Decode a cursor back into its keyset values."""
    data = base64.b64decode(cursor.encode()).decode()
    post_id, created_at = data.split(":", 1)  # split once: isoformat contains ':'
    return int(post_id), datetime.fromisoformat(created_at)
```
## RBAC (Role-Based Access Control)

Fine-grained permission management.
### Permission Model

```python
from enum import Enum

class Role(str, Enum):
    ADMIN = "admin"
    EDITOR = "editor"
    VIEWER = "viewer"

class Permission(str, Enum):
    READ_POST = "read:post"
    CREATE_POST = "create:post"
    UPDATE_POST = "update:post"
    DELETE_POST = "delete:post"
    MANAGE_USERS = "manage:users"
```

### Database-Backed RBAC
```sql
CREATE TABLE roles (
  id BIGSERIAL PRIMARY KEY,
  name VARCHAR(50) NOT NULL UNIQUE,
  description TEXT
);

CREATE TABLE permissions (
  id BIGSERIAL PRIMARY KEY,
  name VARCHAR(100) NOT NULL UNIQUE,
  description TEXT
);

CREATE TABLE role_permissions (
  role_id BIGINT NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
  permission_id BIGINT NOT NULL REFERENCES permissions(id) ON DELETE CASCADE,
  PRIMARY KEY (role_id, permission_id)
);

CREATE TABLE user_roles (
  user_id BIGINT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  role_id BIGINT NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
  PRIMARY KEY (user_id, role_id)
);

-- View: user permissions
CREATE VIEW v_user_permissions AS
SELECT DISTINCT u.id AS user_id, p.name AS permission
FROM users u
JOIN user_roles ur ON u.id = ur.user_id
JOIN role_permissions rp ON ur.role_id = rp.role_id
JOIN permissions p ON rp.permission_id = p.id;
```
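The flattening that v_user_permissions performs, user to roles to permissions, is a transitive join. A minimal in-memory sketch with illustrative sample data (not FraiseQL API):

```python
def user_permissions(user_id: int, user_roles: dict, role_permissions: dict) -> set:
    """Flatten user -> roles -> permissions, mirroring v_user_permissions."""
    perms = set()
    for role in user_roles.get(user_id, set()):
        perms |= role_permissions.get(role, set())  # union across all roles
    return perms

role_permissions = {
    "admin": {"read:post", "create:post", "update:post", "delete:post", "manage:users"},
    "editor": {"read:post", "create:post", "update:post"},
    "viewer": {"read:post"},
}
user_roles = {1: {"admin"}, 2: {"editor", "viewer"}}

assert "manage:users" in user_permissions(1, user_roles, role_permissions)
# Overlapping role grants collapse into one set, like SELECT DISTINCT in the view
assert user_permissions(2, user_roles, role_permissions) == {"read:post", "create:post", "update:post"}
assert user_permissions(3, user_roles, role_permissions) == set()  # no roles, no permissions
```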
### Authorization in FraiseQL

```python
@fraiseql.query(requires_scope="read:post")
def posts() -> list[Post]:
    """User must have the read:post permission."""
    pass

@fraiseql.mutation(requires_scope="create:post")
def create_post(title: str, content: str) -> Post:
    """User must have the create:post permission."""
    pass
```

Scope enforcement is declared on each query or mutation via requires_scope=. FraiseQL validates the JWT scopes at the Rust runtime level before the query reaches the database; no Python permission check is needed.
## Multi-Tenancy Patterns

Isolate data between customers.
### Row-Level Security (RLS) Approach

```python
@fraiseql.type
class Post:
    id: ID
    tenant_id: ID
    title: str
    content: str
    # FraiseQL enforces: tenant_id = current_user.tenant_id

@fraiseql.query
def posts() -> list[Post]:
    """Get posts for the current tenant only."""
    # Automatically filters WHERE tenant_id = current_tenant_id
    pass
```

PostgreSQL RLS:
```sql
CREATE POLICY posts_tenant_isolation ON posts
USING (tenant_id = current_setting('app.tenant_id')::BIGINT)
WITH CHECK (tenant_id = current_setting('app.tenant_id')::BIGINT);

ALTER TABLE posts ENABLE ROW LEVEL SECURITY;
```

### Separate Database Approach
Section titled “Separate Database Approach”# Configure each tenant's database in fraiseql.toml.# Use inject= to route by tenant from JWT claims:
```toml[databases.tenant_primary]url = "${TENANT_PRIMARY_DB_URL}"
[databases.tenant_secondary]url = "${TENANT_SECONDARY_DB_URL}"@fraiseql.query(sql_source="v_post", inject={"tenant_id": "jwt:tenant_id"})def posts() -> list[Post]: """FraiseQL passes tenant_id from JWT to the view's RLS policy.""" pass## Nested Document Pattern
Denormalize hierarchical data for performance.
```python
@fraiseql.type
class Comment:
    id: ID
    text: str
    author: str
    created_at: datetime

@fraiseql.type
class Post:
    id: ID
    title: str
    content: str
    comments: list[Comment]  # Denormalized

@fraiseql.query
def post(id: ID) -> Post:
    """Get a post with nested comments."""
    # Single query with JOINs or array aggregation:
    # SELECT p.*, array_agg(c.*) AS comments FROM posts p ...
    pass
```

When to use:
- Comments on a post
- Replies to a comment
- Attachments on a document
- Small nested collections (< 100 items)
When NOT to use:
- Large collections (> 10k items)
- Frequently updated sub-items
- Need independent sorting/filtering
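The single-query shape can be mimicked in plain Python: group child rows by parent key in one pass, then attach them. This is an illustrative stand-in for the SQL array_agg aggregation, not FraiseQL behavior:

```python
from collections import defaultdict

def nest_comments(posts: list[dict], comments: list[dict]) -> list[dict]:
    """Attach comments to their posts in one pass, like array_agg + GROUP BY."""
    by_post = defaultdict(list)
    for c in comments:
        by_post[c["post_id"]].append(c)
    return [{**p, "comments": by_post[p["id"]]} for p in posts]

posts = [{"id": 1, "title": "Hello"}, {"id": 2, "title": "World"}]
comments = [
    {"post_id": 1, "text": "First!"},
    {"post_id": 1, "text": "Nice."},
]
nested = nest_comments(posts, comments)
assert len(nested[0]["comments"]) == 2
assert nested[1]["comments"] == []  # posts with no comments get an empty list
```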
## Performance Optimization

### Query Complexity Limits

Prevent expensive queries:
```python
@fraiseql.query
def users() -> list[User]:
    """Limit nested queries."""
    # Complexity with 10 users, 10 posts each, 10 comments per post:
    # 1 (root) + 10 (users) + 10*10 (posts) + 10*10*10 (comments) = 1111
    # Too high! Set a limit.
    pass
```
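The growth above is multiplicative: each nesting level multiplies the node count by its branching factor. A sketch of the estimate in plain Python (illustrative helper, not FraiseQL API):

```python
def query_complexity(branching: list[int]) -> int:
    """Estimate node count for a nested query: 1 root plus the running
    product of branching factors, summed level by level."""
    total, width = 1, 1
    for factor in branching:
        width *= factor   # nodes at this depth
        total += width
    return total

# users -> posts -> comments, 10 each level: 1 + 10 + 100 + 1000
assert query_complexity([10, 10, 10]) == 1111
# One more nesting level multiplies the cost by roughly 10x
assert query_complexity([10, 10, 10, 10]) == 11111
```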
### Batch Resolving

```python
@fraiseql.type
class User:
    id: ID
    name: str
    posts: list[Post]  # Automatically batched
```

### Caching Expensive Computations

```python
@fraiseql.query
@cached(ttl=3600)
def user_stats(user_id: ID) -> UserStats:
    """Cache an expensive aggregation."""
    # SELECT COUNT(*) AS posts, SUM(...) AS likes, etc.
    # Executed at most once per hour
    pass
```

## Error Handling Patterns
### Custom Error Types

FraiseQL decorator function bodies are never executed; they are compile-time schema declarations only. Input validation and error handling belong in the PostgreSQL function:

```python
@fraiseql.mutation(sql_source="fn_create_post", operation="CREATE")
def create_post(title: str, content: str) -> Post:
    """Create a new post. Validation is enforced by fn_create_post."""
    pass
```

```sql
CREATE OR REPLACE FUNCTION fn_create_post(p_title TEXT, p_content TEXT)
RETURNS SETOF v_post LANGUAGE plpgsql AS $$
BEGIN
  IF length(p_title) < 3 THEN
    RAISE EXCEPTION 'Title too short'
      USING ERRCODE = 'P0001', HINT = 'INVALID_INPUT';
  END IF;

  IF length(p_content) < 10 THEN
    RAISE EXCEPTION 'Content too short'
      USING ERRCODE = 'P0002', HINT = 'INVALID_INPUT';
  END IF;

  -- ... insert and return
END;
$$;
```

### Consistent Error Responses
Error codes are set via the HINT clause of RAISE EXCEPTION. FraiseQL maps this to `extensions.code` in the GraphQL error response. See Error Handling for the full error code reference.
## Testing Advanced Patterns

```python
import pytest

@pytest.fixture
def user():
    return create_user(name="Test User")

@pytest.fixture
def post(user):
    return create_post(user_id=user.id, title="Test")

def test_soft_delete(post):
    """Soft-deleted posts are filtered from default queries."""
    delete_post(post.id)
    assert get_post(post.id) is None  # Filtered out
    assert get_post(post.id, include_deleted=True) is not None

def test_audit_logging(post):
    """Changes are recorded in the audit trail."""
    update_post(post.id, title="Updated")
    logs = get_audit_logs("Post", post.id)
    assert len(logs) == 2  # CREATE + UPDATE
    assert logs[1].action == "UPDATE"
    assert logs[1].old_values["title"] == "Test"
    assert logs[1].new_values["title"] == "Updated"

def test_versioning(post):
    """Each update creates a new version."""
    update_post(post.id, title="V2")
    update_post(post.id, title="V3")
    versions = get_post_versions(post.id)
    assert len(versions) == 3
    assert versions[2].version_number == 3
```

## Summary Table
| Pattern | Use Case | Complexity | Performance |
|---|---|---|---|
| Soft Delete | Undo/restore | Low | High |
| Audit Trail | Compliance | Medium | Medium |
| Versioning | Content history | Medium | Medium |
| Cursor Pagination | Large datasets | Low | High |
| RBAC | Fine-grained permissions | Medium | Medium |
| Multi-tenancy | Data isolation | High | High |
| Nested Documents | Small nested data | Low | High |
| Caching | Expensive queries | Low | High |
Choose patterns based on your requirements, not “because everyone does it.”
## See Also

### Related Guides

- Authentication & Authorization — User identity and permissions
- Error Handling — Robust error management
- Multi-Tenancy Guide — Complete multi-tenant implementation
- Performance Optimization — Query and system tuning
### Real-World Examples

- SaaS Blog Platform — Multi-tenancy with soft deletes
- Real-Time Collaboration — Versioning and edit history
- Mobile Analytics Backend — High-volume event ingestion
- Federation E-Commerce — Cross-database patterns
### Reference Documentation

- Security Best Practices — Secure implementations
- Caching Strategy — Cache patterns
- Audit Logging — Compliance and tracking
- Troubleshooting — Common issues and solutions
### Learning Resources

- RBAC Implementation — Permission models
- Multi-Tenancy Deep Dive — Data isolation strategies
- Database-Specific Guides — SQL implementation details
## Next Steps

- Choose relevant patterns for your use case from the summary table
- Implement one pattern at a time - don’t mix too many patterns initially
- Test thoroughly - advanced patterns need comprehensive test coverage
- Monitor performance - ensure patterns don’t create unexpected overhead
- Document your patterns - help other developers understand your choices
- Share and iterate - get feedback from team, refine as needed