The Economics of FraiseQL

Trade storage for compute. Run efficiently on VPS, bare metal, or modest cloud VMs.

A transparent look at costs, trade-offs, and architecture-based projections.

The Fundamental Trade-off

What You Pay More For

2-4×

more storage for projection tables (typical schemas)

  • Simple entities: ~2× storage
  • Moderate nesting: ~3× storage
  • Typical SaaS: ~4× storage
  • No backup needed: projection tables can be fully rebuilt from base tables

What You Save

~4×

more logical ops per vCPU (expected for many workloads)

  • One simple query per read (no JOINs)
  • No ORM, no Python in hot path
  • No Redis/cache layer needed
  • Reduced operational complexity

The math: SSD storage costs ~$0.10/GB/month. A mid-tier compute instance costs ~$70-150/month. Even 4× storage growth (100 GB → 400 GB = +$30/month) is offset by running smaller servers or fewer instances.
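As a back-of-envelope sketch of this trade-off, using the prices stated above (the function names are illustrative, not part of FraiseQL):

```python
# Break-even sketch: extra projection-table storage vs compute savings.
# Prices are the estimates stated above (AWS us-east-1, on-demand).
STORAGE_PER_GB = 0.10      # $/GB/month, gp3 SSD
COMPUTE_PER_INSTANCE = 70  # $/month, one m5.large API server

def extra_storage_cost(base_gb: float, multiplier: float) -> float:
    """Monthly cost of the additional storage from projection tables."""
    return (multiplier - 1) * base_gb * STORAGE_PER_GB

def compute_savings(instances_saved: int) -> float:
    """Monthly savings from running fewer API instances."""
    return instances_saved * COMPUTE_PER_INSTANCE

storage = extra_storage_cost(100, 4)  # 100 GB base, 4x multiplier -> +$30/mo
compute = compute_savings(3)          # 4 instances down to 1     -> -$210/mo
print(f"extra storage: ${storage:.0f}/mo, compute saved: ${compute:.0f}/mo")
```

Even a 10× storage multiplier on 100 GB (+$90/month) would still be cheaper than a single extra compute instance.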

Monthly Cost Comparison

Scenario: 1M Requests/Day

100GB base data, typical SaaS API workload, 90% reads

Cost Component                                   Traditional Stack      FraiseQL               Difference
Compute: API servers (AWS m5.large)              4 instances, $280/mo   1 instance, $70/mo     -$210
Database: PostgreSQL (AWS RDS db.r5.large)       1 instance, $175/mo    1 instance, $175/mo    $0
Storage: gp3 SSD @ $0.10/GB                      100 GB, $10/mo         400 GB (4×), $40/mo    +$30
Cache Layer: Redis (ElastiCache cache.r5.large)  1 instance, $110/mo    Not needed, $0/mo      -$110
TOTAL                                            $575/mo                $285/mo                -$290/mo (-50%)

Prices based on AWS us-east-1 on-demand pricing as of 2024. Reserved instances reduce costs further.

Architecture-Based Throughput Projections

These projections are derived from architectural reasoning, not benchmarks. They represent expected performance for many read-heavy workloads. Validate with your own benchmarks.

Multi-Scenario Throughput Comparison

Scenario          QPS/vCPU    vs FraiseQL        Notes
Naive ORM         ~3-5        ~100-200× lower    N+1 queries, no optimization
Basic Python      ~25-35      ~15-25× lower      Single JOIN query, standard serialization
Optimized Stack   ~200-300    ~2× lower          DataLoaders, Redis, orjson, DTOs
FraiseQL          ~500-600    baseline           Table views, Rust pipeline

Interpreting These Numbers

Against naive ORM code, FraiseQL provides massive gains (100×+). Against fully optimized stacks with Redis and DataLoaders, gains are more modest (~2×), but you eliminate Redis complexity and cache invalidation logic.

Why Cascade Adds Throughput

FraiseQL Cascade returns updated projections directly from mutations, eliminating the need for clients to refetch after writes. This reduces total round-trips and allows higher effective throughput for write-heavy operations.
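The round-trip saving can be quantified with a rough model (illustrative arithmetic only, not the FraiseQL API): without Cascade, every write costs a mutation plus a follow-up refetch; with Cascade it costs one request.

```python
# Rough round-trip model for Cascade. Assumption: without Cascade, each
# write triggers one client refetch to get the updated projection.
def requests_per_1000_ops(write_fraction: float, cascade: bool) -> float:
    reads = 1000 * (1 - write_fraction)
    writes = 1000 * write_fraction
    refetches = 0 if cascade else writes  # Cascade returns projections inline
    return reads + writes + refetches

without = requests_per_1000_ops(0.10, cascade=False)  # 900 + 100 + 100
with_c = requests_per_1000_ops(0.10, cascade=True)    # 900 + 100
print(f"{(without - with_c) / without:.0%} fewer round-trips at 10% writes")
```

The saving grows with the write fraction, which is why Cascade matters most for the write-heavier end of FraiseQL's fit range.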

Tier                    Spec                   Expected Throughput (FraiseQL + Cascade)
VPS / Small Cloud VM    2-4 vCPU, $10-40/mo    ~700-2,200 ops/s
Mid-tier Cloud          8 vCPU, $70-150/mo     ~3,000-4,500 ops/s
Bare Metal / Large VM   16+ vCPU               ~6,000+ ops/s

Visualizing the Economics

Monthly Cost vs Request Volume

[Chart: monthly cost ($0-$2K) vs request volume (100K-10M requests/day), Traditional vs FraiseQL]

The gap between the two cost curves represents cumulative savings, and it widens as scale increases.

Cost Breakdown Comparison

[Chart: per-component cost breakdown (compute, cache, storage), Traditional ($575/mo) vs FraiseQL ($285/mo)]

Higher storage cost is far outweighed by compute and cache layer savings.

When Does FraiseQL Make Sense?

✅

Strong Fit

Read/write ratio >10:1

  • Public APIs
  • Content platforms
  • E-commerce catalogs
  • Dashboard backends
  • Mobile app backends

40-60%

typical savings

⚖️

Evaluate Carefully

Read/write ratio 3:1 to 10:1

  • Internal tools
  • B2B applications
  • Moderate-traffic APIs
  • Mixed workloads
  • CRM systems

10-40%

potential savings

❌

Not Recommended

Read/write ratio <3:1

  • IoT data ingestion
  • Logging systems
  • Write-heavy analytics
  • Event streaming
  • Time-series data

May cost more

sync overhead exceeds benefits

Quick Assessment

Calculate Your Ratio

Your daily reads: R = ___
Your daily writes: W = ___
Your ratio: R ÷ W = ?

Rules of Thumb

  • Ratio > 10: Strong fit, expect 40-60% savings
  • Ratio 3-10: Run the numbers for your specific case
  • Ratio < 3: Traditional stack likely better
  • Deep nesting: Higher storage, but also higher read savings

Annual Impact at Scale

Scenario        Volume               Saved Per Year
Small Startup   100K requests/day    $1,200
Growing SaaS    1M requests/day      $3,400
Mid-Market      10M requests/day     $24,000
Enterprise      100M requests/day    $180K+

Estimates based on AWS on-demand pricing, read-heavy workloads (90% reads), and a 4× storage multiplier. Your results will vary based on workload characteristics, cloud provider, and commitment discounts.

Our Assumptions (Transparency)

What We Assume

  • 90% reads, 10% writes: typical for most APIs
  • 4× storage multiplier: typical SaaS schema
  • ~4× throughput advantage: based on architectural reasoning
  • Redis required for the traditional stack at scale
  • AWS on-demand pricing: no reserved instances

Variables That Change the Math

  • ⚠ Higher write ratio: reduces FraiseQL advantage
  • ⚠ Deeper nesting: increases storage cost
  • ✓ Higher read ratio: increases FraiseQL advantage
  • ✓ Reserved instances: both stacks save, ratio similar
  • ✓ Simpler schemas: lower storage multiplier

These are architecture-based projections, not benchmarks. The estimates are derived from understanding how projection tables eliminate JOINs and how the Rust adapter avoids ORM overhead. Validate with your own benchmarks on your specific workload.
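These assumptions can be combined into a minimal cost model. The per-instance daily capacities below are back-solved from the 1M-requests/day comparison and are this page's estimates, not measured figures:

```python
import math

# Minimal monthly cost model from the stated assumptions (illustrative).
INSTANCE = 70     # $/mo, m5.large API server
DB = 175          # $/mo, RDS db.r5.large
CACHE = 110       # $/mo, ElastiCache (traditional stack only)
GB = 0.10         # $/GB/mo, gp3 SSD
CAP_TRADITIONAL = 250_000   # requests/day per instance (assumed)
CAP_FRAISEQL = 1_000_000    # ~4x more, per the throughput projection

def monthly_cost(req_per_day: int, base_gb: float, fraiseql: bool) -> float:
    cap = CAP_FRAISEQL if fraiseql else CAP_TRADITIONAL
    instances = max(1, math.ceil(req_per_day / cap))
    storage_gb = base_gb * (4 if fraiseql else 1)  # 4x projection multiplier
    cache = 0 if fraiseql else CACHE
    return instances * INSTANCE + DB + storage_gb * GB + cache

print(monthly_cost(1_000_000, 100, fraiseql=False))  # -> $575/mo
print(monthly_cost(1_000_000, 100, fraiseql=True))   # -> $285/mo
```

Swapping in your own prices, capacities, and storage multiplier is exactly the "run the numbers" step recommended above.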

See the Savings for Yourself

The best proof is your own benchmark. Try FraiseQL on your workload and measure the difference.