Incremental View Maintenance
What goes wrong with a naive approach
Derived state in a TUI — e.g., “the list of files matching the current filter, sorted by mtime, windowed to the visible rows” — comes from composing several operations on a base data set. The naive implementation recomputes everything from scratch on any change. For 10 files that’s fine. For 10,000 files it locks the frame.
The textbook alternative is to hand-wire caches for each derived view. Every widget that depends on filter state checks a dirty flag and rebuilds from the previous state. This works at the small scale but has two disastrous failure modes:
- Stale caches. Forgot to invalidate one? Garbage data, compiler can’t catch it.
- Cascading invalidations. A base edit touches 20 derived views through paths nobody wrote down; the hand-wired cascade invalidates the wrong ones, or too many, or too few.
Incremental View Maintenance (IVM, from database theory) is the principled answer: model derived state as a DAG of operators over signed delta tuples, propagate deltas along the DAG, and update materialised views in-place. When a delta gets bigger than some fraction of the view (50% is the default), fall back to a full recompute — you’re no longer saving time.
Mental model
- Every piece of state is a multiset of tuples.
- Changes are signed deltas `(key, weight, time)`, with positive weight for insert and negative for delete.
- Operators (filter, map, join, aggregate) have incremental versions: given a base state and a delta, produce the delta to apply to the output.
- Operators compose into a DAG; deltas flow along edges.
- Each view remembers its current materialised state and applies incoming deltas in place.
IVM is not just caching with better manners. The operators are typed and composable: a filter followed by a map followed by an aggregate is itself an operator whose delta rule is derived from its parts. You wire the DAG once and the runtime handles propagation.
The math
Signed deltas
A state is a multiset of tuples: a function $S : K \to \mathbb{N}$ from keys to multiplicities. A delta is a signed multiset $\Delta : K \to \mathbb{Z}$; applying it is pointwise addition, $S' = S + \Delta$, with zero-weight keys dropped.
Operator delta rules
- Filter $\sigma_p$: $\Delta(\sigma_p S) = \sigma_p(\Delta S)$. Just filter the delta.
- Map $f$: $\Delta(f(S)) = f(\Delta S)$, with weights carried through unchanged.
- Aggregate (e.g., `sum` over a group): recompute the aggregate restricted to the affected groups.
- Join: $\Delta(S \bowtie T) = \Delta S \bowtie T + S \bowtie \Delta T + \Delta S \bowtie \Delta T$. The delta rule for joins is the famous "bilinear" form.
Full-recompute fallback
Let $|V|$ be the current view size and $|\Delta|$ the incoming delta size. If $|\Delta| > \theta \cdot |V|$ (with $\theta = 0.5$ by default), incremental propagation costs more than recompute, so fall back: rebuild the view from scratch and resume delta propagation from the fresh state.
The runtime records the fallback reason so you can see when this happens. A persistent fallback signals an over-edited base or a mis-tuned threshold.
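The fallback decision itself is a one-line comparison. A hypothetical sketch (`should_fallback` is not a real runtime API), using delta sizes on the order of those in the worked example's 1800-row filtered view:

```rust
/// Fallback check: stay incremental only while |Δ| <= θ·|V|.
fn should_fallback(delta_size: usize, view_size: usize, threshold: f64) -> bool {
    delta_size as f64 > threshold * view_size as f64
}

fn main() {
    // 12 changed tuples against an 1800-row view: stay incremental.
    assert!(!should_fallback(12, 1800, 0.5));
    // 4800 changed tuples against the same view: recompute from scratch.
    assert!(should_fallback(4800, 1800, 0.5));
    println!("ok");
}
```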
Topological order
Deltas propagate along the DAG in topological order. Each operator is visited at most once per tick; cycles are forbidden at type level (the DAG must be acyclic by construction).
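A minimal sketch of that visit order, using Kahn's algorithm over a hypothetical four-node base → filter → sort → window chain (illustrative; the runtime computes this order internally):

```rust
use std::collections::VecDeque;

/// Kahn's algorithm: visit each operator once, parents before children.
/// Panics if the edge list contains a cycle (i.e., it is not a DAG).
fn topo_order(n: usize, edges: &[(usize, usize)]) -> Vec<usize> {
    let mut indeg = vec![0; n];
    let mut adj = vec![Vec::new(); n];
    for &(a, b) in edges {
        adj[a].push(b);
        indeg[b] += 1;
    }
    let mut q: VecDeque<usize> = (0..n).filter(|&i| indeg[i] == 0).collect();
    let mut order = Vec::new();
    while let Some(v) = q.pop_front() {
        order.push(v);
        for &w in &adj[v] {
            indeg[w] -= 1;
            if indeg[w] == 0 {
                q.push_back(w);
            }
        }
    }
    assert_eq!(order.len(), n, "cycle detected: not a DAG");
    order
}

fn main() {
    // 0 = base, 1 = filter, 2 = sort, 3 = window.
    let order = topo_order(4, &[(0, 1), (1, 2), (2, 3)]);
    assert_eq!(order, vec![0, 1, 2, 3]);
    println!("{:?}", order);
}
```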
Worked example — filter + sort + window
```
base: files[] (5000 entries)
        │
        ▼
σ mtime > t0     → filter (1800 entries)
        │
        ▼
sort by mtime    → sorted (1800 entries)
        │
        ▼
window [0..40]   → visible (40 entries)
```

A rename of one file:

```
Δ base   = { -file_old, +file_new }
Δ σ      = { the same two tuples if both match the predicate, else a subset }
Δ sort   = Δ σ re-keyed by the ordering
Δ window = non-empty only if the ordering shifted something across [0..40]
```
Net work: $O(\log n)$ for the sort update, $O(1)$ for the window check. A full recompute would have been $O(n \log n)$: orders of magnitude more work for 5000 files.
Rust interface
```rust
use ftui_runtime::ivm::{IvmGraph, Op, Delta};

let mut g = IvmGraph::new();
let base = g.add_source("files");
let filt = g.add_filter(base, |f: &File| f.mtime > t0);
let sort = g.add_sort(filt, |a, b| a.mtime.cmp(&b.mtime));
let win  = g.add_window(sort, 0, 40);

// Apply a base delta; propagate through the DAG:
g.push(base, Delta::insert(file_new));
g.push(base, Delta::remove(file_old));
let out: &[File] = g.view(win);

// Fallback threshold:
g.set_fallback_threshold(0.5); // 50% rule
```

Operators implement a trait so you can add custom incremental logic; the runtime takes care of propagation order and fallback.
How to debug
Every propagation tick emits an `ivm_propagate` line:
```json
{"schema":"ivm_propagate","views_processed":6,
 "views_recomputed":0,"total_delta_size":12,
 "duration_us":145,"fallback_reason":null}
```

Fallbacks are recorded explicitly:
```json
{"schema":"ivm_propagate","views_processed":6,
 "views_recomputed":2,"total_delta_size":4800,
 "duration_us":920,"fallback_reason":"delta_over_50pct"}
```

```bash
FTUI_EVIDENCE_SINK=/tmp/ftui.jsonl cargo run -p ftui-demo-showcase
# How often are we falling back?
jq -c 'select(.schema=="ivm_propagate" and .fallback_reason != null)
       | .fallback_reason' /tmp/ftui.jsonl | sort | uniq -c
```

Pitfalls
Join deltas are expensive. The bilinear rule means joins touch both sides on every update: computing $\Delta S \bowtie T$ still has to scan or probe $T$. If you have a fan-in join with millions of rows, the incremental cost is still proportional to the number of matching rows on the other side. Prefer semi-joins or indexed joins when possible.
The 50% threshold is a proxy, not a theorem. If your operators are cheap, incremental propagation remains faster even at an 80% delta ratio; if they are expensive, the break-even point may be 20%. Profile and tune `set_fallback_threshold` on real workloads.
Cross-references
- `/runtime/overview` — where the IVM graph sits relative to the Elm update loop.
- Rough-path signatures — a complementary "incremental summary" machinery.
- `/runtime/lenses` — the state-access layer that many IVM sources attach to.