
Command palette

The command palette is FrankenTUI’s flagship search widget, and it is one of the few places in the codebase where the intelligence layer is visible at the widget surface. Instead of a hand-tuned fuzzy-match score, the palette ranks candidates by posterior probability computed from a Bayesian evidence ledger — each piece of evidence (match type, word boundary, position, gap penalty, tag match, title length) contributes a Bayes factor, and the product of those factors, multiplied by the prior odds, yields the final score.

Because the scoring is explicit, every ranked result carries its ledger. You can dump the ledger to JSONL, audit it in a debugger, and reason about why one candidate outranked another — no opaque heuristic black box.

The maths behind this page is covered in detail on bayesian-inference / command-palette-ledger (the intelligence-layer companion). This page focuses on the widget.

What the palette is

CommandPalette is a modal-style overlay that:

  1. Takes a query string from a text input.
  2. Scores every candidate command against the query.
  3. Ranks candidates by posterior probability.
  4. Computes a rank-stability indicator (via conformal prediction).
  5. Renders the top N results.


The probability model

The scorer treats “is this candidate relevant?” as a Bayesian hypothesis:

$$
\underbrace{\frac{P(H_1 \mid e)}{P(H_0 \mid e)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H_1)}{P(H_0)}}_{\text{prior odds}}
\;\times\;
\prod_{i} \underbrace{\frac{P(e_i \mid H_1)}{P(e_i \mid H_0)}}_{\text{Bayes factor } i}
$$

Posterior odds convert to a probability in $[0, 1]$ via

$$P = \frac{\text{odds}}{1 + \text{odds}}$$

That probability is the palette score. Results are sorted descending.
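As a quick sanity check, the odds-to-probability conversion can be sketched in a few lines (an illustrative helper, not the library’s actual API):

```rust
/// Convert posterior odds to a probability in [0, 1].
fn odds_to_probability(odds: f64) -> f64 {
    odds / (1.0 + odds)
}

fn main() {
    // 9:1 odds (a Prefix-class prior) maps to 0.90.
    assert!((odds_to_probability(9.0) - 0.90).abs() < 1e-9);
    // Even odds are exactly 0.5.
    assert!((odds_to_probability(1.0) - 0.5).abs() < 1e-9);
}
```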

Prior odds from match type

Every candidate is first classified by MatchType (scorer.rs:55). The match type sets the prior odds:

| MatchType | Prior odds | P (approx) | When it fires |
|---|---|---|---|
| Exact | 99:1 | 0.99 | Full string match |
| Prefix | 9:1 | 0.90 | Query is a prefix of the title |
| WordStart | 4:1 | 0.80 | Query matches word-start boundaries |
| Substring | 2:1 | 0.67 | Contiguous substring anywhere |
| Fuzzy | 1:3 | 0.25 | Non-contiguous character match |
| NoMatch | 0 | 0.00 | Rejected |

The MatchType::prior_odds() table lives at scorer.rs:77–85.

These priors are not arbitrary — they encode the empirical observation that exact matches are almost always what the user wants, and fuzzy matches need independent corroborating evidence before they can outrank a substring.
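A standalone sketch of the prior table, mirroring the values above (the real implementation lives in scorer.rs; this version is purely illustrative):

```rust
#[derive(Clone, Copy, PartialEq)]
enum MatchType { Exact, Prefix, WordStart, Substring, Fuzzy, NoMatch }

impl MatchType {
    /// Prior odds from the table above; NoMatch gets 0.0 (rejected outright).
    fn prior_odds(self) -> f64 {
        match self {
            MatchType::Exact => 99.0,
            MatchType::Prefix => 9.0,
            MatchType::WordStart => 4.0,
            MatchType::Substring => 2.0,
            MatchType::Fuzzy => 1.0 / 3.0,
            MatchType::NoMatch => 0.0,
        }
    }
}

fn main() {
    // The WordStart prior alone: 4 / (1 + 4) = 0.80.
    let odds = MatchType::WordStart.prior_odds();
    assert!((odds / (1.0 + odds) - 0.80).abs() < 1e-9);
}
```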

Evidence factors

After the prior is set, the ledger accumulates per-evidence Bayes factors. Each factor multiplies the odds. Representative factors (scorer.rs:118–144):

| Evidence | Typical Bayes factor | Intuition |
|---|---|---|
| WordBoundary | ≈ 2.0 | The match aligns with a word start |
| Position | $\propto 1/\text{pos}$ | Earlier matches are more informative |
| GapPenalty | $< 1$ | Gaps between matched characters reduce confidence |
| TagMatch | ≈ 3.0 | Query also matches a metadata tag (category, alias) |
| TitleLength | $\propto 1/\text{len}$ | Shorter titles are more specific |

The final posterior is

$$P = \frac{\text{prior} \cdot \prod_i \text{BF}_i}{1 + \text{prior} \cdot \prod_i \text{BF}_i}$$

and lives in Evidence::posterior_probability (scorer.rs:238):

```rust
pub fn posterior_probability(&self) -> f64 {
    let prior = self.prior_odds().unwrap_or(1.0);
    let bf: f64 = self.entries.iter()
        .filter(|e| e.kind != EvidenceKind::MatchType)
        .map(|e| e.bayes_factor)
        .product();
    let posterior_odds = prior * bf;
    posterior_odds / (1.0 + posterior_odds)
}
```

JSONL ledger output

Because every factor is explicit, the whole ledger serialises cleanly. The to_jsonl method at scorer.rs:260 emits one line per candidate:

```json
{
  "title": "Open file...",
  "match_type": "Prefix",
  "prior_odds": 9.0,
  "entries": [
    {"kind": "WordBoundary", "bayes_factor": 2.0, "rationale": "matched at word start"},
    {"kind": "Position", "bayes_factor": 0.75, "rationale": "match at pos 2"},
    {"kind": "TitleLength", "bayes_factor": 1.3, "rationale": "short title"}
  ],
  "posterior": 0.946
}
```

Paste that into a log and you have a complete audit trail of why this candidate was ranked where it was. The DecisionCard widget can render it in-app.
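The arithmetic in a ledger line like that one is easy to replay by hand. A minimal sketch (the fold mirrors what Evidence::posterior_probability does, but is not the library’s code):

```rust
/// Fold prior odds and a list of Bayes factors into a posterior probability.
fn posterior(prior_odds: f64, bayes_factors: &[f64]) -> f64 {
    let odds = prior_odds * bayes_factors.iter().product::<f64>();
    odds / (1.0 + odds)
}

fn main() {
    // Prefix prior (9:1) with factors 2.0 (WordBoundary), 0.75 (Position),
    // 1.3 (TitleLength): odds = 9.0 * 1.95 = 17.55, so P = 17.55 / 18.55 ≈ 0.946.
    let p = posterior(9.0, &[2.0, 0.75, 1.3]);
    assert!((p - 0.946).abs() < 1e-3);
}
```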

Conformal rank confidence

Posterior probability tells you how relevant each candidate is on its own. It does not tell you whether the top result is meaningfully ahead of the runner-up. A query with two candidates at 0.42 and 0.41 is a dead heat; one at 0.90 and 0.55 is decisive.

ConformalRanker (scorer.rs:973) adds a RankConfidence to each item with a RankStability tag:

| Stability | Meaning |
|---|---|
| Stable | Confidence interval clearly separates from the neighbours |
| Marginal | Intervals overlap but medians are ordered |
| Unstable | Overlap so severe that the ranking could swap |

The palette uses this to decide whether to commit to a top result or ask the user for more characters. It is also what the VoiDebugOverlay renders when you want to inspect how confident the palette is.
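The classification step can be sketched as a comparison between the top result’s interval and the runner-up’s. This is an illustrative reduction to two candidates; the real ConformalRanker builds the intervals conformally and considers every neighbour:

```rust
#[derive(Debug, PartialEq)]
enum RankStability { Stable, Marginal, Unstable }

/// Classify the gap between the leader and the runner-up from their
/// (lo, hi) score intervals. Hypothetical helper for illustration only.
fn classify(top: (f64, f64), runner_up: (f64, f64)) -> RankStability {
    if top.0 > runner_up.1 {
        RankStability::Stable // intervals fully separated
    } else if (top.0 + top.1) / 2.0 > (runner_up.0 + runner_up.1) / 2.0 {
        RankStability::Marginal // overlap, but midpoints still ordered
    } else {
        RankStability::Unstable // ranking could plausibly swap
    }
}

fn main() {
    assert_eq!(classify((0.85, 0.95), (0.50, 0.60)), RankStability::Stable);
    assert_eq!(classify((0.40, 0.45), (0.38, 0.44)), RankStability::Marginal);
    assert_eq!(classify((0.40, 0.43), (0.41, 0.44)), RankStability::Unstable);
}
```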

Worked example

A user types op file into the palette against these candidates:

| Candidate | MatchType | Key factors | Posterior |
|---|---|---|---|
| Open File | WordStart | WordBoundary ×2, TitleLength ↑ | 0.93 |
| Open Folder | WordStart | WordBoundary ×2, no file tag | 0.84 |
| Compile | Fuzzy | Gap penalty, position mid-string | 0.17 |

The top two are both WordStart matches, but “Open File” wins because the literal file also matches a tag (BF ≈ 3.0). The rank confidence shows Stable — the margin is ~0.09 on posterior scale, well above overlap.
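The effect of the tag factor is easy to isolate with back-of-the-envelope odds. This sketch uses only the WordStart prior, the word-boundary factor, and the tag factor, so its numbers are illustrative and do not reproduce the table’s posteriors, which fold in further factors (position, title length):

```rust
fn posterior(prior_odds: f64, bfs: &[f64]) -> f64 {
    let odds = prior_odds * bfs.iter().product::<f64>();
    odds / (1.0 + odds)
}

fn main() {
    // Both candidates start from the WordStart prior (4:1) and earn the
    // word-boundary factor; only "Open File" also earns the tag factor.
    let open_file = posterior(4.0, &[2.0, 3.0]); // odds 24 -> 0.96
    let open_folder = posterior(4.0, &[2.0]);    // odds 8  -> ~0.889
    assert!(open_file > open_folder);
    assert!((open_file - 24.0 / 25.0).abs() < 1e-9);
}
```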

Integration

```rust
use ftui_widgets::command_palette::CommandPalette;

pub struct Model {
    pub palette: CommandPalette,
    pub palette_open: bool,
}

impl Model {
    fn open_palette(&mut self) {
        self.palette_open = true;
        self.palette.reset();
    }

    // Not the `Model` trait `update` — this is a plain helper that takes
    // a raw `Event`. The real trait method takes `Self::Message` and is
    // typically implemented by bridging `Event` through a `From<Event>`
    // impl on your `Msg` enum, as shown in [hello-tick](/getting-started/hello-tick).
    fn handle_event(&mut self, event: Event) {
        if self.palette_open {
            if self.palette.handle_event(&event) {
                if let Some(cmd) = self.palette.chosen() {
                    execute(cmd);
                    self.palette_open = false;
                }
            }
        }
    }

    fn view(&self, frame: &mut ftui_render::frame::Frame) {
        render_main(frame);
        if self.palette_open {
            self.palette.render(frame);
        }
    }
}
```

You typically pair this with a modal stack push so focus is trapped to the palette input.

Performance

Scoring N candidates is O(N × L) where L is average query length. The palette’s underlying index is an adaptive radix tree (adaptive_radix.rs) so prefix matches short-circuit to O(L). With typical command sets (100 – 10,000 entries), the palette updates in sub-millisecond time per keystroke on a cold cache, well inside a 16ms frame budget.

Pitfalls

  • Mixing scoring systems. If you extend the palette with external scores, convert them to Bayes factors (log-odds, not raw weights). Multiplying a BF by a heuristic score silently re-weights the whole posterior.
  • Forgetting tag evidence. Commands without tags can’t benefit from the ~3.0 factor; their ranking will lag semantically similar tagged commands.
  • Showing all results. At 10,000 candidates, even a sorted list is overwhelming. The palette caps visible results (default 20) — pair it with RankStability to decide when to cut off.
  • Running the ranker on every keystroke blind. The palette caches the last query’s trie cursor, so extending the query is an incremental update rather than a full re-score. Invalidate the cache only when the query is cleared.
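The first pitfall deserves a concrete recipe. One hedged way to fold an external heuristic score into the ledger is to convert it to odds against a neutral baseline, which yields a proper Bayes factor (the function name and baseline choice here are assumptions, not part of the library):

```rust
/// Convert a normalized heuristic score in (0, 1) into a Bayes factor by
/// comparing its odds against a neutral baseline score. A score exactly at
/// the baseline contributes BF = 1.0, i.e. no evidence either way.
/// Pick the baseline from your score distribution rather than 0.5 blindly.
fn score_to_bayes_factor(score: f64, baseline: f64) -> f64 {
    let odds = score / (1.0 - score);
    let baseline_odds = baseline / (1.0 - baseline);
    odds / baseline_odds
}

fn main() {
    // A neutral score leaves the posterior untouched.
    assert!((score_to_bayes_factor(0.5, 0.5) - 1.0).abs() < 1e-9);
    // A strong external score (0.8 vs baseline 0.5) multiplies the odds by 4.
    assert!((score_to_bayes_factor(0.8, 0.5) - 4.0).abs() < 1e-9);
}
```

Multiplying the resulting factor into the ledger keeps the posterior well-calibrated, whereas multiplying in the raw score would silently re-weight everything.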

Where next