
unified-peek-poke-cache-design-20260105

Attention-based ISA for Wanderland, plus caching semantics.

1. Core ISA: peek / poke over fences

Treat everything as a fence (potential hole). Two primitives:

  • peek(path, params?) → value | needs
    - Resolve path in the graph (slug + path + level).
    - If the node is data: return the data (no side effects).
    - If the node is a fence:
      - If required params are missing → return a needs descriptor: parameter schema, defaults, doc.
      - If params are present → run the fence statelessly and return the result (no materialization).
  • poke(path, payload) → value
    - Resolve path.
    - If payload is data: write the data as the new value at that level; return it.
    - If payload is a fence invocation { params, … }:
      - Run the fence,
      - Materialize the result into the cache/graph at the appropriate level (L0/L4, see below),
      - Return the result.

Everything else (middleware, execution helpers) is expressed in terms of these two.

2. Caching semantics

Assume layered caches / render levels:

  • L0: raw storage (sprout / source).
  • L4: rendered data cache for fences (data result of fence).
  • L5: middleware/rendering cache (e.g., HTML, report output, etc.).

2.1 Data/fence cache (L4)

  • Cache key (L4): K_data = (fence_id, params_serialized)
  • peek(path, params):
    - Resolve → (fence_id, level).
    - If level >= L4 and K_data exists in cache → return cached value.
    - Else run the fence statelessly and return the value; do not write to cache.
  • poke(path, { params }):
    - Resolve → (fence_id, level).
    - Run the fence to get the value.
    - Materialize the value at L4 with key K_data.
    - Return the value.

Effects:

  • Stateless call: peek only.
  • Stateful/materialized call: poke (compute + store at L4).
  • Re‑reading via peek at rendered level with same (path, params) hits cache; reading at lower level recomputes.
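The 2.1 rules can be sketched as follows; the JSON serialization of params and the plain-dict store are illustrative assumptions, not part of the spec.

```python
import json

class DataCache:
    """L4: materialized fence results keyed by K_data = (fence_id, params)."""

    def __init__(self, fences):
        self.fences = fences      # fence_id -> callable(params) -> value
        self.l4 = {}              # K_data -> value

    def key(self, fence_id, params):
        # K_data = (fence_id, params_serialized); sorted keys for a stable key
        return (fence_id, json.dumps(params, sort_keys=True))

    def peek(self, fence_id, params, level=4):
        k = self.key(fence_id, params)
        if level >= 4 and k in self.l4:
            return self.l4[k]                  # cache hit at rendered level
        return self.fences[fence_id](params)   # stateless: compute, do not store

    def poke(self, fence_id, params):
        value = self.fences[fence_id](params)  # always recompute
        self.l4[self.key(fence_id, params)] = value  # materialize at L4
        return value
```

Under this sketch, repeated peeks stay stateless until a poke materializes the entry, after which same-key peeks at the rendered level are hits.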

2.2 Middleware/render cache (L5)

  • Middleware is a pure transform over data: M_chain(value) → rendered.
  • Middleware identity: hash of the middleware config or fence chain: M_id = hash(middleware_chain_config)
  • Cache key (L5): K_render = (K_data, M_id) = (fence_id, params_serialized, M_id)
  • Rendering pipeline:
    - Get data for (fence_id, params) via peek/poke and L4.
    - Check L5: if K_render exists → return cached rendered output.
    - Else compute rendered = M_chain(data); store at L5 under K_render; return.

So you get:

  • L4: “database value” for that fence + params.
  • L5: “view” (HTML/report/etc.) for that data under a particular middleware stack.
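A sketch of the L5 layer under those keys. `get_data` is a hypothetical callable standing in for the peek/poke data layer, and hashing the middleware function names stands in for hashing the real chain config; both are assumptions for illustration.

```python
import hashlib
import json

def render(get_data, l5, fence_id, params, middleware_chain):
    # M_id: here a hash over middleware function names, a stand-in for
    # M_id = hash(middleware_chain_config)
    m_id = hashlib.sha256(
        json.dumps([m.__name__ for m in middleware_chain]).encode()
    ).hexdigest()
    # K_render = (K_data, M_id) flattened into one tuple
    k_render = (fence_id, json.dumps(params, sort_keys=True), m_id)
    if k_render in l5:
        return l5[k_render]                    # cached view
    value = get_data(fence_id, params)         # data from the L4 layer
    for m in middleware_chain:                 # pure transform chain M_chain
        value = m(value)
    l5[k_render] = value                       # store the view at L5
    return value
```

Because M_id is part of the key, changing the middleware stack yields a fresh L5 entry while the same L4 data keeps serving every stack.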

3. When to use peek vs poke

  • Use peek when:
    - You want a pure read (no persistence), or
    - You’re inspecting a fence’s schema (“what parameters do you need?”), or
    - You’re happy to let the L4/L5 caches serve if present, but don’t want to force a recompute.
  • Use poke when:
    - You want to materialize a result (e.g., an expensively computed fence) at L4, or
    - You want to bust and rebuild the cache for a fence at L4 (poke recomputes & overwrites), or
    - You’re intentionally updating the “current reality” of that node’s value.

Think:

  • peek = attention only.
  • poke = attention + agency (change the universe).

4. Cache busting & reruns

4.1 Bust cache for one fence (data)

  • To force a recompute for a specific (fence_id, params) at L4:
    - Delete the K_data entry from L4 (or mark it invalid).
    - The next peek at rendered level will recompute (statelessly, without storing) unless you call poke again to re‑materialize.
  • To force a recompute and materialize: call poke(path, { params }) explicitly; it recomputes and writes L4.
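Section 4.1’s deletion step as a small helper; the (fence_id, params_serialized) key shape follows K_data from section 2.1, and the plain-dict store is an assumption.

```python
import json

def bust_data(l4, fence_id, params=None):
    """Invalidate L4 entries for one fence: a single (fence_id, params)
    entry when params is given, otherwise every entry for that fence."""
    if params is not None:
        l4.pop((fence_id, json.dumps(params, sort_keys=True)), None)  # one entry
        return
    for k in [k for k in l4 if k[0] == fence_id]:                     # all entries
        del l4[k]
```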

4.2 Bust middleware cache only

  • To rerun middleware while keeping the data: delete K_render = (K_data, M_id) from L5.
  • The next render call will:
    - reuse the L4 data (fast),
    - recompute the middleware,
    - refresh L5.
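Section 4.2 as a sketch, assuming the same flattened (fence_id, params_serialized, M_id) key tuples; L4 is untouched, so only the view is rebuilt on the next render.

```python
def bust_render(l5, m_id=None, k_data=None):
    """Drop L5 view entries while keeping L4 data intact.
    Filter by middleware identity, by data key, or by both."""
    def doomed(k):
        fid, ps, mid = k
        return (m_id is None or mid == m_id) and (k_data is None or (fid, ps) == k_data)
    for k in [k for k in l5 if doomed(k)]:
        del l5[k]
```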

4.3 Clear entire caches

  • Clear L4 only: wipe all K_data entries.
    - Subsequent reads recompute from the source fences; L5 entries depending on them should be invalidated as well (or checked via versioning).
  • Clear L5 only: wipe all K_render entries.
    - Data persists; all views recompute on next use.
  • Full rebuild: clear L4 + L5 and rerun as needed; this is your “go back to the previous stage and re‑render the universe” move.
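The three moves in 4.3 folded into one hedged helper. The coupling (clearing L4 also clears L5) encodes the dependent-invalidation note above; a versioning scheme would be the alternative to this blunt approach.

```python
def clear_levels(l4, l5, data=True, render=True):
    """Wipe cache levels. Clearing L4 without L5 risks stale views, so
    clearing data also drops dependent renders here."""
    if data:
        l4.clear()
        l5.clear()      # views derived from cleared data are stale
    elif render:
        l5.clear()      # data persists; views recompute on next use
```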

5. Quick decision guide

  • “I just want to know what’s here.” → peek(path)
    - If fence + no params → schema/needs.
    - If fence + params → computed value, non‑materialized.
    - If data → data.
  • “I want to compute and store this result for later reuse.” → poke(path, { params })
    - Data result written at L4.
    - Future renders can reuse it or build L5 views on top.
  • “I changed the underlying source or config; rerun this fence.” → poke(path, { params }) to recompute and overwrite L4 (and invalidate dependent L5).
  • “I changed how I render (middleware), but the data is fine.” → Clear L5 for the relevant M_id or K_render and call render again; L4 is still valid.

This gives you:

  • A uniform peek/poke ISA for all graph operations.
  • Clear materialization semantics (poke vs peek).
  • Two-level caching (data vs rendered) keyed by fence id, parameters, and middleware identity—exactly analogous to the layered caches between TCP stream and final DOM.


Provenance

Document

  • Status: 🔴 Unverified

North

slots:
- context:
  - ISA specification implements the RAG-as-native-attention pattern
  slug: rag-as-native-attention

West

slots:
- context:
  - ISA implements peek=attention, poke=attention+agency from the thesis
  slug: bidirectional-attention-thesis