loom-edges-model
Loom Edges Model
Two operations. That's it.
The Primitives
| Op | Args | Effect |
|---|---|---|
| snip | (start, end) | Remove tokens in range [start, end) |
| insert | (position, tokens) | Splice tokens at position |
Everything else derives from these.
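As a concrete reference, here is a minimal Python sketch of applying the two primitives to a token list (apply_op is an illustrative name, not Loom's API):

```python
# Op shapes follow the table above: ["snip", start, end] and
# ["insert", pos, tokens]. Later ops may carry extra trailing
# elements, so snip unpacks only the first three.

def apply_op(tokens, op):
    """Return a new token list with one op applied."""
    kind = op[0]
    if kind == "snip":
        _, start, end = op[:3]
        return tokens[:start] + tokens[end:]             # drop [start, end)
    if kind == "insert":
        _, pos, new_tokens = op
        return tokens[:pos] + new_tokens + tokens[pos:]  # splice in at pos
    raise ValueError(f"unknown op: {kind}")

apply_op(["a", "b", "c"], ["snip", 1, 2])    # ["a", "c"]
apply_op(["a", "c"], ["insert", 1, ["b"]])   # ["a", "b", "c"]
```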
Layer Structure
Each layer is just metadata about what changed:
```yaml
layer:
  parent: abc123   # Hash of previous layer
  ops:
    - [insert, 42, [t1, t2, t3]]
    - [snip, 100, 105]
```

Walking the Stack
- Forward (reconstruct current): apply the ops in sequence
- Backward (reconstruct previous): apply the inverse of each op, in reverse order
Reversing:
- insert(pos, tokens) → snip(pos, pos + len(tokens))
- snip(start, end) → insert(start, removed_tokens)
To reverse a snip, you need the removed tokens. Store them in the op:
```yaml
- [snip, 100, 105, [removed, tokens, here]]
```

Now every op is reversible.
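Continuing the earlier sketch, inversion needs only the op and the base state it applies to (invert_op is again an illustrative name):

```python
def invert_op(tokens, op):
    """Compute the op that undoes `op` against the base state `tokens`."""
    if op[0] == "insert":
        _, pos, new_tokens = op
        return ["snip", pos, pos + len(new_tokens), new_tokens]
    if op[0] == "snip":
        start, end = op[1], op[2]
        return ["insert", start, tokens[start:end]]  # re-insert what snip removed
    raise ValueError(f"unknown op: {op[0]}")

# Round trip: an op followed by its inverse restores the base.
base = ["H1", "P", "FENCE", "P"]
op = ["snip", 2, 3]
assert apply_op(apply_op(base, op), invert_op(base, op)) == base
```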
Why Tokens
Not lines. Not characters. Tokens.
Tokens are the semantic unit:
- A heading is one token
- A fence is one token
- A paragraph is one token
Operations at token granularity:
- Coarser than character edits (less noise)
- Finer than section rewrites (precise)
- Aligned with stream processing (natural boundaries)
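To make the granularity concrete, a toy splitter (illustrative only, not Loom's tokenizer) that treats blank-line-separated blocks and fenced code as single tokens:

```python
def block_tokens(text):
    """Toy block tokenizer: headings, paragraphs, fences become one token each."""
    tokens, buf, in_fence = [], [], False
    for line in text.splitlines():
        if line.startswith("```"):
            if not in_fence and buf:      # flush an open paragraph first
                tokens.append("\n".join(buf)); buf = []
            buf.append(line)
            in_fence = not in_fence
            if not in_fence:              # closing fence ends one FENCE token
                tokens.append("\n".join(buf)); buf = []
        elif in_fence:
            buf.append(line)
        elif line.strip() == "":
            if buf:                       # blank line closes a block
                tokens.append("\n".join(buf)); buf = []
        else:
            buf.append(line)
    if buf:
        tokens.append("\n".join(buf))
    return tokens

doc = "# Title\n\nA paragraph.\n\n```py\nprint(1)\n```"
block_tokens(doc)   # ['# Title', 'A paragraph.', '```py\nprint(1)\n```']
```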
Composition Edges
When a fence executes, it produces tokens. The edge records:
```yaml
edge:
  coordinate: "slug:fence.version.execution.step"
  input_hash: hash(input_tokens)
  template_hash: hash(fence_content)
  output_hash: hash(output_tokens)
  ops:
    - [snip, fence_start, fence_end]         # Remove the fence
    - [insert, fence_start, output_tokens]   # Insert result
```

The edge IS the transformation. Template + input → output.
Same (template_hash, input_hash) always produces same output_hash.
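A sketch of what recording such an edge could look like; sha256 over a JSON encoding is an assumption (the document doesn't fix a hash function), and content_hash / record_edge are hypothetical names:

```python
import hashlib, json

def content_hash(value):
    """Hash token data (encoding choice is an assumption)."""
    return hashlib.sha256(json.dumps(value, sort_keys=True).encode()).hexdigest()[:12]

def record_edge(coordinate, input_tokens, fence_content, output_tokens,
                fence_start, fence_end):
    """Build the edge record for one fence execution."""
    return {
        "coordinate": coordinate,
        "input_hash": content_hash(input_tokens),
        "template_hash": content_hash(fence_content),
        "output_hash": content_hash(output_tokens),
        "ops": [
            ["snip", fence_start, fence_end, [fence_content]],  # remove the fence
            ["insert", fence_start, output_tokens],             # insert the result
        ],
    }
```

One consequence: a cache keyed on (template_hash, input_hash) can serve output_tokens without re-running the fence.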
Invariants
- Deterministic: Same ops on same base = same result
- Reversible: Every forward op has an inverse
- Composable: Ops can be chained, squashed, rebased
- Content-addressed: Layers identified by hash of (parent + ops)
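The last invariant might be realized like this (again assuming sha256 over a canonical JSON encoding):

```python
import hashlib, json

def layer_hash(parent_hash, ops):
    """Content address of a layer: a hash over (parent + ops)."""
    payload = json.dumps({"parent": parent_hash, "ops": ops}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]
```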
The Stack
```
L5 (text)   ← insert prose, snip code
L4 (data)   ← insert resolved values, snip holes
L3 (code)   ← insert execution results, snip fences
L2 (tokens) ← the base representation
L1 (bytes)  ← serialization (not usually operated on)
```

Each level is just ops on the level below.
Visualization
```
Base:    [H1] [P] [FENCE] [P] [H2] [P]
           ↓
Op:      snip(2,3), insert(2, [YAML, TEXT, YAML])
           ↓
Result:  [H1] [P] [YAML] [TEXT] [YAML] [P] [H2] [P]
```
Future: Concurrent Ops
With position-based ops, concurrent edits need transformation:
- If I insert at 10 and you snip (5, 8), my position shifts to 7
- Operational transformation (OT) or CRDTs handle this
For now: single-writer, linear history. OT later if needed.
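For illustration, the simplest such transformation, shifting an insert position past a concurrent snip, might look like this (a hypothetical helper, not a committed design):

```python
def transform_insert_pos(pos, snip_start, snip_end):
    """Shift an insert position to account for a concurrent snip."""
    if pos <= snip_start:
        return pos                            # insert lands before the snip
    if pos >= snip_end:
        return pos - (snip_end - snip_start)  # shifted left by the snip's length
    return snip_start                         # insert fell inside the removed range

assert transform_insert_pos(10, 5, 8) == 7    # the example above
```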
Dimension Architecture
One file per dimension. WAL with snapshots. TTL-based rolloff.
The Three Dimensions
Each node can have delta files for different concerns:
| Dimension | File | Purpose | TTL |
|---|---|---|---|
| versions | slug.versions.json | Document edits | Forever |
| executions | slug.executions.json | Fence runs | Medium |
| levels | slug.levels.json | L3/L4/L5 transforms | Short |
File Structure
Each dimension file is a write-ahead log:
```json
# slug.versions.json
{
  "slug": "my-node",
  "dimension": "versions",
  "deltas": [
    {"parent": null, "ops": [...], "hash": "abc123", "ts": "..."},
    {"parent": "abc123", "ops": [...], "hash": "def456", "ts": "..."}
  ],
  "snapshots": [
    {"hash": "abc123", "position": 0, "tokens": [...]},
    {"hash": "xyz789", "position": 100, "tokens": [...]}
  ],
  "head": "latest_hash"
}
```

Walking History
- From scratch: apply deltas from the beginning
- From snapshot: jump to the snapshot, apply deltas forward
- Backward: reverse the deltas
```
Base → Δ₁ → Δ₂ → [Snapshot] → Δ₃ → Δ₄ → HEAD
                     ↑                   ↑
                materialize           current
```

Snapshots
Insert snapshots when:
- Every N deltas (e.g., 100)
- When the delta chain gets heavy
- On explicit request
Snapshots ARE just deltas with full token state cached.
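A sketch of snapshot-based materialization against the file layout above, reusing apply_op from the earlier sketch and assuming position is the index of the last delta folded into the snapshot:

```python
def materialize(dim_file):
    """Reconstruct HEAD tokens from the nearest snapshot forward."""
    deltas, snapshots = dim_file["deltas"], dim_file["snapshots"]
    if snapshots:
        snap = max(snapshots, key=lambda s: s["position"])
        tokens, start = list(snap["tokens"]), snap["position"] + 1
    else:
        tokens, start = [], 0                 # no snapshot: replay from scratch
    for delta in deltas[start:]:
        for op in delta["ops"]:
            tokens = apply_op(tokens, op)
    return tokens
```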
TTL and Rolloff
Per-dimension TTL in node metadata:
```yaml
loom:
  ttl:
    versions: null     # Keep forever
    executions: 30d    # 30 days
    levels: 1d         # 1 day
  rolloff:
    max_deltas: 1000   # Keep at most N
    max_size: 10MB     # Or total size
```

When rolling off (see the sketch below):
- Ensure snapshot exists at rolloff point
- Delete old deltas
- Snapshot becomes new base
Like Jenkins: keep recent, roll off old, maintain snapshots.
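Putting those steps together under the max_deltas budget (a sketch; materialize_prefix is a hypothetical helper that replays the first n deltas, and reindexing surviving snapshot positions is elided):

```python
def roll_off(dim_file, max_deltas):
    """Trim old deltas, leaving a snapshot as the new base."""
    deltas = dim_file["deltas"]
    if len(deltas) <= max_deltas:
        return
    cut = len(deltas) - max_deltas            # deltas[:cut] roll off
    base = deltas[cut - 1]
    # 1. Ensure a snapshot exists at the rolloff point.
    if all(s["hash"] != base["hash"] for s in dim_file["snapshots"]):
        dim_file["snapshots"].append({
            "hash": base["hash"],
            "position": cut - 1,
            "tokens": materialize_prefix(dim_file, cut),  # hypothetical helper
        })
    # 2. Delete old deltas; the snapshot becomes the new base.
    dim_file["deltas"] = deltas[cut:]
```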
Related
- @loom-workflow-guide - Workflows produce token sequences via ops
- @lebowski-architecture - Event sourcing for state (same principle)
- @pause-fetch-splice-continue - The fundamental algorithm
Provenance
Document
- Status: 🟡 Draft
- Author: Claude + Graeme
Changelog
- 2026-01-12: Initial model captured from conversation
East
slots:
  - slug: loom-workflow-guide
    context:
      - Sibling concepts - workflows produce edges

North

slots:
  - slug: loom-user-guide
    context:
      - Edges model as child of main loom guide