oculus-v2-first-principles
Oculus V2: Derivation from First Principles
The fundamental algorithm of the universe applied to knowledge graphs
The Invariant
Pause → Fetch → Splice → Continue
This is not a design choice. It is the only way information systems can operate.
| Phase | What Happens | Creates |
|---|---|---|
| Pause | Identify hole/need | Query (Q) |
| Fetch | Retrieve what fills the hole | Value (V) |
| Splice | Insert value into stream | Enriched context |
| Continue | Resume processing | Next pause point |
If we never deviate from this pattern, we cannot have bugs. Every bug is a violation of the invariant.
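The loop above can be sketched in a few lines of Python. The stream shape and the `resolve` lookup here are illustrative stand-ins, not the real engine:

```python
def run(stream, resolve):
    """Walk a stream of parts; each hole pauses the walk until it is filled."""
    out = []
    for part in stream:
        if isinstance(part, dict) and "hole" in part:  # PAUSE: a hole is found
            value = resolve(part["hole"])              # FETCH: get what fills it
            out.append(value)                          # SPLICE: insert the value
        else:
            out.append(part)                           # CONTINUE: plain content
    return out

# Usage: a stream with one hole, resolved from a tiny lookup table.
stream = ["Hello, ", {"hole": "name"}, "!"]
print("".join(run(stream, {"name": "world"}.get)))  # Hello, world!
```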
Derivation
Level 0: The Stream
Everything starts with a stream. Markdown is a stream of characters.
stream: char[] → tokens[] → ast[]

The AST is just the stream with structure. Structure is where the holes are.
Level 1: The Hole
A hole is a place where the stream references something outside itself.
{{include:other-node}} ← hole: needs content from elsewhere
${variable} ← hole: needs value resolution
```python[execute=true]` ← hole: needs execution result

Holes create pause points. The stream cannot continue until the hole is filled.
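A minimal hole scanner for the syntaxes above can be sketched with a regex, assuming the `{{include:…}}` and `${…}` forms shown (an illustration, not the production parser):

```python
import re

# Hypothetical hole syntaxes from the text: {{include:...}} and ${...}
HOLE_RE = re.compile(r"\{\{include:([^}]+)\}\}|\$\{([^}]+)\}")

def find_holes(text):
    """Return (position, kind, target) for every pause point in the stream."""
    holes = []
    for m in HOLE_RE.finditer(text):
        if m.group(1) is not None:
            holes.append((m.start(), "include", m.group(1)))
        else:
            holes.append((m.start(), "variable", m.group(2)))
    return holes

doc = "intro {{include:other-node}} uses ${region}"
print(find_holes(doc))  # [(6, 'include', 'other-node'), (34, 'variable', 'region')]
```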
Level 2: The Fill
Filling a hole is always the same operation:
pause → "I need X"
fetch → get(X)
splice → stream[hole_position] = fetched_value
continue → resume stream processing

Level 3: The Level
The level parameter determines how we fetch:
| Level | Query | Returns |
|---|---|---|
| L3 (Seed) | "what are you?" | Raw content |
| L4 (Sprout) | "what do you produce?" | Executed/resolved data |
| L5 (Stalk) | "how do you present?" | Rendered document |
Execute is not a separate operation. It's fetch at L4.
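The claim "execute is fetch at L4" can be sketched as a single dispatch, with toy `execute`/`render` stand-ins for the real transforms:

```python
# A sketch of fetch-at-level dispatch; the transform functions are illustrative.
L3, L4, L5 = 3, 4, 5

def fetch(node, level=L3):
    raw = node["content"]            # L3 (Seed): raw content
    if level == L3:
        return raw
    data = node["execute"](raw)      # L4 (Sprout): executed/resolved data
    if level == L4:
        return data
    return node["render"](data)      # L5 (Stalk): rendered document

node = {
    "content": "1 + 2",
    "execute": lambda src: eval(src),        # toy "execution"
    "render": lambda val: f"<p>{val}</p>",   # toy "presentation"
}
print(fetch(node, L3))  # 1 + 2
print(fetch(node, L4))  # 3
print(fetch(node, L5))  # <p>3</p>
```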
Level 4: The Layer
Writing doesn't mutate. Writing creates a layer.
poke(path, value) → append(layers, diff(current, value))

The document at any point = base + layer₁ + layer₂ + ... + layerₙ
Time travel is just: base + layers[0:n]
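The layer algebra can be sketched with dict-merge deltas (a simplifying assumption; real deltas are diffs over content):

```python
# A sketch of layered reads: state at generation n = base + layers[0:n].
def compose(base, layers, n=None):
    """n=None means HEAD (all layers applied)."""
    state = dict(base)
    for layer in (layers if n is None else layers[:n]):
        state.update(layer)  # apply_delta for dict-shaped content
    return state

base = {"status": "draft"}
layers = [{"status": "review"}, {"status": "approved", "reviewer": "alice"}]

print(compose(base, layers))       # HEAD
print(compose(base, layers, n=1))  # time travel: after the first layer only
```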
The Primitives
We need exactly two operations:
peek(stream, path, level)
def peek(stream, path, level=L3):
"""Read from stream at path and level"""
pause() # Identify target
value = fetch(stream, path, level)
return value

poke(stream, path, value, context)
def poke(stream, path, value, context):
"""Write to stream - creates layer, doesn't mutate"""
current = peek(stream, path, L3)
layer = diff(current, value)
append_layer(stream, path, layer, context)

That's it. Everything else is composition.
What We Don't Need
- Graph structure - Emerges from holes that reference other streams
- Separate execute() - It's peek at L4
- Mutable state - Layers are append-only
- Complex caching - Level IS the cache key
Implementation Order
- Stream parser - char[] → ast[]
- Hole detector - Find pause points in stream
- Peek at L3 - Return raw content at path
- Peek at L4 - Execute holes, return data
- Peek at L5 - Render for presentation
- Poke - Create diff layer
- Layer composition - base + layers = current state
The Bet
If we follow the invariant perfectly, we will have zero bugs that aren't input validation. Every bug in V1 was a violation of pause→fetch→splice→continue.
Let's prove it.
The Lottery Ticket Engine
Patterns with holes, running in parallel over the stream.
Pattern Structure
A pattern is a sequence of matchers and holes:
TODO_PATTERN = [
# Matcher: literal token
{'match': 'heading', 'level': 3, 'text_starts': 'TODO:'},
# Hole: capture until end marker
{'hole': 'content', 'until': 'heading|EOF'},
]
SECTION_PATTERN = [
{'match': 'heading', 'capture': 'title'},
{'hole': 'body', 'until': 'heading_same_or_higher|EOF'},
]

State Machine
Each pattern matcher is a state machine:
IDLE → [tip matches] → MATCHING → [hole reached] → FILLING → [end marker] → MATCHING → ... → COMPLETE
↓ ↓ ↓
stay IDLE          [mismatch] → DEAD            [mismatch] → DEAD

Parallel Execution
def lottery_walk(tokens, patterns):
active = [] # Tickets still in play
winners = [] # Completed matches
for token in tokens:
# Check tips - can new patterns start?
for pattern in patterns:
if token.matches(pattern.tip):
active.append(MatchState(pattern, position=0))
# Advance active matches
for match in active:
if match.state == MATCHING:
if token.matches(match.next_expected):
match.advance()
if match.next_is_hole:
match.state = FILLING
elif match.is_complete:
winners.append(match)
match.state = DEAD
else:
match.state = DEAD
elif match.state == FILLING:
if token.matches(match.hole_end_marker):
match.close_hole()
match.state = MATCHING
else:
match.hole_buffer.append(token)
# Prune dead tickets
active = [m for m in active if m.state != DEAD]
return winners

The Insight
Virtual fences ARE patterns with holes.
The lottery ticket engine doesn't just detect virtual fences - it IS the mechanism. A virtual fence definition is just a pattern with:
- Tip token (what starts it)
- Capture holes (what to extract)
- End markers (what terminates it)
No Separate Detection Phase
In V1, virtual fence detection is a separate pass. In V2:
parse() → lottery_walk() → filled stream

One pass. Patterns and holes unified.
The Full Architecture
The 3D Space
Every document exists in three dimensions:
Z (Levels)
↑
│ L5 (document)
│ L4 (data)
│ L3 (code)
│
└────────────────→ X (Stream: tokens)
╱
╱
╱
Y (Layers: v1 → v2 → v3 → ...)

| Axis | What It Is | Event Sourcing |
|---|---|---|
| X | Stream of tokens | The content |
| Y | Stack of layers | Temporal changes (versions) |
| Z | Transformation levels | Semantic changes (code→data→doc) |
The Immutable Base
The first save is the only time the full object exists.
def create(slug, content):
"""First and only time we write a full object"""
base = {
'content': content,
'created': now(),
'context': context
}
store_base(slug, base) # This is THE object
return slug

After this, the base NEVER changes. Ever.
Everything Else is Deltas
def poke(slug, path, value, context):
"""Never overwrites. Always appends a layer."""
current = peek(slug, path, L3)
layer = {
'delta': diff(current, value),
'path': path,
'context': context, # WHY you changed it
'timestamp': now(),
'parent': current_head(slug)
}
append_layer(slug, layer) # Double-entry: the change + the reason
return layer['id']

Copy on Write
If you want to modify, you don't modify. You:
- Read the current state (compose base + layers)
- Create a new layer with the diff
- Append the layer
# This is the ONLY write operation
def append_layer(slug, layer):
layers = get_layers(slug)
layers.append(layer)
# The base is untouched. Always.

Reading is Composition
def peek(slug, path, level=L3):
"""Compose base + layers to get current state"""
base = get_base(slug)
layers = get_layers(slug)
# Fold layers onto base
current = base['content']
for layer in layers:
current = apply_delta(current, layer['delta'])
# Navigate to path
value = resolve_path(current, path)
# Transform to level
return transform_to_level(value, level)

Time Travel is Free
def peek_at_version(slug, path, version, level=L3):
"""Read state at any point in history"""
base = get_base(slug)
layers = get_layers(slug)[:version] # Just take fewer layers
current = base['content']
for layer in layers:
current = apply_delta(current, layer['delta'])
return transform_to_level(resolve_path(current, path), level)

The Meaning is in the Edges
Each layer carries:
- What changed (the delta)
- Why it changed (the context)
- When it changed (the timestamp)
- Where it came from (the parent)
"When you look back on your life, you don't see an object. You see all of the transformations."
The history of transformations IS the document's meaning. The base is just the seed.
Double-Entry Accounting
Every write is balanced:
DEBIT: layer.delta (the change)
CREDIT: layer.context (the reason)

You can't have one without the other. Every mutation is accounted for.
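The double-entry rule can be sketched as a check at the write boundary; `make_layer` is a hypothetical helper, not part of the primitive set:

```python
# Double-entry sketch: a delta without a context is an unbalanced write
# and is rejected before it can become a layer.
def make_layer(delta, context):
    if not context:
        raise ValueError("unbalanced write: delta without context")
    return {"delta": delta, "context": context}

layer = make_layer({"status": "approved"}, "reviewer sign-off")
try:
    make_layer({"status": "approved"}, "")
except ValueError as e:
    print(e)  # unbalanced write: delta without context
```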
Two Dimensions of Event Sourcing
Temporal (Y axis): Version history
base → layer₁ → layer₂ → layer₃ → current
↑ ↑ ↑
context   context   context

Semantic (Z axis): Transformation levels
L3 (code) → L4 (data) → L5 (document)
↑ ↑ ↑
execute   render   present

Both are event-sourced. Both are append-only. Both carry meaning.
The Complete Primitives
# CREATE - only way to make a base
create(slug, content, context) → slug
# READ - compose base + layers, transform to level
peek(slug, path=None, level=L3, params=None) → value
# WRITE - append layer with delta + context
poke(slug, path, value, context) → layer_id
# TIME TRAVEL - peek at specific version
peek_at(slug, path, version, level=L3) → value
# HISTORY - get all layers (the meaning)
history(slug) → layers[]

That's it. Five primitives. Everything else is composition.
The Invariants
- Base is immutable - Written once on create, never touched again
- Layers are append-only - No layer is ever modified or deleted
- Every layer has context - No change without reason
- Reading is composition - base + Σ(layers) = current state
- Writing is diffing - current - new = layer
- Levels are transformations - L3 → L4 → L5, each derived from below
If we follow these invariants, we cannot corrupt data. The base is the rug. It ties the room together. And nobody pisses on it.
The Attention Model
Every object is queryable across four dimensions:
The Four Dimensions
K-space (metadata)
╱
╱ tags, types, attributes
╱ "what kind of thing?"
╱
╱ C-space (cache/TTL)
─────────────────────────────→ "is it fresh?"
╱│
╱ │ Z-space (levels)
╱ │ L3 → L4 → L5
╱ │ "how transformed?"
╱ │
╱ ↓
Y-space (generations)
v1 → v2 → v3 → HEAD
"which version?"

| Dimension | What It Is | Query Parameter |
|---|---|---|
| K-space | Metadata (tags, types, attributes) | k_filter |
| Z-space | Transformation levels (L3→L4→L5) | level |
| Y-space | Generations (versions, layers) | generation |
| C-space | Cache freshness (TTL) | ttl / cache key |
Agency is Stream→Stream
When we execute a fence, we're not leaving the stream world. Agency is just a transform:
fence content → transform → fence output
(stream)        (the agency)      (stream)

| Fence Type | Stream In | Transform (Agency) | Stream Out |
|---|---|---|---|
| yaml | raw yaml text | parse() | data structure |
| python | source code | execute() | stdout/return |
| sql | query text | run_query() | result rows |
| include | reference | fetch() | included content |
The boundary with external systems is just a black-box transform:
our stream → [boundary] → their magic → [boundary] → stream back

We send stream, we receive stream. The invariant holds at our level. What happens inside Python/AWS/whatever is their business.
Indexing
As we walk the stream, we build two indexes:
Header Index
Maps header paths to stream positions:
headers = {
"config": 42, # position in token stream
"config.aws": 87,
"config.aws.region": 102,
}

Fence Index
Maps fence identifiers to position + metadata:
fences = {
"config.aws.yaml": {
"pos": 110,
"type": "yaml",
"label": "aws-config",
"k": {"tags": ["config", "aws"]}
},
"build.python": {
"pos": 250,
"type": "python",
"execute": True,
"k": {"tags": ["build"]}
}
}

One pass builds both. Walk the stream, record headers and fences as you encounter them.
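The one-pass walk might look like this over a toy token stream (the token shapes here are illustrative, not the real tokenizer's):

```python
# One-pass index builder: headers and fences recorded in a single walk.
def build_indexes(tokens):
    headers, fences, path = {}, {}, []
    for pos, tok in enumerate(tokens):
        if tok["type"] == "heading":
            # Truncate the path to this heading's depth, then descend.
            path = path[:tok["level"] - 1] + [tok["text"]]
            headers[".".join(path)] = pos
        elif tok["type"] == "fence":
            fences[".".join(path + [tok["lang"]])] = {"pos": pos, "type": tok["lang"]}
    return headers, fences

tokens = [
    {"type": "heading", "level": 1, "text": "config"},
    {"type": "heading", "level": 2, "text": "aws"},
    {"type": "fence", "lang": "yaml"},
]
headers, fences = build_indexes(tokens)
print(headers)  # {'config': 0, 'config.aws': 1}
print(fences)   # {'config.aws.yaml': {'pos': 2, 'type': 'yaml'}}
```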
Path Resolution
Paths cross boundaries from headers into fences into data:
A.B.C.yaml.x.y.z
├───┤├───┤├─────┤
header fence data
walk   id   walk

def resolve(path):
parts = path.split('.')
# Phase 1: Walk headers until fence identifier
pos = 0
for i, part in enumerate(parts):
if is_fence_identifier(part):
fence = find_fence(pos, part)
data_path = parts[i+1:]
break
else:
pos = headers[current_path + '.' + part]
# Phase 2: Walk into fence data
if data_path:
content = tokens[fence.pos]
parsed = parse(content, fence.type)
return walk_data(parsed, data_path)
else:
return fence

Caching
Cache Key
cache_key = hash(slug, path, level, generation, params)

Different queries produce different cache entries:
- peek(node, config, L3) → one key
- peek(node, config, L4) → different key
- peek(node, config, L4, params={env: "prod"}) → different key
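A sketch of the key function using a real hash; the exact key recipe (repr of the tuple) is an assumption for illustration:

```python
import hashlib

# Content-addressed cache key: any change to slug, path, level,
# generation, or params produces a new key, so stale entries are never hit.
def cache_key(slug, path, level, generation, params=None):
    raw = repr((slug, path, level, generation, params)).encode()
    return hashlib.sha256(raw).hexdigest()

k1 = cache_key("node", "config", "L4", "HEAD")
k2 = cache_key("node", "config", "L4", "HEAD", params={"env": "prod"})
print(k1 != k2)  # True: params change the key
```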
TTL Semantics
| Source | TTL | Why |
|---|---|---|
| Token stream | ∞ | Immutable (base never changes) |
| Internal computation | ∞ | Deterministic from stream |
| External fence (no params) | configurable | AWS/API might change |
| External fence (with params) | configurable | Query-specific |
def peek(slug, path, level=L3, generation=HEAD, params=None, ttl=None):
key = cache_key(slug, path, level, generation, params)
cached = cache.get(key)
if cached and not expired(cached, ttl):
return cached.value
# Miss or expired - do the fetch
value = fetch_and_transform(slug, path, level, generation, params)
cache.set(key, value, ttl=ttl)
return value

Cache Invalidation
When underlying stream changes (poke to source):
- Invalidate all cache entries for that slug
- Or: content-addressed cache (hash of content = key) → automatic
When TTL expires:
- Next peek triggers fresh fetch
- Old entry discarded
Complete Primitive Set
# CREATE - make base (immutable forever)
create(slug, content, context) → slug
# READ - the universal read
peek(slug, path=None, level=L3, generation=HEAD, params=None, ttl=None) → value
# level: L3 (code) | L4 (data) | L5 (document)
# generation: HEAD | layer_number
# params: if present, execute with params
# ttl: cache lifetime for external fetches
# READ MANY - attention over corpus
attend(q: {k, level, generation}) → values[]
# WRITE - append layer (never mutate)
poke(slug, path, value, context) → layer_id
# HISTORY - get all layers
history(slug) → layers[]
# INDEX - manage Ks
tag(slug, path, k) → updated

peek does everything:
- peek(slug) → whole document, L3, HEAD, from cache
- peek(slug, "config.yaml.x") → nested data, L3, HEAD
- peek(slug, "build.python", L4) → executed, HEAD
- peek(slug, "api.python", L4, params={q: "test"}, ttl=300) → execute with params, cache 5min
Middleware on the Edges
Middleware is the transformation that happens between fetch and splice. It's the "processing" step at every boundary crossing.
pause → fetch → [MIDDLEWARE] → splice → continue
↑
transform the value

The Middleware Pattern
Every boundary has middleware:
def peek_with_middleware(slug, path, level, ...):
# PAUSE - identify hole
target = resolve(path)
# FETCH - get raw value
raw = fetch(target)
# MIDDLEWARE - transform at boundary
for mw in middlewares:
raw = mw.transform(raw, context)
# SPLICE (implicit - return value)
return raw

Types of Middleware
| Boundary | Middleware | What It Does |
|---|---|---|
| Level transition | L3→L4 | Parse, execute, resolve references |
| Level transition | L4→L5 | Render, format, present |
| External fetch | aws, http | Serialize request, parse response |
| Write boundary | poke | Diff, validate, create layer |
| Read boundary | peek | Compose layers, cache lookup |
Middleware is Composable
middleware_chain = compose(
validate, # Check input
transform, # Convert format
enrich, # Add metadata
cache, # Store for reuse
log # Record for audit
)
def process_at_boundary(value, context):
return middleware_chain(value, context)

Edge Middleware Examples
Inbound (fetch side):
def parse_yaml_middleware(raw, ctx):
"""Transform raw YAML to data structure"""
if ctx.fence_type == 'yaml':
return yaml.safe_load(raw)
return raw
def resolve_refs_middleware(data, ctx):
"""Resolve any {{ref}} patterns in the data"""
if has_refs(data):
return resolve_all_refs(data)
return data

Outbound (splice side):
def add_provenance_middleware(value, ctx):
"""Tag value with where it came from"""
return {
'value': value,
'source': ctx.source_path,
'fetched_at': now()
}
def validate_middleware(value, ctx):
"""Ensure value meets schema before splicing"""
if ctx.schema:
validate(value, ctx.schema)
return value

Why Middleware Matters
- Separation of concerns - Core algorithm stays pure
- Composability - Stack middlewares like lenses
- Testability - Each middleware is a pure function
- Extensibility - Add new transforms without touching core
The invariant (pause→fetch→splice→continue) stays pristine. Middleware is where all the "impure" work happens - parsing, validation, enrichment, logging.
Middleware IS the difference between levels.
L3 → [parse middleware] → L4 → [render middleware] → L5

Each level transition is just middleware application.
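A sketch of level transitions as middleware application, with illustrative parse/render stand-ins (JSON parsing and a trivial wrapper):

```python
import json

# Each level transition is one middleware application.
def parse_mw(value):   # L3 -> L4: code/text to data
    return json.loads(value)

def render_mw(value):  # L4 -> L5: data to document
    return f"<pre>{value}</pre>"

def transform_to_level(l3_value, level):
    value = l3_value
    for mw in [parse_mw, render_mw][: level - 3]:  # apply 0, 1, or 2 middlewares
        value = mw(value)
    return value

print(transform_to_level('{"a": 1}', 3))  # raw code (unchanged)
print(transform_to_level('{"a": 1}', 4))  # parsed data
print(transform_to_level('{"a": 1}', 5))  # rendered document
```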
Standard Middleware Stack
# The default middleware chain for peek
PEEK_MIDDLEWARE = [
cache_check, # Check C-space first
layer_compose, # Apply Y-space layers
level_transform, # Apply Z-space transformation
reference_resolve, # Fill any remaining holes
provenance_tag, # Mark where it came from
cache_store, # Store in C-space for next time
]
# The default middleware chain for poke
POKE_MIDDLEWARE = [
validate_schema, # Check value against schema
compute_diff, # Create delta from current
create_layer, # Build the layer structure
append_layer, # Write to Y-space
invalidate_cache, # Clear C-space entries
emit_event, # Notify watchers
]

The core algorithm is just the loop. Middleware does all the work.
The Observer Model
Observer Coordinate Integration
Implemented 2026-01-11
The observer now points at full coordinates in the space:
Position(
slug: str, # Which document
section: str, # Which section
level: int, # Which level (0-5)
version: str, # HEAD or hash
)
# Convert to/from Coordinate
coord = position.to_coordinate()
position = Position.from_coordinate(coord)

Observer methods:
# Peek at any coordinate
observer.peek(slug, section=None, level=None, version="HEAD", params=None)
# Peek at current position
observer.peek_here(section=None, level=None)
# Scrub timeline at current position
observer.scrub(version="abc123") # Returns cached or None for historical
# View at different level (returns new observer)
obs_l3 = observer.at_level(3)
# View at different version (returns new observer)
obs_v2 = observer.at_version("abc123")

Cache semantics enforced:
- peek(..., version="HEAD") → compute on miss
- peek(..., version="abc123") → return cached or None (no computation)
Observer = Position + Accumulated Context
An observer is not just a user. It's the accumulation of a session:
@dataclass
class Observer:
position: str # Current node slug
context: List[Delta] # Accumulated deltas from journey
user: str # User identity (facts that merge in)
device: str # Device identity (different device = different accumulation)
session_id: str  # Session scope for context rolloff

Different device = different accumulated context. Even if same user, the journey is different.
Navigation Fires Middleware
When you navigate, you have:
- Source: where you were
- Destination: where you're going
- Via chain: intermediate transforms
def navigate(observer, direction):
source = observer.position
destination = resolve_direction(source, direction)
# Get via chain from slot
slot = get_slot(source, direction)
vias = slot.get('vias', [])
# Build navigation context
nav_ctx = {
'source': source,
'destination': destination,
'observer': observer,
'accumulated': observer.context,
}
# Run through middleware chain
for via in vias:
nav_ctx = via.transform(nav_ctx)
# Update observer position
observer.position = destination
observer.context.append(Delta(
action='navigate',
from_=source,
to=destination,
timestamp=now()
))
return nav_ctx

Via Chains
Slots can have vias - intermediate transforms:
slots:
- slug: spell-check
label: "Spell Check" # Display label
via:
- middleware: spell-check-engine
destination: self # Back to yourself

"Spell Check" navigates you through the spell-check middleware, then back to yourself with the output tokens spliced.
Labels over Slugs
Slots can have labels that render over the raw slug:
slots:
- slug: raw-technical-node-name
label: "Friendly Display Name"
context:
- Why this connection exists

The label is what the observer sees. The slug is what the system uses.
The Viewport
The viewport is what the observer interacts with:
@dataclass
class Viewport:
observer: Observer # Who's looking
rendered: Stream # What they see
controls: List[Action] # What they can do

Oculus feeds a viewport for an observer.
Observer → [position + context] → Oculus → [peek + middleware] → Viewport

The viewport is composable:
- Multiple panes
- Filtered views
- Transformed presentations
Session Lifecycle
start_session(user, device) → Observer(position=home, context=[])
↓
navigate, peek, poke... → context accumulates
↓
context.rolloff(ttl) → old deltas expire (budget constraint)
↓
end_session() → context archived or discarded

Clear context = start fresh session. The accumulated context is session-scoped, not permanent.
The Complete Picture
User + Device → Observer → navigates → Middleware fires → Viewport updates
↓
context accumulates (deltas)
↓
budget/TTL for rolloff

The observer IS the accumulation. When you reason about the system:
- Where is the observer?
- What have they seen?
- What can they do next?
Everything else is just peek, poke, and middleware.
Viewport Composition
Request Any Shape
The viewport isn't a fixed view. It's whatever shape you ask for:
# My Dashboard
## Current Node
{{include: $observer.position}}
## Recent History
{{for item in $observer.context[-5:]}}
- {{item.action}}: {{item.from}} → {{item.to}}
{{endfor}}
## AWS Status
{{peek: aws-status:fetch-status, level=L4}}

Submit this template → engine fills the holes → viewport renders result.
You compose a markdown document on the fly. We render it.
Observer Storage Layers
The observer's journey is a write-ahead log. It can live anywhere:
┌─────────────────────────────────────────────────────────┐
│ Observer WAL │
├─────────────────────────────────────────────────────────┤
│ Browser: localStorage │
│ Vim/CLI: file system (~/.oculus/session.jsonl) │
│ MCP: proxy captures through the connection │
│ Claude: memory (you get this for free) │
└─────────────────────────────────────────────────────────┘

All storage backends implement the same interface:
def append(delta): ...
def replay() -> List[Delta]: ...
def checkpoint(generation): ...

Capabilities from Context
If you don't have the data, the operation is a no-op:
def check_schedule(observer):
schedule = observer.context.get('schedule')
if not schedule:
return None # No capability, nothing happens
return schedule.next_event()

Navigate to a node that gives you schedule data → now you have the capability.
Context determines what you can do. No permissions model needed - just presence or absence of data.
Observers Are Graph Data
Observers themselves live in the graph:
# Observer is just another document
create(f"observer:{session_id}", {
'user': 'graeme',
'device': 'macbook-home',
'position': 'dashboard',
'context': [],
'viewport': None,
'started': now()
}, "Session started")
# Navigation is just poke
poke(f"observer:{session_id}", 'position', 'new-node', "Navigated south")
poke(f"observer:{session_id}", 'context', append(delta), "Context updated")

Who's Looking at What
Query observers like any other data:
# Find all active observers
attend(
k_filter={'type': 'observer', 'active': True},
level=L4
)
# See what someone else is looking at
other_viewport = peek(f"observer:{other_session}", 'viewport', L5)

Visibility is free. Observers are data. Data is queryable.
Multi-Poke Transactions
Atomic batch writes - single layer, multiple deltas:
with transaction(slug) as tx:
tx.poke("status", "approved", "batch update")
tx.poke("reviewer", "alice", "batch update")
tx.poke("approved_at", now(), "batch update")
# Commits as ONE layer
# Under the hood:
layer = {
'delta': {
'status': 'approved',
'reviewer': 'alice',
'approved_at': '2026-01-10T...'
},
'context': 'batch update',
'timestamp': now(),
'parent': current_head
}

Materialized Views
A materialized view is just a cached composition:
def materialize(slug, name, query, ttl=None):
"""Create a materialized view - snapshot of composed state"""
result = attend(query)
create(f"view:{slug}:{name}", {
'source_query': query,
'materialized_at': now(),
'data': result
}, f"Materialized view of {query}")
if ttl:
schedule_refresh(slug, name, ttl)

Checkpoint = materialize the full document at a generation. View = materialize a query result.
Same operation, different scope.
Provenance
Fence Execution Architecture
Derived 2026-01-11
Unified Stack Implementation
Derived 2026-01-11 with Claude
The Insight: Two Systems, One Pipeline
Two parallel level systems existed in Loom:
- core/levels.py: L3/L4/L5 (code→data→document) - had the real transformers
- layers/core.py: L0-L4 (tokens→evolved→rendered→projected→presented) - had the infrastructure but placeholder transformers
They needed unification. One stack, one pipeline.
The Unified Stack (L0-L5)
L0: File (raw markdown string)
↓ tokenize
L1: Tokens (tokenized stream)
↓ evolve
L2: Evolved (viruses resolved - holes plugged)
↓ render
L3: Rendered (includes spliced - COMPOSITION COMPLETE)
↓ execute
L4: Executed (fences run - code → data)
↓ present
L5: Presented (final output - data → document)

Composition ends at L3. Everything after is transformation.
The Narrow Waist: One Compositor
All hole-filling goes through ONE function - the Compositor:
def compose(template, hole_path, value, context) -> CompositionResult:
"""
THE composition surface.
Every interpolation, every include, every hole-fill
goes through here. One place to:
- Record the edge
- Track the composition
- Interface with attention
"""
template_hash = hash(template)
value_hash = hash(value)
result = splice(template, hole_path, value)
result_hash = hash(result)
# Record edge - this is the ONLY place
edge = CompositionEdge(
template_hash=template_hash,
value_hash=value_hash,
result_hash=result_hash,
operation=context.operation,
path=hole_path,
)
composition_log.record(edge)
return CompositionResult(result, edge)

One compositor = one place to instrument all composition events.
Coordinate Space
Every peek targets a coordinate:
Coordinate(
slug: str, # Which document
section: str, # Where in document
level: Level, # How far up stack (L0-L5)
version: str, # Which point on timeline (HEAD or hash)
params: tuple, # Execution context (for L4+)
)

Cache Semantics
peek(coord) where version=HEAD:
→ cache hit? return it
→ cache miss? compute L0→L1→...→level, cache each, return
peek(coord) where version=historical:
→ cache hit? return it
→ cache miss? return None (no computation - checkpoint or nothing)

The cache IS the coordinate space. Each cell either exists or doesn't.
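The two policies can be sketched with a dict-backed cache and a hypothetical `compute()` standing in for the full L0→level pipeline:

```python
cache = {}

def compute(coord):
    """Stand-in for the L0 -> level pipeline."""
    return f"value at {coord}"

def peek(coord, version="HEAD"):
    key = (coord, version)
    if key in cache:
        return cache[key]          # hit: either policy returns it
    if version == "HEAD":
        cache[key] = compute(coord)  # miss on HEAD: compute and fill
        return cache[key]
    return None                      # miss on historical: checkpoint or nothing

print(peek("doc:intro"))            # computed on first HEAD read
print(peek("doc:intro", "abc123"))  # None: no historical checkpoint cached
```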
Edge-Based Composition Tracking
When filling holes during composition (L2→L3), record every combination:
CompositionEdge(
template_hash: str, # Hash of template being filled
value_hash: str, # Hash of value being spliced
result_hash: str, # Hash of composed result
operation: str, # "interpolate", "include", etc.
path: str, # Hole path in template
)

The sparse matrix:
B@v1 B@v2 B@v3
A@v1 R1 R2 -
A@v2 R3 R4 R5
A@v3   -    -    R6

Only combinations that actually happened are recorded.
Two ways to get a result:
- Cache hit - fast, return materialized view
- Replay - slower, fetch A@vN + B@vM, run composition, get result
Materialized views are optional checkpoints. Edges ARE the history.
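A toy sketch of replay from an edge record; the store, hole syntax, and `compose` function here are illustrative:

```python
# Versioned content store: (slug, version) -> content.
store = {
    ("A", "v2"): "Hello, ${name}!",
    ("B", "v1"): "world",
}

def compose(template, value):
    """Toy hole-fill: splice the value into the template's ${name} hole."""
    return template.replace("${name}", value)

# An edge records WHICH versions combined; the result can always be re-derived.
edge = {"template": ("A", "v2"), "value": ("B", "v1"), "path": "name"}

def replay(edge):
    return compose(store[edge["template"]], store[edge["value"]])

print(replay(edge))  # Hello, world!
```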
File Location
Implementation: loom/stack.py
This replaces both core/levels.py and layers/core.py as the canonical level system.
Edge Tracking at Each Level
Derived 2026-01-11
Every level transition creates edges. Edges are the audit trail.
The Six Levels
| Level | Name | What happens | Edge captures | When to record |
|---|---|---|---|---|
| L0 | Raw | File bytes on disk | Nothing - immutable base | Once on create |
| L1 | Tokens | Parsed into stream | Snip/splice coordinates | Every poke |
| L2 | Interpolated | ${...} holes filled | Value bindings (path → value) | Only if value changed |
| L3 | Composed | {{include:...}} and {{@label}} spliced | Include operations (what tokens came across) | Only if included content changed |
| L4 | Executed | Python fences run | Fence call + params + result hash | Every call (params are version key) |
| L5 | Rendered | Tokens → markdown/HTML | Middleware + input hash | Cache key = hash + params |
Edge Types by Level
L1-edge: # Structural mutation (the poke)
type: mutation
coordinate: {slug, path, position}
snipped: {content, hash}
spliced: {content, hash}
context: "why you changed it"
L2-edge: # Value binding (the fetch)
type: binding
hole: "${ana:character-data.emojis}"
value: "🐉⚡🔧"
source: "ana:character-data"
L3-edge: # Content composition (the include)
type: composition
include: "{{@edge-diff|slug=demo-todos}}"
tokens_hash: "abc123"
source_slug: "loom-tools"
L4-edge: # Computation (the execution)
type: execution
fence_label: "edge-diff"
params: {slug: "demo-todos"}
result_hash: "def456"
L5-edge: # Presentation (the render)
type: render
input_hash: "ghi789"
middleware: ["markdown", "highlight"]
output_hash: "jkl012"

Replay Semantics
L1 edges are sufficient to reconstruct any token state. Stack them on L0.
current_state = L0_base
for edge in L1_edges:
current_state = apply(current_state, edge.splice_at(edge.coordinate))

Verification
At any level: hash(replay(edges)) == hash(current_state)
If verification fails, something mutated outside the edge log. That's corruption.
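The verification check can be sketched with append-only string edges (a simplification of real coordinate splices):

```python
import hashlib

def h(s):
    return hashlib.sha256(s.encode()).hexdigest()

# L0 base plus two L1 edges; each edge here is a simple append splice.
base = "# Doc\n"
edges = [{"splice": "line one\n"}, {"splice": "line two\n"}]

def replay(base, edges):
    state = base
    for edge in edges:
        state = state + edge["splice"]
    return state

# Verification: hash(replay(edges)) must equal hash(current_state).
current_state = "# Doc\nline one\nline two\n"
print(h(replay(base, edges)) == h(current_state))  # True
```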
Cache Keys
| Level | Cache Key | TTL |
|---|---|---|
| L1 | slug + generation | ∞ (immutable) |
| L2 | slug + generation + context_hash | ∞ (deterministic) |
| L3 | slug + generation + includes_hash | ∞ (deterministic) |
| L4 | slug + fence + params_hash | configurable |
| L5 | input_hash + middleware_hash | configurable |
Provenance Query
Why does this look this way? Walk the edge chain backwards through all levels:
L5 rendered from → L4 executed from → L3 composed from → L2 interpolated from → L1 tokens from → L0 raw

Every transformation is accounted for. Double-entry all the way down.
KVQ Interface for Edges
Every level's edges are queryable via the same KVQ pattern:
# Query edges at any level
attend(
query="edges",
level=L2, # or L1, L3, L4, L5
filters={
"slug": "ana",
"from_gen": 0,
"to_gen": 10
}
) → edge_stream
# Query across levels
attend(
query="provenance",
filters={"slug": "ana", "generation": 5}
) → full_edge_chain # L0 through L5

The edge store is just another queryable surface. Same primitives, different data.
Provenance IS the ACL
Fence execution permissions are determined by provenance status:
| Status | Execution | Condition |
|---|---|---|
| 🟢 Verified | Global | Anyone can execute, no read required |
| 🟡 Reviewed | Contextual | Must read node first (capability follows recognition) |
| 🔴 Unverified | Blocked | Cannot execute |
Content changes trigger hash drift → auto-invalidates to 🔴 → requires re-verification.
def can_execute(observer, fence):
if fence.provenance == GREEN:
return True # Global capability
if fence.provenance == YELLOW:
return fence.slug in observer.visited # Must have read it
return False  # RED = blocked

Fences as Middleware Chains
A fence isn't "run this code" — it's "run this middleware chain":
source → transform → transform → transform → sink
(L3→L3)   (L3→L4)   (L4→L5)

Level constraints:
- Levels only go UP or STAY SAME (never down)
- Chain terminates at first unsupported level transition
- L3 (code) → L4 (data) → L5 (document) is one-way
Handler Registry (Scopes)
| Scope | When Active | Example |
|---|---|---|
| Global | Always | yaml, python, graphnode (built-ins) |
| Contextual | After visiting registering node | Custom handlers in toolkit nodes |
| Local | Only for that node | Node-specific logic |
Lifecycle Hooks
| Hook | When | Use Case |
|---|---|---|
| on-load | Navigate to node | Context loading, tool activation |
| on-close | Leave node / end session | Cleanup, state save |
| on-save | Node is saved | Provenance injection, validation |
Viruses (Forced Injection)
Some fences auto-inject at lifecycle points:
- Provenance auto-generates on save
- System-wide corrections
- Population-level fixes
Marked with auto-inject: true or registered globally at a lifecycle hook.
Global Fence Index
Fences live in a global KV store:
Key: slug:label or slug:position
Value: FenceEntry {
tokens, language, attrs, hash,
lifecycle: on-load | on-close | on-save,
provenance: RED | YELLOW | GREEN
}

First visit to a node indexes its fences globally.
Document
- Status: 🔴 Unverified
Changelog
- 2026-01-11 02:18: Node created by mcp - Deriving Oculus V2 from first principles using the fundamental algorithm
East
slots:
- slug: attention-driven-mind
context:
- V2 derives from attention-driven-mind insightsSouth
slots:
- slug: lottery-ticket-pattern
context:
- V2 uses lottery ticket engine for parallel pattern matching
- slug: loom-user-guide
context:
- Linking V2 first principles to Loom user guide
North
slots:
- slug: bedrock
context:
- V2 architecture derived directly from the mandatory algorithm - bedrock is the theoretical foundation