wanderland-twelve-technical
The Twelve Principles
Architecture for programmable attention and skill transfer
I. The Answer Already Exists
Large language models are not intelligence—they are terrain. Every possible output already exists, distributed across probability space. The model is the landscape; navigation through that space is the product.
Architecture: The trampoline layer (Layer 0) provides the probability surface. Layers 1-6 perform navigation.
II. Better Questions, Better Answers
Context shapes output through constraint satisfaction. A precise prompt prunes the probability graph, eliminating paths before traversal. Better framing yields exponentially better results without touching weights.
Architecture: The soul socket (Layer 1) is the primary context injection point. Query quality determines traversal efficiency.
III. Attention is the Product
The level parameter IS Q (query). The fence IS K (key). The result at that level IS V (value).
L3 = Code → Q: "what are you made of?" → V: fence definition
L4 = Data → Q: "what do you produce?" → V: executed result
L5 = Document → Q: "how do you present?" → V: rendered output
Same K (fence identity). Different Q (level). Different V (representation).
Scope maps to attention scope:
- Single fence = single head
- Single page = multi-head (all fences on page)
- Whole system = global attention over corpus
The cache IS persisted attention results. You don't re-attend to what you've already attended to.
Architecture: This isn't a system that uses attention. This IS attention—externalized, persistent, queryable. RAG as native attention, not retrieve-then-generate.
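The level-as-query mapping above can be sketched in a few lines. This is an illustrative toy, not the real API: `Fence`, `peek`, and the level names are assumptions standing in for the actual graph node machinery.

```python
from dataclasses import dataclass

@dataclass
class Fence:
    slug: str    # K: the stable fence identity
    source: str  # the code inside the fence

    def run(self):     # V at the data level
        return eval(self.source)

    def render(self):  # V at the document level
        return f"{self.source} → {self.run()}"

def peek(fence, level):
    # Q is the level, K is the fence identity, V is the representation.
    if level == "code":
        return fence.source
    if level == "data":
        return fence.run()
    if level == "document":
        return fence.render()
    raise ValueError(level)

f = Fence(slug="demo-fence", source="2 + 3")
print(peek(f, "code"))      # 2 + 3
print(peek(f, "data"))      # 5
print(peek(f, "document"))  # 2 + 3 → 5
```

One key, three queries, three values: the same fence answers differently depending on what you ask it.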
IV. Expertise Transfers
Skills are deltas: skill = expert_state - baseline_state. This difference vector can be captured from observation, stored compactly, and injected into any session via activation steering or soft prompts.
Architecture: Skill vectors live in the same embedding space as model activations. Injection is addition.
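The delta arithmetic is simple enough to show directly. The four-dimensional "states" below are made up for illustration; real skill vectors would be captured from model activations, but the capture-once, inject-by-addition shape is the same.

```python
baseline_state = [0.1, 0.4, 0.0, 0.2]   # pre-expertise activations (made up)
expert_state   = [0.9, 0.4, 0.6, 0.2]   # post-expertise activations (made up)

# Capture the skill once as a difference vector.
skill = [e - b for e, b in zip(expert_state, baseline_state)]

# Inject it into any fresh session: injection is addition.
new_session = [0.2, 0.1, 0.1, 0.5]
steered = [s + d for s, d in zip(new_session, skill)]
```

Note the dimensions where expert and baseline agree contribute zero to the delta: the skill vector only stores what changed, which is why it compresses well.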
V. Watch, Learn, Apply
Every poke accepts a context parameter. That context is logged, indexed, queryable. Not just WHAT changed, but WHY. Feed the intent log back into future decisions.
Architecture:
poke(slug, path, value, context="Adding task from sprint planning")
→ activity log captures: slug, path, old_value, new_value, context, timestamp
→ query the log: "show all pokes where context mentions 'sprint'"
→ cluster similar intents, extract patterns
The log IS the reasoning. Query it, learn from it, replay it.
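A minimal version of that flow fits in a dozen lines. The `graph` dict, the log schema, and the substring query are simplifying assumptions; the point is that `context` travels with every write and stays queryable.

```python
from datetime import datetime, timezone

graph = {"sprint-board": {}}
activity_log = []

def poke(slug, path, value, context=""):
    old = graph[slug].get(path)
    graph[slug][path] = value
    # Capture not just WHAT changed, but WHY.
    activity_log.append({
        "slug": slug, "path": path,
        "old_value": old, "new_value": value,
        "context": context,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

poke("sprint-board", "tasks", ["fix login"],
     context="Adding task from sprint planning")
poke("sprint-board", "owner", "ana", context="Assigning owner")

# "show all pokes where context mentions 'sprint'"
hits = [e for e in activity_log if "sprint" in e["context"].lower()]
```

From here, clustering similar contexts is an indexing problem on the log, not a change to the write path.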
VI. Identity is Portable
Everything interpolates. {{slug.path}} references resolve at render time. Change the referenced value; every document using it transforms.
Architecture:
# config node
environment: production
api_url: https://api.prod.example.com
# any document referencing {{config.environment}}
# automatically shifts when config changes
One linchpin document shifts entire document classes. Identity is a variable, not frozen state. Swap the variable, transform the behavior.
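A toy interpolator makes the cascade concrete. The regex and the `nodes` store are illustrative assumptions; what matters is that references resolve at render time, so a single write retargets every document.

```python
import re

nodes = {
    "config": {
        "environment": "production",
        "api_url": "https://api.prod.example.com",
    },
}

def render(template):
    # Resolve every {{slug.path}} reference at render time.
    def resolve(match):
        slug, path = match.group(1), match.group(2)
        return str(nodes[slug][path])
    return re.sub(r"\{\{(\w+)\.(\w+)\}\}", resolve, template)

doc = "Deploying to {{config.environment}} via {{config.api_url}}"
print(render(doc))  # Deploying to production via https://api.prod.example.com

nodes["config"]["environment"] = "staging"  # poke the linchpin value
print(render(doc))  # the same document now says "staging"
```

The document never changed; only the variable it points at did.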
VII. One Pattern, Everywhere
The universal signature: (name, arguments, context). That's it.
peek(name, args, context) → Read
poke(name, args, context) → Write
execute(name, args, context) → Run
Homoiconic: The markdown that describes the tool IS the tool. The prose documenting the fence lives in the same file as the fence. They're the same artifact. Change the documentation, you're changing the thing.
Fences call fences call fences. Same signature all the way down. Tools are tools are tools.
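A sketch of the uniform signature, with a toy node store standing in for the graph (the handlers and the `args` shapes are assumptions):

```python
nodes = {"greeting": {"text": "hello"}}

def peek(name, args, context=""):
    # Read: same (name, arguments, context) shape as everything else.
    return nodes[name][args["path"]]

def poke(name, args, context=""):
    # Write: same shape.
    nodes[name][args["path"]] = args["value"]

def execute(name, args, context=""):
    # Run: a fence calling another fence, same signature all the way down.
    return peek(name, {"path": args["path"]}, context).upper()

poke("greeting", {"path": "text", "value": "hi there"}, context="demo")
print(execute("greeting", {"path": "text"}, context="demo"))  # HI THERE
```

Because all three operations share one shape, composing them needs no adapters: the output of any call is valid input to any other.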
VIII. See It Before You Use It
You read about the tool ON the tool. They're the same file.
peek("my-tool-node")
→ Returns markdown with:
- What it does (prose)
- How to call it (the fence itself)
- Examples (more fences)
→ Read it to understand it
→ Execute it to use it
→ Same address, same content
Architecture: The thing you're reading about IS the thing itself. No separate documentation layer. No drift between description and behavior.
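The doc-is-the-tool idea can be demonstrated with one markdown string carrying both prose and fence. This is a deliberate simplification (the `node` content, the fence extraction, and `double` are all invented for illustration); the fence markers are built with string multiplication only to avoid nesting them inside this example.

```python
import re

FENCE = "`" * 3  # literal fence markers, constructed to avoid nesting here

# One artifact: the prose and the tool live in the same string.
node = f"""# my-tool-node

Doubles a number. Call it as double(n).

{FENCE}python
def double(n):
    return n * 2
{FENCE}
"""

def peek(markdown):
    return markdown  # read it to understand it

def execute(markdown):
    # Pull the fence out of the same artifact you just read, and run it.
    source = re.search(FENCE + r"python\n(.*?)" + FENCE, markdown, re.S).group(1)
    scope = {}
    exec(source, scope)
    return scope["double"](21)

assert "Doubles a number" in peek(node)
print(execute(node))  # 42
```

Edit the fence inside the prose and the tool's behavior changes with it: there is no second copy to fall out of sync.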
IX. Same Process, Every Scale
Git-like architecture:
Local graph (your laptop)
↓ sync
Team graph (shared repo)
↓ sync
Org graph (central)
↓ query fan-out
Every node in parallel → results merge
Routing is the only knob:
- Want to deep-dive every document? Pay the compute to parallelize.
- Want to go deep on one path? Pay the TTL to wait.
Serverless-native: Stateless execution. Every node can respond independently—fan-out queries hit the entire corpus in parallel, fan-in merges results. Compute elasticity without coordination overhead.
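The fan-out/fan-in shape can be sketched with a thread pool standing in for serverless workers. The corpus and the substring match rule are illustrative assumptions; the property being shown is that each node answers from its own content alone, so the query parallelizes with no coordination.

```python
from concurrent.futures import ThreadPoolExecutor

corpus = {
    "infra-docs":   "kubernetes rollout checklist",
    "onboarding":   "first-week kubernetes setup",
    "postmortem-7": "database failover timeline",
}

def query_node(item, term):
    slug, text = item
    # Stateless: a node needs only its own content to respond.
    return slug if term in text else None

def fan_out(term):
    with ThreadPoolExecutor() as pool:
        hits = pool.map(lambda item: query_node(item, term), corpus.items())
    return sorted(h for h in hits if h)  # fan-in: merge the results

print(fan_out("kubernetes"))  # ['infra-docs', 'onboarding']
```

Swapping the thread pool for actual serverless invocations changes the transport, not the shape.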
X. The System Improves Itself
Tool composition: Create a fence → it's a tool. Compose tools → it's a tool. Three levels deep is sufficient for arbitrary capability.
Tool → calls Tool → calls Tool
Middleware: Every fence call is chainable. Data comes out in the same shape; middleware transforms it differently depending on who's asking. Same source, different views.
Event sourced: The graph is the source of truth. Every poke is an event. Every read is a projection. The log IS the database.
Distributed: Built for teams to share data without constant fetching. Local copies, sync when needed, fan-out when querying. Your graph is your cache; central is the ledger.
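"The log IS the database" reduces to a fold. A minimal event-sourced sketch, with invented event shapes and a naive replay (a real system would snapshot rather than replay from zero on every read):

```python
events = []  # source of truth: every poke is appended here, never mutated

def poke(slug, path, value):
    events.append({"slug": slug, "path": path, "value": value})

def project():
    # A read is a projection: fold the whole log into current state.
    state = {}
    for e in events:
        state.setdefault(e["slug"], {})[e["path"]] = e["value"]
    return state

poke("roadmap", "status", "draft")
poke("roadmap", "status", "final")
print(project())  # {'roadmap': {'status': 'final'}}
```

Because state is derived rather than stored, syncing a graph between laptop, team, and org is just shipping log entries and re-projecting locally.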
XI. Value is Contrast
All documents sit behind the same interface. The same attention heads see everything. Same peek/poke/execute on every node.
Architecture: Uniform access makes contrast visible. Pattern finding isn't a feature—it's a natural consequence of consistent query semantics across the corpus. Use the system; patterns emerge. Insights are exhaust, not product.
XII. Distance is Opportunity
Scan entire knowledge space in one query. Fan-out hits every node. Model finds connections human traversal would miss.
Query fans out → hits every node → model finds connection
→ "These three things relate in a way you never noticed"
Architecture: Parallel query finds the paths you'd never think to walk. Infrastructure docs linked to onboarding guide linked to incident postmortem. Patterns even you haven't seen. That's where asymmetric returns live.
The Stack
┌─────────────────────────────────────────────────────────────┐
│ execute (run fences, virtual fences, labeled tools) │
├─────────────────────────────────────────────────────────────┤
│ poke (write with context, interpolation, cascading) │
├─────────────────────────────────────────────────────────────┤
│ peek (read at any level: code/data/document) │
├─────────────────────────────────────────────────────────────┤
│ Graph (nodes, links, fences, virtual fences) │
├─────────────────────────────────────────────────────────────┤
│ Foundation Model (the landscape) │
└─────────────────────────────────────────────────────────────┘
Three operations. Everything else emerges.
Each document node is a programmable attention head. Queries fan out across the graph—embarrassingly parallel. Every fence can be queried independently at any level; results compose. Organization-wide attention with no coordination overhead.
Model is landscape. Navigation is product. Tools compose. Identity swaps. System improves. One algorithm.
Provenance
Document
- Status: 🔴 Unverified
Changelog
- 2026-01-09 21:39: Node created by mcp - Creating technical version of twelve principles - Full architecture details
West
slots:
- slug: wanderland-twelve-executive
context:
- Linking executive to technical as siblings
- slug: wanderland-twelve-user
context:
- Cross-linking all twelve docs
East
slots:
- slug: wanderland-twelve-grounding
context:
- Linking technical to grounding as siblings
North
slots:
- slug: wanderland-twelve
context:
- Linking parent to technical variant