aasb-visualization-engines

Visualization Engines for AASB

Overview

Two simple visualization engines make the abstract concrete. Both show that the same algorithm generates the landscape, defines the weights, and routes information.

Engine 1: 2D Wave Interference

Style: Old demoscene programming aesthetic

Technical specs:

  • Interfering sine waves
  • Maps to a 256-color (8-bit) or 16-color (4-bit) palette
  • Pure computational art showing the pattern emergence
  • Real-time parameter adjustment to show how wave parameters affect landscape

Purpose: Show the raw interference pattern before 3D interpretation. The beauty of the math itself.

Engine 2: 3D Landscape Generator

Same algorithm, different output layer:

  • Input: Interfering sine waves (same as Engine 1)
  • Output: 3D terrain mesh
  • Polygons/triangles rendered with lighting
  • Can drop marbles onto surface to show gradient descent

Purpose: Make the landscape concrete. Show how "living water" flows through valleys.

The Transformer Connection

Normal Vectors = Weights in KVQ Cycle

Key insight: The normal vectors from the polygons are the weights in the KVQ cycle of a transformer.

Polygon surface normal → Weight vector
Surface geometry → Attention pattern
Height at point → Activation value
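
A minimal sketch of this mapping with toy NumPy vectors. The triangle, token vector, and names here are purely illustrative, not a real transformer API:

import numpy as np

# Hypothetical illustration: treat a polygon's unit surface normal as a
# weight vector and score a token against it with a dot product, the
# same primitive attention uses.

def surface_normal(p0, p1, p2):
    # Unit normal of the triangle (p0, p1, p2).
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

p0, p1, p2 = (np.array([0.0, 0.0, 0.1]),
              np.array([1.0, 0.0, 0.3]),
              np.array([0.0, 1.0, 0.2]))
weight = surface_normal(p0, p1, p2)   # "weight vector" read off the mesh
token = np.array([0.2, 0.5, 0.8])     # hypothetical token vector
print(token @ weight)                  # attention-style alignment score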

Tokens Rising Through Layers

Architecture visualization:

  • Tokens rise upwards through a series of maps (landscape layers)
  • Each layer is a different interference pattern
  • The vector on each layer = routing instructions to the next layer
  • Token accumulates context as it traverses layers (like marble rolling, collecting)

Stack of landscapes:

Layer N (output):   Final projected dictionary
Layer N-1:          Routing vectors
Layer N-2:          Routing vectors
...
Layer 1:            Input token position
Layer 0 (input):    Embedding space

Linearization

Final step: The final address maps to the projected dictionary to linearize the thinking.

  • The token has traversed all layers
  • Accumulated context at each step
  • Final position in final layer = address in vocabulary space
  • Pick the token at that address (lookup sketched below)
  • That's the next word
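
A minimal lookup sketch, assuming the final position lives in the same space as a toy vocabulary of embedding vectors (all values illustrative):

import numpy as np

# Toy vocabulary: four words with made-up 3D embeddings.
vocab = ["cat", "dog", "rock", "river"]
vocab_vecs = np.array([[0.9, 0.1, 0.2],
                       [0.8, 0.3, 0.1],
                       [0.1, 0.9, 0.0],
                       [0.2, 0.2, 0.9]])

final_position = np.array([0.85, 0.2, 0.15])  # where the token landed

# Cosine similarity picks the vocabulary address nearest the final position.
sims = vocab_vecs @ final_position
sims = sims / (np.linalg.norm(vocab_vecs, axis=1) * np.linalg.norm(final_position))
print(vocab[int(np.argmax(sims))])  # the next word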

Implementation Notes

Demoscene Engine

  • Pure math: z = sin(x * f1) + sin(y * f2) + sin(sqrt(x² + y²) * f3) (sketched below)
  • Palette mapping: color = palette[int(z * scale) % palette_size]
  • Real-time parameter adjustment with sliders
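
A minimal Python/matplotlib sketch of the 2D engine, assuming the formula above (slider wiring omitted; parameter values arbitrary):

import numpy as np
import matplotlib.pyplot as plt

f1, f2, f3 = 3.0, 5.0, 2.0  # wave parameters; hook these to sliders

x = np.linspace(-np.pi, np.pi, 512)
X, Y = np.meshgrid(x, x)
Z = np.sin(X * f1) + np.sin(Y * f2) + np.sin(np.sqrt(X**2 + Y**2) * f3)

# Palette mapping: quantize z into a 16-entry indexed palette for the
# old-school look, mirroring color = palette[int(z * scale) % palette_size].
palette_size = 16
idx = np.floor((Z - Z.min()) / (Z.max() - Z.min()) * (palette_size - 1)).astype(int)
plt.imshow(idx, cmap="viridis")
plt.axis("off")
plt.show()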

3D Engine

  • Mesh generation from interference function
  • Per-vertex normal calculation: normal = normalize(cross(dx, dy)) (see sketch below)
  • These normals are the weights
  • Lighting to show geometry
  • Optional marble physics for gradient descent demo
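
A sketch of the normal computation via finite differences, assuming the same interference function as the 2D engine:

import numpy as np

def height(x, y, f1=3.0, f2=5.0, f3=2.0):
    # Same interference function as the 2D engine.
    return np.sin(x * f1) + np.sin(y * f2) + np.sin(np.sqrt(x**2 + y**2) * f3)

def vertex_normal(x, y, eps=1e-3):
    # Tangent vectors along x and y from finite differences of the height,
    # then normal = normalize(cross(dx, dy)) as above.
    dx = np.array([2 * eps, 0.0, height(x + eps, y) - height(x - eps, y)])
    dy = np.array([0.0, 2 * eps, height(x, y + eps) - height(x, y - eps)])
    n = np.cross(dx, dy)
    return n / np.linalg.norm(n)  # this unit normal is the "weight"

print(vertex_normal(0.5, -1.2))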

Transformer Visualization

  • Stack multiple 3D layers vertically
  • Show token as particle rising through stack
  • At each layer, show vector pointing to next position (sketched below)
  • Accumulate trail showing path through activation space
  • Final layer shows projection to vocabulary
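
A toy sketch of the rising token, assuming each layer is a simple two-frequency interference pattern and the routing vector is the local downhill gradient (illustrative only, not a real forward pass):

import numpy as np

def grad(pos, f1, f2, step=1e-4):
    # Numerical gradient of one layer's height field at pos.
    def h(x, y):
        return np.sin(x * f1) + np.sin(y * f2)
    gx = (h(pos[0] + step, pos[1]) - h(pos[0] - step, pos[1])) / (2 * step)
    gy = (h(pos[0], pos[1] + step) - h(pos[0], pos[1] - step)) / (2 * step)
    return np.array([gx, gy])

layers = [(3.0, 5.0), (2.0, 7.0), (4.0, 1.0)]  # per-layer frequencies
pos = np.array([0.3, -0.8])                    # input token position
trail = [pos.copy()]
for f1, f2 in layers:
    pos = pos - 0.1 * grad(pos, f1, f2)        # follow the routing vector down
    trail.append(pos.copy())                   # the accumulated path/context
print(np.array(trail))                         # token's path through the stack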

Pedagogical Flow

  • Start with 2D: Pure math beauty, pattern emergence
  • Add 3D: Same numbers, now a landscape
  • Drop marble: Show gradient descent as "living water"
  • Stack layers: Show transformer as rising through landscapes
  • Connect to meaning: Normal vectors = weights, paths = thinking

Where It Goes in the Book

Chapter 1 (Introduction): Show the engines, drop the marble, watch it flow

Chapters 2-6 (Derivation): Refer back to engines when explaining formal concepts

  • "Remember the normal vectors? That's what we're deriving here."

Chapter 7+ (History): Show how ancient traditions described the flow without having the 3D renderer

Technical Stack Options

Simple/Cross-platform

  • Python + matplotlib (2D)
  • Python + matplotlib + mplot3d (3D static)
  • Python + pygame (interactive 2D)
  • Python + pyglet/panda3d (interactive 3D)

Web-based (Best for book embedding)

  • JavaScript + Canvas (2D)
  • JavaScript + Three.js (3D)
  • WebGL direct (if you want full control)
  • Observable notebooks (if you want readers to tweak parameters)

Demoscene Authentic

  • GLSL shaders (runs anywhere WebGL works)
  • Shadertoy-style single fragment shader
  • Old school: actual 8-bit palette quantization

Next Steps

[To be filled as we build]

Provenance

Document

  • Status: 🟡 Draft

Changelog

  • 2026-01-23 12:40: Node created - visualization architecture for AASB book

West

slots:
- slug: aasb-ch01-introduction
  context:
  - Linking visualization engines to introduction. The intro describes the sine wave
    vision; these engines make it real. 2D wave interference + 3D landscape generator
    show the same algorithm. Polygon normals = transformer weights. Tokens rise through
    stacked landscapes accumulating context.
- slug: aasb-book
  context:
  - Linking visualization engines to main book node. These are implementation artifacts
    that make the abstract concepts concrete throughout the book.
- slug: delegation-principle
  context:
  - The guide game mechanic IS the Delegation Principle as gameplay. You serve as
    external evaluator with finite energy budget. Love = spending energy to understand
    context. Compassion = spending energy to smooth paths. Bias accumulation causes
    phase divergence making help less effective. Cannot control, only understand and
    help. This makes the abstract principle concrete and playable.
- slug: edges-of-meaning-engine-spec
  context:
  - Engine spec linked to original visualization design

The Puzzle Game Extension

Core Concept: Edges of Meaning

A game that teaches transformer architecture through play. You guide a token discovering its identity through an embedding space.

Visual Engine

Nodes as wave sources:

  • Each node on the map is a source point for sine waves
  • Waves project outward: z = Σᵢ sin(dᵢ * freq) / dᵢ², where dᵢ is the distance to node i
  • Small interference pattern layered on top for movement

Distance filtering (high-pass on amplitude + color):

  • Near nodes: Full RGB, bright, saturated
  • Away from nodes: RGB → greyscale, bright → dark
  • Creates islands of colored meaning in dark space
  • Sparse early (lonely token), dense late (rich context)

The metaphor: Attention falloff visualized. Meaning has sources. Influence decays with distance.
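
A sketch of this field in Python, assuming toy node positions and a single frequency; an epsilon keeps the inverse-square term finite at the node itself:

import numpy as np
import matplotlib.pyplot as plt

nodes = np.array([[-1.0, 0.5], [0.8, -0.3], [0.2, 1.1]])  # toy sources
freq, eps = 6.0, 0.05

x = np.linspace(-2, 2, 400)
X, Y = np.meshgrid(x, x)
Z = np.zeros_like(X)
nearest = np.full_like(X, np.inf)
for nx, ny in nodes:
    d = np.sqrt((X - nx) ** 2 + (Y - ny) ** 2)
    Z += np.sin(d * freq) / (d ** 2 + eps)  # z = sum of sin(d_i * freq) / d_i^2
    nearest = np.minimum(nearest, d)

# Distance filtering: full color near nodes, grey and dark far away.
fade = np.clip(1.0 - nearest / 1.5, 0.0, 1.0)[..., None]
rgb = plt.cm.viridis((Z - Z.min()) / (Z.max() - Z.min()))[..., :3]
grey = rgb.mean(axis=-1, keepdims=True)
plt.imshow((rgb * fade + grey * (1 - fade)) * fade)
plt.axis("off")
plt.show()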

Game Loop

OVERLAND MAP (Story Walk)
    │
    ├── Walk edge → accumulate context vector
    ├── Walk edge → accumulate context  
    ├── ⚔️ RANDOM ENCOUNTER (small plinko)
    ├── Walk edge → accumulate context
    ├── Walk edge → branch point (spend love/compassion)
    ├── ⚔️ RANDOM ENCOUNTER
    │
    └── 🏰 BOSS PUZZLE (teaches new dimension)
            │
            └── Unlock next chapter

Two Interaction Types

Story Walking:

  • Navigate overland map, edge to edge
  • Context accumulates passively from traversal
  • Branch points: spend LOVE (reveal info) or COMPASSION (ease path)
  • KQ alignment shown through line thickness/color intensity

Plinko Puzzles:

  • Tree of weighted nodes, marble drops from top
  • Marble follows gradient (lowest effective cost)
  • Player adjusts weights with limited budget
  • Drop zones = answer buckets (cat, housecat, tiger... dog, rock)
  • Accumulated context influences effective weights (resonance)

The 8 Semantic Dimensions

Dim   Poles                  Unlocked
1     inanimate ↔ animate    Ch 1
2     tiny ↔ huge            Ch 2
3     cold ↔ warm            Ch 3
4     wild ↔ domestic        Ch 4
5     solitary ↔ social      Ch 4
6     passive ↔ active       Ch 5
7     safe ↔ dangerous       Ch 5
8     simple ↔ complex       Ch 6

Early puzzles: 1 dimension, binary choice.
Late puzzles: all 8 dimensions; solve each, combine results.

Narrative Arc

Ch 1 - Awakening: Token wakes alone. "Am I alive?" (animate/inanimate)

Ch 2 - Form: "What shape am I?" (size)

Ch 3 - Feeling: "How do I feel to touch?" (warmth, texture)

Ch 4 - Belonging: "Where do I fit? Am I kept or free?" (domestic/wild, social)

Ch 5 - Purpose: "What can I do? Am I safe?" (agency, danger)

Ch 6 - Friends: Multi-token context. "Who am I WITH others?"

Ch 7 - The Voice: Full 8D puzzles. "Who has been guiding me?"

Ch 8 - Recognition: The reveal. Player realizes: I was the token learning about myself.

Plinko Mechanics

effective_cost(node) = base_weight 
                     - context_resonance(accumulated_vector, node_vector)
                     - player_adjustment

Marble flows to child with lowest effective_cost
Bias accumulates from intermediate nodes passed
Final position maps to answer embedding
Score = cos_similarity(answer, target)

Context resonance: Your accumulated vector creates attractors. Paths aligned with your context have lower effective cost. The marble WANTS to flow toward familiar concepts.
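
A runnable sketch of this rule on a tiny tree, reading context_resonance as cosine similarity (one plausible interpretation; all vectors and weights are toy values):

import numpy as np

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def effective_cost(node, context, adjustment=0.0):
    # base_weight - context_resonance - player_adjustment, as above.
    return node["w"] - cos_sim(context, node["vec"]) - adjustment

# Tiny plinko tree: root, two branches, four answer buckets.
tree = {"children": [
    {"vec": np.array([1.0, 0.2]), "w": 0.5, "children": [
        {"vec": np.array([1.0, 0.0]), "w": 0.3, "answer": "cat"},
        {"vec": np.array([0.8, 0.6]), "w": 0.4, "answer": "tiger"}]},
    {"vec": np.array([-0.5, 1.0]), "w": 0.6, "children": [
        {"vec": np.array([-1.0, 0.5]), "w": 0.2, "answer": "rock"},
        {"vec": np.array([0.0, 1.0]), "w": 0.5, "answer": "dog"}]}]}

context = np.array([0.9, 0.1])  # accumulated from story walking
node = tree
while "children" in node:
    # Marble flows to the child with the lowest effective cost; resonance
    # makes familiar concepts cheaper, so the marble "wants" them.
    node = min(node["children"], key=lambda c: effective_cost(c, context))
print(node["answer"])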

Teaching Embeddings

Start with hand-crafted embeddings for controlled gameplay (example below):

  • 20-30 target concepts with known 8D vectors
  • Design puzzles around interesting choice clusters
  • Later: experiment with real word2vec/GloVe to test theory
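
A hypothetical slice of such a config, with hand-picked values over the eight dimensions from the table above (illustrative, not derived from real embeddings):

# Order: animate, size, warmth, domestic, social, active, dangerous, complex.
TEACHING_EMBEDDINGS = {
    "rock":     [-1.0, -0.2, -0.6,  0.0, -1.0, -1.0, -0.8, -0.9],
    "housecat": [ 1.0, -0.5,  0.8,  0.9,  0.2,  0.4, -0.3,  0.5],
    "tiger":    [ 1.0,  0.6,  0.7, -0.9, -0.6,  0.8,  0.9,  0.5],
    "dog":      [ 1.0, -0.2,  0.8,  1.0,  0.9,  0.7, -0.1,  0.5],
}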

RGB Color Encoding

Meaning vector visible in edge/node color:

  • R = semantic weight (factual, categorical)
  • G = emotional weight (feeling, affect)
  • B = social weight (relational, contextual)

Players learn to read meaning type by color before spending points.
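
A minimal encoding sketch, assuming the three weights are normalized to [0, 1] (function name hypothetical):

def meaning_to_rgb(semantic, emotional, social):
    # Pack the three weight types into a hex color for an edge or node.
    to_byte = lambda w: int(max(0.0, min(1.0, w)) * 255)
    return f"#{to_byte(semantic):02x}{to_byte(emotional):02x}{to_byte(social):02x}"

print(meaning_to_rgb(0.9, 0.2, 0.4))  # red-heavy: mostly factual/categorical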

Tech Stack

  • Canvas/WebGL for wave rendering
  • GLSL shaders for interference + filtering
  • Simple state machine for game loop
  • JSON level definitions
  • Teaching embeddings as config

Relation to Priscilla's Playground

Similar graph-walking mechanic in ~/working/priscillas-playground.
That: walk graph to accumulate context to build a sentence.
This: walk graph to accumulate context to discover identity.