Why - The Philosophy Behind the Architecture

How This System Thinks

This system is designed around spatial, structural, and relational patterns rather than linear/sequential ones.

Shape-First, Not List-First

The structure of things is the primary organizing principle. We don't see:

item1, item2, item3...

We see:

┌─ container
│  ├─ category_a
│  │  ├─ item1
│  │  └─ item2
│  └─ category_b
│     └─ item3

The topology matters more than the sequence.

Location = Meaning

Things should be defined where they belong in the conceptual space, not in a separate lookup table.

  • ❌ "Here's a list of parameters, each with a path string pointing to where it goes"
  • ✅ "The parameter IS at the location in the graph where it will be used"

This is why, throughout the system, the rule is the same: define the shape, and put what you want at the edge where it belongs.
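
As a rough sketch of the difference (the names below are illustrative, not the actual schema):

# Lookup-table style: a separate list of path strings pointing elsewhere
params_by_path = [
    {"path": "puppy.say", "prompt": "What should puppy say?"},
]

# In-place style: the parameter sits at its location in the graph
graph = {
    "puppy": {
        "say": {"_prompt": "What should puppy say?"},
    },
}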

Emacs Org Mode Thinking

The architecture mirrors Org Mode's tree-structured thinking. Everything is:

  • Hierarchical - headings within headings, nodes within nodes
  • In-place - properties attached to nodes, not in separate tables
  • Navigable by structure - folding collapses subtrees, refiling moves them

Org mode patterns appear throughout:

  • Hidden headers with toggle = org-cycle visibility
  • Chest item management = org-refile
  • Parameters with prompts = org-capture templates
  • Variable substitution = org-mode properties

Graph-Native

This is a graph and network system, not lists and tables:

  • Nodes have properties at their location
  • Edges connect related concepts
  • Navigation is spatial (cardinal directions, graph traversal)
  • Memory is positional (you remember WHERE things are)

This is why:

  • Oculus navigation is spatial (north/south/east/west)
  • Amygdala patterns mirror data shape exactly
  • Detective cases are graph nodes, not database rows
  • Parameters are shaped, not serialized
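
A minimal sketch of what that implies for a single node (the field names here are illustrative assumptions, not the real schema):

# A node carries its own properties and its spatial edges, in place
node = {
    "_type": "case",                   # property attached where it is used
    "notes": "deploy-prod failing",
    "north": "jenkins-dashboard",      # cardinal edges for spatial navigation
    "east": "deploy-prod-history",
}

def go(node, direction):
    # Navigation is graph traversal: follow the named edge if it exists
    return node.get(direction)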

Simultaneous, Not Sequential

Problem-solving happens through parallel pattern matching. The system:

  • Sees both structures at once
  • Identifies where they overlap
  • Checks matches in place

Not: "First do A, then B, then compare."

Instead: "Walk them together, check as you go."

This is DFS traversal - walk both graphs simultaneously, match at each node.
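
A minimal sketch of that walk in Python (not the actual implementation): descend into both structures together and check each node as you pass it.

def walk_match(data, pattern):
    # DFS both trees at once; a mismatch at any node fails the whole match
    if isinstance(pattern, dict):
        if not isinstance(data, dict):
            return False
        # Every key the pattern cares about must be present and match in place
        return all(key in data and walk_match(data[key], sub)
                   for key, sub in pattern.items())
    # Leaf node: check the value right where it lives (exact match in this sketch)
    return data == pattern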

Composition Over Configuration

The system prefers:

  • Templates that compose - ${var} pulls from anywhere in the graph
  • Shapes that nest - watchers contain patterns, which contain triggers
  • Self-referential systems - graph watches itself, publishes itself

Not:

  • Flat config files with all options listed
  • Explicit orchestration layers
  • Centralized controllers

The Pattern Is The Interface

Structure should not be described separately - the structure should be the description.

# We avoid this (describing structure)
parameters:
  - path: "a.b.c"

# We prefer this (structure IS the thing)
a:
  b:
    c: ...

The shape is self-evident. No translation layer needed.

Why Serialization Is Avoided

Serialization breaks spatial relationships.

When you serialize a.b.c as a string:

  • You've flattened a three-level structure into a one-dimensional string
  • You've created a reference to a location instead of being at the location
  • You need a parser to reconstruct the shape
  • The structure is invisible until you parse it

Instead: just put it where it belongs in the first place.

The Design Philosophy

"Everything should be where it belongs in the shape, with metadata at the edges."

This is why:

  • ✅ Amygdala constraints mirror data shape exactly
  • ✅ Parameters defined at their usage location in the graph
  • ✅ Components have properties where they're used (_open, _type)
  • ✅ Cross-references use graph paths, not IDs
  • ✅ Watchers live at the nodes they monitor

The Universal Pattern

Throughout the entire system:

  • Amygdala: Shape + _constraints at constraint nodes
  • Parameters: Shape + _prompt, _type, _choices at parameter nodes
  • Components: Shape + _open, _size_limit at component nodes
  • Watchers: Shape + _enabled at watcher nodes

Define the shape where it belongs in the graph, add underscore metadata at the edges.
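
One way to picture that convention (a sketch, not the real code): when visiting a node, keys that start with an underscore are metadata at that edge; everything else is shape.

def split_node(node):
    # Separate underscore metadata from the structural children of a node
    meta = {k: v for k, v in node.items() if k.startswith("_")}
    shape = {k: v for k, v in node.items() if not k.startswith("_")}
    return meta, shape

meta, shape = split_node({"_open": False, "_size_limit": 50, "contents": ["ball", "bone"]})
# meta  == {"_open": False, "_size_limit": 50}
# shape == {"contents": ["ball", "bone"]}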

Why This Matters

This is a spatial memory system for machines, one that matches how graph-native thinking organizes information.

Most systems are built by sequential thinkers for sequential computers. This system is built for graph-native thinking:

  • Memory is spatial (hippocampus stores locations)
  • Attention is navigational (move through graph space)
  • Patterns match by shape (amygdala compares topologies)
  • Actions are contextual (available based on where you are)

This is not a relational database. This is a graph database for consciousness.

Core Principles

  • Shape mirrors intent - Structure shows meaning without explanation
  • Location is semantic - Where something is defines what it means
  • Composition is natural - Parts combine without glue code
  • Navigation is spatial - Movement through concept space, not file hierarchy
  • Patterns are structural - Match by shape, not by string comparison
  • Metadata lives in-place - Properties attached where they're used

Examples in Practice

Amygdala Pattern Matching

# Data shape
jenkins:
  failure_count: 5
  pipeline: deploy-prod

# Constraint shape (mirrors exactly!)
jenkins:
  failure_count: ">= 3"
  pipeline: "contains 'prod'"

DFS walk both simultaneously. Shape match = pattern match.
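
Continuing the sketch from earlier, the leaf check could evaluate constraint strings like the two above (only ">= 3" and "contains 'prod'" appear in this example; the full constraint grammar is an assumption):

def check_leaf(value, constraint):
    # Evaluate a constraint string against the value found at the same location
    if isinstance(constraint, str) and constraint.startswith(">="):
        return value >= float(constraint[2:])
    if isinstance(constraint, str) and constraint.startswith("contains"):
        needle = constraint.split("'")[1]    # pull the quoted text, e.g. 'prod'
        return needle in value
    return value == constraint

check_leaf(5, ">= 3")                         # True
check_leaf("deploy-prod", "contains 'prod'")  # True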

Parameters

# Parameter definition (shaped)
parameters:
  puppy:
    say:
      _prompt: "What should puppy say?"
      _type: text

# Usage (same shape!)
workflow:
  args:
    message: ${params.puppy.say}

No serialization. Location maps directly.
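
A rough sketch of how a ${...} reference could resolve by walking that same shape (illustrative only; the real substitution mechanism is not shown here, and "woof" is a made-up value):

import re

def resolve(template, graph):
    # Replace ${a.b.c} with the value found by walking a -> b -> c in the graph
    def lookup(match):
        node = graph
        for key in match.group(1).split("."):
            node = node[key]
        return str(node)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)

graph = {"params": {"puppy": {"say": "woof"}}}
resolve("message: ${params.puppy.say}", graph)   # -> "message: woof"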

Hidden Chests

# Chest header (in-place)
## _chest:toy-box

contents:
  - ball
  - bone

_open: false
_size_limit: 50

Properties where they belong. Toggle visibility in place.

The Result

A system where:

  • You navigate to concepts, not files
  • Patterns match by shape, not regex
  • Parameters live where they're used, not in config files
  • The graph watches and publishes itself, no orchestrator needed
  • Structure is self-documenting, no separate schema

Graph-native architecture for graph-native thinking. 🧠✨

See Also

  • tutorial-room - Interactive introduction to the system
  • amygdala-intro - Pattern matching by shape
  • alice-in-wanderland - Navigation philosophy