pattern-looking-glass-development
Looking Glass Development
Summertime and the livin's easy
And Bradley's on the microphone w/ RAS MG
All the people in the dance will agree
That we are well qualified to represent the LGD
— Sublime (adapted)
"In Wonderland, the reflection validates reality."
A development methodology where documentation, tests, and implementation are unified in a single living artifact. The tests ARE the documentation. The documentation PROVES itself.
┌─────────────────────────────────────────────┐
│ LOOKING GLASS DEVELOPMENT │
├─────────────────────────────────────────────┤
│ │
│ 1. PROCLAIM → Document the feature │
│ 2. WITNESS → Embed tests in docs │
│ 3. RECKON → See all tests fail │
│ 4. FULFILL → Code until green │
│ 5. LIVE → Artifact stays current │
│ │
│ "The reflection validates reality" │
│ │
└─────────────────────────────────────────────┘
Slots
North
slots:
- patterns
South
slots:
- cmd-poke
- pipeline-middleware
- harness-vitest
- harness-pytest
- harness-rspec
- code-window
East
slots:
- pattern-embedded-tests
- fence-graphnode-test
West
slots:
- case-the-provenance-stoplight
- pattern-propagation-automata
- cache-ttl-configuration
- oculus-tagging-taxonomy
The Pattern
LGD uses a two-node pattern: a documentation node that embeds live test results from a companion test node.
Documentation Node
The documentation node contains your feature documentation with a graphnode fence that pulls in live test results:
# My Feature
Description of the feature...
## Verification
```graphnode:my-feature-test:table
TableConfig:
  array_path: tests
  columns:
    Test: name
    Status: status
  format: markdown
```

The `graphnode:slug:table` fence executes the test node's `## fetch` section and renders the output as a table via the TableExtractor middleware.
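To see what the table rendering amounts to, here is a minimal sketch of extracting an array from a result dict and emitting a markdown table. The `render_table` helper is hypothetical, written only to illustrate how `array_path` and the `columns` header-to-key mapping are used; the real TableExtractor middleware may work differently.

```python
# Hypothetical sketch of graphnode table extraction.
# `array_path` selects a list inside the result dict; `columns` maps
# table headers to keys on each list item.
def render_table(result, array_path, columns):
    rows = result.get(array_path, [])
    headers = list(columns.keys())
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for row in rows:
        cells = (str(row.get(key, "")) for key in columns.values())
        lines.append("| " + " | ".join(cells) + " |")
    return "\n".join(lines)

table = render_table(
    {"tests": [{"name": "test_add", "status": "✅"}]},
    array_path="tests",
    columns={"Test": "name", "Status": "status"},
)
print(table)
```

With the config from the fence above, each item in `result["tests"]` becomes one row, with the `Test` column reading `name` and the `Status` column reading `status`.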
Test Node (Graphnode)
The test node must follow the standard graphnode anatomy (see [[pattern-graphnode-anatomy]]):
- `## config` section - YAML with default parameters (required, even if just `path: null`)
- `## fetch` section - Python fence with `[execute=true]` that stores output in the `result` variable
# my-feature-test
## config
```yaml
path: /path/to/project
test_file: tests/test_feature.py
```

## fetch

```python[execute=true]
import subprocess
import re

# Run tests (config dict is injected automatically)
proc = subprocess.run(
    ['python', '-m', 'pytest', config.get('test_file'), '-v', '--tb=no'],
    cwd=config.get('path'),
    capture_output=True,
    text=True,
    timeout=60
)

# Parse results
tests = []
for line in proc.stdout.split('\n'):
    match = re.match(r'.*::(\w+)\s+(PASSED|FAILED)', line)
    if match:
        name, status = match.groups()
        tests.append({
            "name": name,
            "status": "✅" if status == "PASSED" else "❌"
        })

passed = sum(1 for t in tests if t['status'] == '✅')

# THE CONTRACT: Store output in `result` variable
result = {
    "tests": tests,
    "summary": {
        "total": len(tests),
        "passed": passed,
        "failed": len(tests) - passed,
        "success_rate": f"{(passed/len(tests)*100):.0f}%" if tests else "0%"
    }
}
```

Linking
- Documentation node links south to test node
- Test node links north to documentation node
- Test node is tagged `pattern:lgd`
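The parsing logic from the `## fetch` section can be exercised on its own, without running pytest, by feeding it a few sample lines of `pytest -v` output. The sample text below is illustrative, not from a real run:

```python
import re

# Illustrative sample of `pytest -v` output lines
sample = """\
tests/test_feature.py::test_create PASSED
tests/test_feature.py::test_delete FAILED
tests/test_feature.py::test_update PASSED
"""

tests = []
for line in sample.split('\n'):
    # Same pattern as the fetch section: capture test name and verdict
    match = re.match(r'.*::(\w+)\s+(PASSED|FAILED)', line)
    if match:
        name, status = match.groups()
        tests.append({"name": name, "status": "✅" if status == "PASSED" else "❌"})

passed = sum(1 for t in tests if t['status'] == '✅')
print(f"{passed}/{len(tests)} passed")  # 2/3 passed
```

Testing the parser in isolation like this is useful before wiring it into the graphnode, since a regex mismatch would otherwise surface only as an empty table in the documentation node.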