# Oculus Testing Guide

Comprehensive guide to testing strategies in the Oculus consciousness architecture.

## Testing Philosophy

**Living Documentation**: Tests that validate features as you read about them. Documentation proves its own concepts by embedding executable tests.

**Third-Person Architecture**: Tests are viewed from the observer position - all test results are visible at a glance through pattern recognition rather than sequential execution.

**Graphnode Composition**: Tests are reusable components that can be consumed anywhere in the graph, with middleware formatting applied.

## Testing Patterns

### 1. Embedded Tests Pattern

Tests embedded directly in feature documentation, running automatically when you visit the section.

**Benefits**:

- Documentation always validated against current implementation
- Test failures immediately visible when reading about features
- Tests serve as executable examples
- Encourages keeping docs and tests in sync

**Pattern**: The pattern-embedded-tests node documents this approach.
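
As a hedged sketch (the node slug, section path, and API URL below are placeholders), an embedded test is just an executable fence placed in the feature's own section:

```python
# Hypothetical embedded test; marked executable (e.g. python[execute=true])
# so it runs whenever the documentation section is rendered.
import requests

api_base = "http://localhost:7778/api/oculus"  # assumed local API endpoint

try:
    r = requests.get(f"{api_base}/peek/node-slug",
                     params={"path": "section-name"}, timeout=2)
    ok = r.ok and r.json().get("value") is not None
    status, outcome = ("✅", "Pass") if ok else ("❌", "Fail")
except Exception as e:
    status, outcome = "❌", f"Error: {e}"

result = {"tests": [{"test": "Peek returns section content",
                     "status": status, "result": outcome}]}
```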

### 2. Standalone Graphnode Tests

Tests written as standalone graphnodes that can be consumed anywhere.

**Structure**:

````markdown
# test-suite-name

```yaml
metadata:
  type: graphnode
  no_cache: true  # Always run fresh
```

## config

```yaml
api_base: "http://localhost:7778/api/oculus"
timeout: 2
```

## fetch

```python
import requests

api_base = config.get('api_base')
test_results = []

# Run tests, collect results in structured format
# ...

result = {
    "tests": test_results,  # Array of test objects
    "summary": {
        "total": len(test_results),
        "passed": sum(1 for t in test_results if t.get("result") == "Pass"),
        "failed": sum(1 for t in test_results if t.get("result") != "Pass"),
    },
}
```


## template

```markdown
# Test Results

{% for test in tests %}
- {{ test.status }} **{{ test.test }}**: {{ test.result }}
{% endfor %}
```
````

**Consumption**:
````markdown
```graphnode:test-suite-name:table
TableConfig:
  array_path: tests
  columns:
    Status: status
    Test: test
    Result: result
  format: markdown
```
````

### 3. Test Data Format

Tests should output structured data for table rendering:

```python
result = {
    "tests": [
        {
            "test": "Test Name",
            "status": "✅" | "❌",
            "operation": "peek | poke | tree | ...",
            "path": "node:path",
            "result": "Pass | Fail | Error"
        },
        # ... more tests
    ],
    "summary": {
        "total": 10,
        "passed": 8,
        "failed": 2
    }
}
```

This format works well with :table middleware for automatic table formatting.

## Testing Features

### Testing Peek Operations

Validate that peek operations return correct data:

```python
import requests

api_base = "http://localhost:7778/api/oculus"

# Test peek for section content
r = requests.get(f"{api_base}/peek/node-slug", params={"path": "section-name"})
assert r.status_code == 200
assert r.json().get("value") is not None

# Test peek for fence data
r = requests.get(f"{api_base}/peek/node-slug", params={"path": "section.yaml"})
data = r.json().get("value")
assert isinstance(data, dict)
```

### Testing Poke Operations

Validate that poke operations write correctly:

```python
import requests

api_base = "http://localhost:7778/api/oculus"

# Test poke to YAML fence
payload = {
    "path": "config.yaml",
    "value": {"key": "value"},
    "operation": "set"
}
r = requests.post(f"{api_base}/poke/node-slug", json=payload)
assert r.status_code == 200

# Verify write succeeded
r = requests.get(f"{api_base}/peek/node-slug", params={"path": "config.yaml"})
assert r.json().get("value", {}).get("key") == "value"
```

### Testing Tree Operations

Validate hierarchical structure:

```python
import requests

api_base = "http://localhost:7778/api/oculus"

r = requests.get(f"{api_base}/tree/node-slug")
tree = r.json()

# Validate structure
assert "root" in tree
assert "children" in tree["root"]
assert tree["root"]["level"] == 0
```

### Testing Virtual Fences

Test virtual fence execution:

```python
import requests

api_base = "http://localhost:7778/api/oculus"

# Execute virtual fence action
payload = {
    "action": "add",
    "params": {"content": "Test item"},
    "context": "Testing virtual fence"
}
r = requests.post(f"{api_base}/virtual-fences/node-slug/fence-index/execute", json=payload)
assert r.status_code == 200
```

## Real-World Examples

### Example 1: Hierarchical Addressing Tests

See the hierarchical-addressing-test-suite node for a complete example.

Tests peek/poke operations with hierarchical paths:

- Peek H1 sections
- Peek H2 nested sections
- Tree structure validation
- Error handling for nonexistent paths

**Consumed in**: oculus-cli-unified-addressing-guide with :table middleware
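
A minimal sketch of checks like those listed above (the node slug and section paths here are illustrative, not taken from the actual suite):

```python
import requests

api_base = "http://localhost:7778/api/oculus"
node = "node-slug"  # illustrative target node

tests = []
# An H1 section, a nested H2 path, and a deliberately missing path.
for name, path, expect_ok in [
    ("Peek H1 section", "section-name", True),
    ("Peek nested H2 section", "section-name.subsection", True),
    ("Nonexistent path returns error", "no-such-section", False),
]:
    try:
        r = requests.get(f"{api_base}/peek/{node}", params={"path": path}, timeout=2)
        passed = r.ok if expect_ok else not r.ok
    except Exception:
        passed = False
    tests.append({"test": name, "path": f"{node}:{path}",
                  "status": "✅" if passed else "❌",
                  "result": "Pass" if passed else "Fail"})

result = {"tests": tests}
```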

### Example 2: Fortune Graphnode

See the fortune node for a graphnode structure example.

Demonstrates:

- Config section for parameters
- Fetch section with executable Python
- Template section for rendering
- Multiple data sources (HTTP, local files, API)
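
A rough, hypothetical sketch of that shape (not the actual fortune node; the config keys and URL are made up) for a fetch fence where `config` has been injected:

```python
import requests

# Parameters supplied by the node's ## config section via config injection.
source_url = config.get('source_url', 'http://localhost:7778/api/oculus')
timeout = config.get('timeout', 2)

try:
    r = requests.get(source_url, timeout=timeout)
    result = {"fortune": r.text.strip(), "source": source_url}
except Exception as e:
    result = {"fortune": None, "error": str(e)}
```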

## Testing Best Practices

### 1. Use no_cache: true for Test Nodes

Always set no_cache: true in test node metadata to ensure tests run fresh every time:

```yaml
metadata:
  type: graphnode
  no_cache: true
```

### 2. Structure Test Results

Use a consistent structured format for results:

- `tests`: Array of test objects
- `summary`: Aggregate statistics
- `status`: Visual indicators (✅❌)

This enables table middleware and makes results scannable.

### 3. Test at Multiple Levels

- **Unit**: Individual fence execution
- **Integration**: Peek/poke through API
- **System**: Full graphnode composition with middleware

### 4. Document Test Purpose

Each test should clearly specify (see the example after this list):

- `test`: Human-readable test name
- `operation`: What's being tested
- `path`: What data is being accessed
- `result`: Expected outcome
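
For instance, a single fully documented test object (field values are illustrative):

```python
# One documented test entry; each field answers a reader's question.
test_entry = {
    "test": "Peek returns H2 section",   # human-readable name
    "operation": "peek",                 # what is being tested
    "path": "node-slug:section-name",    # what data is accessed
    "status": "✅",                      # visual indicator
    "result": "Pass",                    # outcome
}
```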

### 5. Handle Errors Gracefully

Wrap test execution in try/except:

```python
try:
    r = requests.get(url, timeout=2)
    status = "Pass" if r.ok else "Fail"
except Exception as e:
    status = "Error"
    result = str(e)
```

### 6. Use Middleware for Formatting

Don't format test results in Python - use middleware:

```python
# In your test node, set result variable
result = {"tests": [...], "summary": {...}}
```

Consume with middleware:

````markdown
```graphnode:test-suite:table
TableConfig:
  array_path: tests
  format: markdown
```
````

### 7. Keep Tests Close to Features

Embed tests in feature documentation or link them nearby. Tests should validate the feature they describe.

## Middleware Options

### :table Middleware

Formats array data as markdown table:

````markdown
```graphnode:test-suite:table
TableConfig:
  array_path: tests
  columns:
    Column Header: field_name
    Another Header: other_field
  format: markdown
```
````

### :markdown Middleware

Renders using node's template section:

````markdown
```graphnode:test-suite:markdown
```
````

### Raw (no middleware)

Returns raw JSON data:

````markdown
```graphnode:test-suite
```
````

## Config Injection

Test graphnodes have access to config via the `config` variable:

```python
api_base = config.get('api_base', 'http://localhost:7778/api/oculus')
timeout = config.get('timeout', 2)
```

**How it works**: The fence executor injects config from the `## config` section using the INTERPOLATED cache level (variables + includes, no fence execution) to avoid recursion.
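
Conceptually (a simplified illustration only, not the actual fence_executor code; PyYAML is assumed here), the injection amounts to parsing the config section's YAML and exposing it as a dict in the fence's execution namespace:

```python
import yaml

def inject_config(config_section_text: str, exec_globals: dict) -> None:
    """Illustrative sketch: parse the already-interpolated ## config section
    (never fence-executed, which is what avoids recursion) and expose it as
    the `config` variable available to fetch fences."""
    exec_globals["config"] = yaml.safe_load(config_section_text) or {}
```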

## Testing Tools

### API Testing

Use the requests library for HTTP API testing:

```python
import requests

r = requests.get(url, params={}, timeout=2)
r = requests.post(url, json={}, timeout=2)
```

### Assertion Patterns

```python
# Status code
assert r.status_code == 200
assert r.ok

# Response data
data = r.json()
assert data.get("value") is not None
assert isinstance(data["value"], dict)

# Error cases
assert r.status_code in [404, 400, 500]
```

### Result Collection

```python
# test_cases: a list of (name, zero-argument callable) pairs - see the sketch below
tests = []
for test_name, test_fn in test_cases:
    try:
        test_fn()
        tests.append({"test": test_name, "status": "✅", "result": "Pass"})
    except AssertionError:
        tests.append({"test": test_name, "status": "❌", "result": "Fail"})
    except Exception as e:
        tests.append({"test": test_name, "status": "❌", "result": f"Error: {e}"})
```

## Debugging Tests

### Check Test Execution

View test node directly to see results:

```bash
oculus goto test-suite-name
oculus look
```

### Check Raw Fetch Results

Peek the fetch section to see raw Python execution:

```bash
oculus peek test-suite-name fetch
```

### Check Config Injection

Verify config is being injected:

```bash
oculus peek test-suite-name config
```

### Check Middleware Processing

Compare raw vs formatted output:

```bash
# Raw results
oculus peek test-suite-name

# Table formatted
# (view in consuming document)
```

## Common Issues

### Issue: Config Not Defined

**Cause**: Config injection failing

**Solution**: Check that the `## config` section exists with valid YAML

### Issue: Tests Not Running

**Cause**: Caching, or the fence not marked executable

**Solutions**:

- Add `no_cache: true` to metadata
- Ensure the fence has the `[execute=true]` attribute
- Restart oculus-api to clear caches

### Issue: Table Formatting Not Working

**Cause**: Middleware config or data structure mismatch

**Solutions** (a matching example follows this list):

- Verify TableConfig.array_path points to the correct field
- Ensure columns map to actual fields in the test objects
- Check that the result is a dict with the array at the specified path
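
For reference, the data structure and the consuming TableConfig have to line up like this (shown as a Python sketch with the TableConfig mapping in comments; field names are illustrative):

```python
# What the test node produces...
result = {
    "tests": [
        {"test": "Peek H1 section", "status": "✅", "result": "Pass"},
    ],
}

# ...must match the consuming TableConfig:
#   array_path: tests        -> the key holding the array ("tests" above)
#   columns:
#     Status: status         -> each value must be a field of the test objects
#     Test: test
#     Result: result
```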

### Issue: Infinite Recursion

**Cause**: Config injection triggering fence execution recursively

**Solution**: The fence executor should use the INTERPOLATED cache level (already implemented in fence_executor.py:260-302)

## Migration from Traditional Tests

### From Python Unit Tests

**Before**:

```python
# tests/test_feature.py
def test_peek_operation():
    result = peek("node", "path")
    assert result is not None
```

**After**:

````markdown
# feature-tests graphnode

```python[execute=true]
import requests

test_results = []
r = requests.get(...)
test_results.append({
    "test": "Peek Operation",
    "status": "✅" if r.ok else "❌"
})
result = {"tests": test_results}
```
````

Then consume it in the feature documentation: `graphnode:feature-tests:table`


### From Manual Testing

**Before**: Manually run commands and check output

**After**: Create graphnode with automated tests that run when you read the docs

## Future Enhancements

- **Test discovery**: Automatic indexing of all test nodes
- **Test runner**: Execute all tests with single command
- **CI integration**: Run tests in CI pipeline
- **Coverage tracking**: Track which features have embedded tests
- **Performance benchmarks**: Track test execution time

## Summary

**Key Principles**:
1. Tests as living documentation
2. Graphnode composition for reusability
3. Structured data format for table rendering
4. Middleware for formatting separation
5. Config injection for parameterization

**Workflow**:
1. Create standalone test graphnode
2. Write tests in `## fetch` section
3. Output structured result data
4. Consume with `:table` or `:markdown` middleware
5. Embed in feature documentation

**Benefits**:
- Documentation always validated
- Tests visible when reading features
- Reusable across graph
- Pattern-based learning
- Third-person observation

## North

```yaml
slots:
- oculus
- oculus
```

## South

```yaml
slots: []
```

## East

```yaml
slots: []
```

## West

```yaml
slots: []
```