villa-villekulla-manual
Villa Villekulla User Manual
A pipeline composition environment for people who want to understand what they're building.
What This Is
Villa Villekulla is a visual environment for composing and running pipelines. A pipeline is a sequence of operations that transform data. You define the operations, connect them, run them, and inspect what happened.
This manual assumes you are an intelligent person who wants to understand how things work. It will show you what to do, explain why it works that way, and give you the tools to figure out the rest yourself.
Core Concepts
Before you touch anything, understand how Villa Villekulla thinks about information.
Three Levels of Meaning
Everything flowing through a pipeline exists at one of three levels:
| Level | What It Is | Example |
|---|---|---|
| Data | Structured values, resolved and ready | { customer: "Acme", total: 150 } |
| Presentation | Formatted text for humans | "Order confirmed for Acme: $150" |
| Code | Unresolved—a promise to compute later | ${customer.name} or a fence reference |
Data is what you work with most. It's the structured information flowing step to step—numbers, strings, lists, records. When a fence "produces data," it outputs structured values the next fence can read.
Presentation is the human-readable end product. Reports, notifications, formatted messages. When a pipeline ends with presentation, it's meant for human eyes, not further processing.
Code is data that hasn't been resolved yet. A template like ${order.total} is code—it becomes data when the pipeline evaluates it. Fences themselves are code: instructions waiting to execute. Data and code are mirror images. One is resolved; the other is a future.
Most of the time, you're working with data flowing through fences, ending in presentation. But the system treats code as just another thing that can flow—useful for advanced patterns where you want to defer execution.
Building Blocks
Villa Villekulla has different types of building blocks:
Blocks (what we call "fences") are units of work. Each block:
- Has a specific job (validate, transform, send, check)
- Declares what it needs (parameters, input type)
- Produces output for the next block
Think of blocks as LEGO bricks. You didn't manufacture them. You're composing them.
Connectors (arrows) are special blocks that route data. They don't transform anything—they just direct flow. Need to turn a corner? Place a connector. Need to split? Use a gate.
Gates are blocks that make decisions. Data comes in one side, goes out perpendicular—up for yes, down for no.
Landing Pads are blocks that catch errors. They're not in the normal flow—they activate when something goes wrong.
All of these are technically the same thing: blocks with different behaviors. But thinking of them as distinct types helps you build.
Type Compatibility
Blocks declare what they consume and produce:
```yaml
# A transform block
consumes: data          # Needs structured input
produces: data          # Outputs structured data

# A formatter block
consumes: data          # Needs structured input
produces: presentation  # Outputs human-readable text

# A template block
consumes: data          # Needs structured input
produces: code          # Outputs unresolved expressions
```

When you connect blocks, their types must be compatible. A block that needs data can't follow one that produces presentation. The canvas enforces this: you can't place incompatible blocks.
This is why "if you can build it, it will run." The type system catches mismatches at design time.
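As a sketch of the idea (hypothetical names; the real canvas logic isn't shown in this manual), a design-time check reduces to an equality test between declared types:

```python
from dataclasses import dataclass

# Hypothetical model of a block's type declaration, mirroring the
# consumes/produces fields shown above.
@dataclass
class BlockType:
    name: str
    consumes: str  # "data", "presentation", or "code"
    produces: str

def can_connect(upstream: BlockType, downstream: BlockType) -> bool:
    """A drop is valid only if the upstream output feeds the downstream input."""
    return upstream.produces == downstream.consumes

transform = BlockType("transform", consumes="data", produces="data")
formatter = BlockType("formatter", consumes="data", produces="presentation")

assert can_connect(transform, formatter)      # data -> data: OK
assert not can_connect(formatter, transform)  # presentation -> data: blocked
```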
The Interface
Villa Villekulla has four areas:
```
┌─────────────────────────────────────────────────────┐
│ HEADER: Pipeline selector, status, controls         │
├─────────────────────────────┬───────────────────────┤
│                             │                       │
│ CANVAS                      │ PALETTE               │
│ Where you build             │ What you build with   │
│                             │                       │
├─────────────────────────────┴───────────────────────┤
│ TRACE PANEL: What happened when you ran it          │
└─────────────────────────────────────────────────────┘
```

Canvas
The canvas is a 16×16 grid. Each cell can hold one fence or an arrow connector.
Placing Fences
- Click a fence in the palette, then click a cell to place it
- Fences display a 2-letter identifier (e.g., `LK` for lookup, `FT` for filter)
- You can assign custom emojis to fences in the pipeline configuration
Connections Are Fences Too

Connections between fences aren't drawn lines; they're arrow fences. Drop an arrow between two fences to connect them. Arrows are identity transforms: they pass context through unchanged.
```
┌────┬────┬────┬────┬────┬────┬────┐
│ 📥 │ →  │ 🔍 │ →  │ ✓  │ →  │ 📤 │
│src │    │look│    │gate│    │sink│
└────┴────┴────┴────┴────┴────┴────┘
```

Flow Direction Rules
- Regular fences: Input one side → output opposite side (180°)
- Arrow fences: Only way to change direction. Need to turn? Place an arrow.
- Gates/Splitters: Input one side → output perpendicular. Up = true, down = false.
Type-Safe Placement

Fences declare what they consume and produce. You can't place a fence where it doesn't fit:
- If upstream produces `dict` and your fence needs `tokens`, the drop is blocked
- Valid drop zones highlight green, invalid zones grey out
- If you can build it, it will run
Property Editor

Click any fence to see its properties panel:
- Shows all declared parameters with appropriate editors (text, number, dropdown, toggle)
- Static values show as normal inputs
- Dynamic values (from context) show the `${path.to.value}` expression with a 🔗 indicator
- YAML editor with syntax highlighting for complex configuration
Palette
The palette shows available blocks, organized by category:
- Sources: Load initial data
- Transforms: Modify data
- Gates: Conditional branching
- Connectors: Direction changes
- Sinks: Output or store results
- Landing Pads: Error handlers
Browsing Blocks
Click a block in the palette to see its details:
Description: What this block does, in plain language.
Parameters: What it accepts, with types and defaults.
```
┌─────────────────────────────────────────┐
│ 🔍 lookup-api                           │
├─────────────────────────────────────────┤
│ Fetches data from an external API and   │
│ merges the response into context.       │
│                                         │
│ PARAMETERS                              │
│ ─────────────────────────────────────── │
│ url        string    required           │
│ timeout    integer   default: 30        │
│ headers    object    optional           │
│ retry      boolean   default: true      │
│                                         │
│ CONSUMES: data                          │
│ PRODUCES: data                          │
├─────────────────────────────────────────┤
│ [View Full Docs]      [Select to Place] │
└─────────────────────────────────────────┘
```

This is the same information as `pippi describe <fence>`.
Placing Blocks
Click Select to Place, then click a cell on the canvas. The block appears with default settings.
Customizing After Placement
Once placed, click the block to open the property editor. There you can:
- Set parameter values: Configure the block's behavior
- Override the label: Change the display name (e.g., "Validate" instead of "validate-order")
- Override the emoji: Pick a custom icon for this instance
- Bind to context: Wire parameters to `${context.path}` values
Customization is per-instance. The same block type placed twice can have different labels, emojis, and parameter values.
Trace Panel
After you run a pipeline, the trace panel shows what happened. It displays as a vertical timeline with accordion steps—click any step to expand details.
Context Viewer
For large or complex contexts, click the 📋 icon on any step to open the full context viewer:
```
▶ Step 0: load-players      2ms   STORED   [📋]
▶ Step 1: pick-random       1ms   HIT      [📋]
▶ Step 2: enrich-customer  45ms   MISS     [📋]
```

The context viewer provides:
- Tree view: Expandable nodes for nested objects and arrays
- Search: Find keys or values in large contexts
- Path display: Shows `order.items[3].sku` as you navigate
- Copy path: Click any node to copy its path for use in `${context.path}` bindings
Structured Diff Viewer
When comparing contexts (step-to-step or run-to-run), the diff viewer shows changes visually:
```
┌─────────────────────────┬─────────────────────────┐
│ Step 1                  │ Step 2                  │
├─────────────────────────┼─────────────────────────┤
│ order:                  │ order:                  │
│   id: "ORD-123"         │   id: "ORD-123"         │
│   total: null           │   total: 150.00    ←    │
│                         │   pricing:         +    │
│                         │     subtotal: 138.89    │
│                         │     tax: 11.11          │
└─────────────────────────┴─────────────────────────┘
```

- Green (+): Added keys
- Red (-): Removed keys
- Yellow (←): Changed values
- Collapsed: Unchanged sections collapse to reduce noise
Toggle between side-by-side and unified diff views. The system already tracks hashes and deltas—this just visualizes what's captured.
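To make the delta concrete, here's a minimal sketch of a shallow context diff; the real viewer also handles nested objects and the tracked hashes, which this omits:

```python
def diff_contexts(before: dict, after: dict) -> dict:
    """Shallow diff between two context snapshots: added, removed, changed keys."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {
        k: (before[k], after[k])
        for k in before.keys() & after.keys()
        if before[k] != after[k]
    }
    return {"added": added, "removed": removed, "changed": changed}

step1 = {"order": {"id": "ORD-123", "total": None}}
step2 = {"order": {"id": "ORD-123", "total": 150.00}}
# 'changed' reports the order key, since total went from None to 150.0
print(diff_contexts(step1, step2))
```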
```
▶ Step 0: load-players       2ms   STORED
▶ Step 1: pick-random        1ms   HIT
▼ Step 2: format-message     3ms   MISS
│  Context at this step:
│    players: ["Alice", "Bob", "Carol"]
│    selected: "Bob"
│    message: "Hello, Bob!"
│
│  Diff from previous:
│    + message: "Hello, Bob!"
▶ Step 3: send-notification 45ms   STORED
```

Cache Status
- STORED: First run, result was cached
- HIT: Result came from cache (fast)
- MISS: Ran fresh, wasn't cacheable
Step Details

Click any step to see:
- Context snapshot at that point
- Diff from previous step
- Diff to next step
- Cache key and TTL
Run History Panel

Below the trace, the run history shows all previous runs for this pipeline:
- Click any run to load its trace
- See parameters, status, duration for each
- Compare runs side-by-side
Cross-Run Comparison

Select two runs to compare them:
- Same step across different runs
- Point-to-point: step 2 in run A vs step 5 in run B
- Coordinates: `(pipeline, run, step)` are all addressable
Creating Your First Pipeline
Let's build something. We'll create a pipeline that picks a random name and says hello.
Step 1: Start a New Pipeline
Click New Pipeline in the header. You'll see an empty grid with a source cell on the left and a sink cell on the right.
Step 2: Configure the Source
The source is where your data comes from. Click the source cell, then click Configure in the palette.
Set the initial context:
```yaml
names:
  - Alice
  - Bob
  - Carol
  - Dave
greeting: "Hello"
```

This is the data your pipeline will work with.
Step 3: Add a Transform
From the palette, select pick-random from the Transforms category. Click an empty cell to place it.
Now connect the source to this fence: click the source cell, drag to the pick-random cell. An arrow appears showing the flow.
Configure pick-random:
```yaml
input_path: names
output_path: selected_name
```

This tells the fence: read from `names`, write the result to `selected_name`.
Step 4: Add Another Transform
Place format-text and connect it after pick-random.
Configure it:
```yaml
template: "${greeting}, ${selected_name}!"
output_path: message
```

The `${}` syntax pulls values from context. After this runs, context will have a `message` field.
Step 5: Connect to the Sink
Draw a connection from format-text to the sink cell. The sink collects your final output.
Configure the sink to output the message field.
Step 6: Run It
Click Run in the header.
The trace panel lights up:
```
Step 0: source        1ms   context loaded
Step 1: pick-random   2ms   selected_name: "Carol"
Step 2: format-text   1ms   message: "Hello, Carol!"
Step 3: sink          0ms   output captured
```

Your pipeline works. Run it again; you might get a different name.
Step 7: Save It
Click Save. Give it a name: hello-random.
Your pipeline is now saved and can be run from the CLI:
```
pippi run hello-random
```

Inspecting Runs
You built it, it runs. Now something goes wrong. Here's how you figure out what.
Loading a Past Run
Every run creates a session. View recent sessions:
```
pippi runs hello-random

#  Fence          Created  Duration  Session ID
0  hello-random   14:23    12ms      session-abc123
1  hello-random   14:21    8ms       session-def456
2  hello-random   14:15    245ms     session-ghi789
```

Load one in the IDE: File → Load Session → paste session ID.
Or from CLI:
```
pippi show session-abc123
```

Stepping Through
With a session loaded, the trace panel shows every step. Click any step to see:
- Context before: What data existed when this step started
- Context after: What data existed when this step finished
- Delta: What changed (added, modified, removed)
Use the playback controls to step forward and back:
```
[⏮️] [◀️]  Step 2/4  [▶️] [⏭️]
```

Comparing Two Runs
Something worked yesterday, broken today. Compare them:
- Load the working session
- Click Compare in the header
- Load the broken session
Side-by-side view shows both runs. Steps are aligned. Differences are highlighted:
```
WORKING                      BROKEN
─────────────────────────────────────────
Step 1: pick-random          Step 1: pick-random
  selected: "Carol"            selected: null   ← DIFF
```

Now you know where it diverged.
Point-to-Point Diff
Within a single run, you can diff any two points:
- Click step 1 (marks it as "A")
- Click step 4 (marks it as "B")
- Diff panel shows accumulated changes from A to B
Useful for long pipelines where you want to see "what changed between here and there" without clicking through every step.
Error Handling
Pipelines fail. Here's how that works and how you handle it.
When a Fence Fails
If a fence throws an error, the pipeline stops at that step. The trace panel shows:
- Which fence failed
- The error type and message
- Context at the moment of failure
Retry Patterns
Some fences support retry configuration:
```yaml
fence: flaky-api
retry:
  attempts: 3
  backoff: exponential
  initial_delay: 1s
```

Retries happen automatically. The trace shows each attempt.
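Exponential backoff grows the wait between attempts. A minimal sketch of the schedule implied by the config above (assuming a doubling factor, which the manual doesn't specify):

```python
import time

def run_with_retry(fence, context, attempts=3, initial_delay=1.0):
    """Retry a fence with exponential backoff: 1s, 2s, 4s, ... between attempts."""
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        try:
            return fence(context)
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: let the error reach a landing pad
            time.sleep(delay)
            delay *= 2  # exponential: each wait is twice the last
```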
Landing Pads
Landing pads are special fences that catch errors. They're not in the normal flow—they activate when errors match their criteria.
```
┌────┬────┬────┬────┬────┐
│ 📥 │ →  │ 🔍 │ →  │ 📤 │  ← normal flow
├────┼────┼────┼────┼────┤
│    │    │    │    │    │
├────┼────┼────┼────┼────┤
│ 🪂 │ →  │ 📝 │ →  │ ⚠️ │  ← landing pad flow
│pad │    │log │    │err │
└────┴────┴────┴────┴────┘
```

When 🔍 fails, flow teleports to the 🪂 landing pad.
Matching Precedence (most specific wins):
- type + fence + params
- type + fence
- type only
- catch-all (no criteria)
```yaml
landing-pads:
  - name: payment-errors
    catches:
      type: PaymentError
    flow: [log-error, notify-finance]
  - name: catch-all
    flow: [log-error, send-alert]
```
flow: [log-error, send-alert]Compensation
For operations that need cleanup on failure, compensation runs BEFORE the landing pad:
- Failure occurs
- Compensation stack unwinds (reverse order)
- Then jump to landing pad
```yaml
chain:
  - fence: reserve-inventory
    compensate: release-inventory
  - fence: charge-payment
    compensate: refund-payment
  - fence: send-confirmation
```

If charge-payment fails:
- reserve-inventory's compensate runs (release the reservation)
- Then flow jumps to the matching landing pad
Compensation is the undo. Landing pad is where you end up after cleanup.
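A minimal sketch of that unwind order, assuming compensations are pushed onto a stack as each fence succeeds:

```python
def run_chain(steps, context):
    """Run (fence, compensate) pairs in order; on failure, unwind in reverse (LIFO)."""
    compensation_stack = []
    try:
        for fence, compensate in steps:
            context = fence(context)
            if compensate is not None:
                compensation_stack.append(compensate)  # push only after success
    except Exception:
        # Unwind: the most recently succeeded fence is compensated first
        while compensation_stack:
            compensation_stack.pop()(context)
        raise  # then control passes to the matching landing pad
    return context
```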
Standard Error Format

All fence errors follow the same structure:

```yaml
error:
  type: "ValidationError"
  message: "Invalid input format"
  fence: "lookup-api"
  params:
    url: "https://api.example.com"
  input: {...}
```

This makes errors parseable and actionable.
Approval Gates
Sometimes a pipeline should pause and wait for human approval.
Adding a Gate
Place an approval-gate fence in your pipeline. Configure it with three parts:
```yaml
fence: approval-gate
prompt: "Large order requires approval"
body: |
  Customer ${customer.name} placed an order
  for ${orders.count} items totaling ${orders.total}.
  Please review and approve.
show_context: true
approvers:
  - finance-team
  - managers
timeout: 24h
on_timeout: reject
```

Message Structure
- prompt: Short subject line (like email subject, notification title)
- body: Rich content with `${context.vars}` templating
- show_context: Append a table of all context values
The same approval renders appropriately for each channel—Teams card, Slack message, email, or in-app modal.
When the pipeline reaches this fence, it pauses. The context at that moment is preserved. The pipeline won't continue until someone approves.
Approving
Pending approvals appear in:
- IDE: Approvals panel shows waiting gates
- CLI: `pippi approvals` lists pending gates
```
pippi approvals

PENDING APPROVALS
─────────────────────────────────────────────────
Pipeline      Session         Gate             Waiting
hello-report  session-xyz123  approval-gate-1  2h 15m
expense-flow  session-abc456  manager-review   45m
```

To approve:
```
pippi approve session-xyz123
```

Or in the IDE, click the approval, review the context, click Approve.
The pipeline resumes from where it paused.
Rejecting
```
pippi reject session-xyz123 --reason "Numbers don't match"
```

The pipeline fails at the gate. The reason is recorded in the session.
Troubleshooting
"My pipeline runs but produces wrong output"
- Load the session: `pippi show <session-id>`
- Step through: find the first step where output doesn't match expectation
- Check the delta: what did that fence actually do to context?
- Check the fence config: is it reading from / writing to the right paths?
Most bugs are path typos: user_name vs username, data.items vs items.
"My pipeline is slow"
- Run with trace: `pippi run <pipeline> -v`
- Look at step timings:

```
Step 0: source          2ms
Step 1: transform       1ms
Step 2: call-external 892ms   ← this one
Step 3: format          1ms
```

- The slow step is obvious. Is it:
  - Cacheable? Add cache hints
  - Parallelizable? Can it run alongside other steps?
  - Necessary? Maybe you don't need it
"My pipeline worked yesterday, fails today"
- Find yesterday's working session: `pippi runs <pipeline> --limit 50`
- Load both sessions in compare mode
- Look for:
  - Different input data (source changed?)
  - External dependency failure (API down?)
  - Configuration change (someone modified the fence?)
"I don't understand what a fence does"
```
pippi describe <fence-label>
```

Shows:
- Description: What the fence does
- Parameters: What it accepts (name, type, required, default)
- Examples: How to use it
- Notes: Any gotchas or tips
The fence's markdown node IS its documentation. pippi describe just extracts and formats the human-readable parts.
In the UI: Click a fence in the palette → its description appears in the detail panel. Click Full Docs to see the complete documentation.
View the actual code:
```
pippi show <fence-label>
```

"The cache is giving me stale data"
Cache keys include: fence ID, input parameters, and source data hash.
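A sketch of how those three ingredients could combine into a stable key (the exact recipe isn't documented; SHA-256 and JSON canonicalization are assumptions):

```python
import hashlib
import json

def cache_key(fence_id: str, params: dict, source_data: dict) -> str:
    """Combine fence ID, parameters, and input data into one stable hash.

    Assumes params and source_data are JSON-serializable.
    """
    payload = json.dumps(
        {"fence": fence_id, "params": params, "source": source_data},
        sort_keys=True,  # stable key ordering so equal inputs hash equally
    )
    return hashlib.sha256(payload.encode()).hexdigest()
```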
From the UI:
- Open the run in the trace panel
- Click the cached step
- Click Clear Cache for that fence
Clear all caches for a pipeline: Click the pipeline menu → Cache Management → Clear All
From CLI:
```
pippi cache clear <fence-label>
pippi cache clear --pipeline <pipeline>
pippi cache clear --all              # Nuclear option
```

Inspect what's cached:

```
pippi cache inspect <fence-label>    # Shows key, TTL, size
```

The UI shows cache status for every fence in a run. You can invalidate specific fences, the whole pipeline, or everything.
"I need to re-run from a specific step"
Load the session, click the step, select Re-run from here.
What happens:
- A new session is created (never reuses the old one)
- Context is copied from the step before your selected step
- You can modify context values before running
- Execution starts from your selected step
Use cases:
- Debug a failure: "What if I fix this value and retry?"
- What-if analysis: "What happens with different input?"
- Skip expensive steps: Don't re-run the slow parts
From CLI:
```
pippi replay <session-id> --from-step 3
pippi replay <session-id> --from-step 3 --override '{"count": 50}'
```

The replay creates a new session with the original context plus your overrides, starting at the specified step.
"Something is wrong but I don't know what"
Start broad, narrow down:
- `pippi runs` - did it run at all?
- `pippi show <session>` - did it complete? error? where?
- `pippi edges <session>` - what changed at each step?
- Load in IDE, step through visually
The answer is in the session. Sessions record everything.
Session Browser
A separate administrative interface for viewing all sessions across the system.
Accessing the Browser
- Web: Navigate to `/admin/sessions`
- CLI: `pippi sessions --all`
Session List
The browser shows all sessions with:
- Session ID
- Pipeline name
- Status (running, pending, done, failed)
- Started time
- Current step (if running)
Filter by status, time range, or pipeline name.
Session Detail
Click any session to see:
- Full context at current step
- Trace of steps completed
- Duration and timing
- Error details (if failed)
- Approval status (if pending)
Actions
From the session browser you can:
- View Trace: See step-by-step execution
- Open Pipeline: Jump to the pipeline definition
- Cancel: Stop a running session
- Force Continue: Push past a stuck approval (admin)
CLI Equivalents
```
# List all sessions
pippi sessions --all

# Filter by status
pippi sessions --status running
pippi sessions --status pending

# Filter by pipeline
pippi sessions --pipeline order-processor

# Show session detail
pippi sessions show <session-id>

# Cancel a session
pippi sessions cancel <session-id>
```

This is a debugging tool: use it to see what's happening across the system without loading individual pipelines.
Keyboard Shortcuts

| Key | Action |
|---|---|
| `R` | Run pipeline |
| `S` | Save pipeline |
| `Space` | Play/pause trace playback |
| `←` `→` | Step backward/forward in trace |
| `Delete` | Remove selected fence |
| `Escape` | Deselect / cancel operation |
| `Cmd+Z` | Undo |
| `Cmd+Shift+Z` | Redo |
| `/` | Open command palette |
| `?` | Show this help |
What's Next
You now know enough to:
- Build pipelines that transform data
- Handle errors with landing pads and compensation
- Add human checkpoints with approval gates
- Debug failures with trace inspection and replay
- Manage caching and re-run from any step
The sections below describe the full Villa Villekulla platform. Some capabilities are available now; others are marked 🚢 Taka-Tuka Land: we know exactly where they are, and the ship is being provisioned.
The Fence Marketplace
🚢 Taka-Tuka Land
The marketplace is where you discover, publish, and manage fences.
Browsing Fences
Open Marketplace from the header. You'll see:
- Categories: Sources, Transforms, Gates, Sinks, Connectors
- Popular: Most-used fences across the platform
- Recent: Newly published or updated
- Your Organization: Fences your team has published
Each fence card shows:
- Name and icon
- Publisher and version
- Usage count and rating
- One-line description
Click a fence to see full documentation, parameters, examples, and reviews.
Installing Fences
Fences from the marketplace need to be installed before use:
```
pippi install slack-connector@2
pippi install --org salesforce-connector@latest
```

Installed fences appear in your palette. Version pinning ensures your pipelines don't break when connectors update.
Publishing Fences
To share a fence with others:
- Create a fence node following the standard format
- Add required metadata: description, parameters, examples
- Run `pippi publish <fence-label>`
- Choose visibility: private, organization, or public
Published fences are versioned automatically. Subscribers are notified of updates.
Versioning
Fences follow semantic versioning:
- Major: Breaking changes to parameters or output
- Minor: New features, backward compatible
- Patch: Bug fixes
Pipelines pin to major versions by default. You control when to upgrade.
Event Integration
🚢 Taka-Tuka Land
Pipelines can be triggered by events, not just manual runs.
Event Sources
Subscribe to events from external systems:
```yaml
triggers:
  - type: webhook
    path: /orders/new

  - type: schedule
    cron: "0 9 * * MON"

  - type: event
    source: github
    event: issue.labeled
    filter:
      label: "needs-review"
```

When an event matches, a new pipeline run starts with the event payload as initial context.
Event Sinks
Emit events to other systems:
```yaml
fence: emit-event
target: slack
channel: "#alerts"
event_type: order.completed
payload:
  order_id: ${order.id}
  customer: ${customer.name}
```

Event sinks are fences like any other: place them in your flow where you want events emitted.
Event Routing
For complex event-driven architectures:
```yaml
router:
  - match:
      type: order.*
      value_gt: 10000
    pipeline: high-value-order-flow

  - match:
      type: order.*
    pipeline: standard-order-flow

  - match:
      type: support.*
    pipeline: support-ticket-flow
```

Events are routed to pipelines based on matching rules. Unmatched events go to a dead-letter queue for inspection.
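A sketch of first-match-wins routing; `fnmatch` stands in for whatever pattern matcher the platform actually uses, and the rule shape mirrors the YAML above:

```python
from fnmatch import fnmatch

# Illustrative rules, mirroring the router config above
ROUTES = [
    {"type": "order.*", "value_gt": 10000, "pipeline": "high-value-order-flow"},
    {"type": "order.*", "pipeline": "standard-order-flow"},
    {"type": "support.*", "pipeline": "support-ticket-flow"},
]

def route(event: dict) -> str:
    """First matching rule wins; unmatched events go to the dead-letter queue."""
    for rule in ROUTES:
        if not fnmatch(event["type"], rule["type"]):
            continue
        if "value_gt" in rule and event.get("value", 0) <= rule["value_gt"]:
            continue
        return rule["pipeline"]
    return "dead-letter-queue"

assert route({"type": "order.created", "value": 50000}) == "high-value-order-flow"
assert route({"type": "billing.sync"}) == "dead-letter-queue"
```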
Environments & Promotion
🚢 Taka-Tuka Land
Pipelines move through environments before reaching production.
Environment Scoping
Every pipeline exists in an environment:
- Development: Your personal sandbox
- Staging: Shared testing environment
- Production: Live, affects real data
The environment selector in the header shows which you're viewing. Color-coded: green (dev), yellow (staging), red (production).
Promotion Flow
To move a pipeline from dev to production:
- Develop in your dev environment
- Test by running with sample data
- Request promotion to staging
- Review by a team member (required for staging → prod)
- Approve and the pipeline is copied to the target environment
```
pippi promote order-processor --to staging
pippi promote order-processor --to production --reviewer @manager
```

Environment Variables
Pipelines use environment-specific configuration:
```yaml
# In dev
api_url: https://sandbox.api.example.com
notify_channel: "#dev-alerts"

# In production
api_url: https://api.example.com
notify_channel: "#alerts"
```

The same pipeline definition, different values per environment.
Rollback
If a production pipeline breaks:
```
pippi rollback order-processor --to-version 3
```

Or from the UI: Pipeline menu → History → click any version → Restore.
Governance & Audit
🚢 Taka-Tuka Land
Control who can do what, and track everything.
Roles & Permissions
| Role | Can Do |
|---|---|
| Viewer | See pipelines, view runs |
| Builder | Create/edit pipelines in dev |
| Deployer | Promote to staging/production |
| Admin | Manage users, policies, all environments |
Assign roles per user or per team. Pipelines can require specific roles for approval gates.
Data Policies
Tag data with classifications:
```yaml
context:
  customer_email:
    value: "user@example"
    classification: PII
  order_total:
    value: 1500
    classification: financial
```

Policies control what can happen to classified data:
- PII cannot be logged in plain text
- Financial data requires approval to export
- Certain fences are blocked from accessing classified fields
Audit Trail
Every action is logged:
```
2026-01-26 10:30:15 | alice | pipeline.created  | order-processor
2026-01-26 10:45:22 | alice | pipeline.run      | order-processor | session-abc
2026-01-26 11:00:00 | bob   | pipeline.promoted | order-processor | dev→staging
2026-01-26 11:15:30 | carol | approval.granted  | session-xyz | gate-1
```

Query the audit log:
```
pippi audit --pipeline order-processor --since 7d
pippi audit --user alice --action "*.promoted"
```

Templates & Wizards
🚢 Taka-Tuka Land
Start from patterns instead of blank canvases.
Template Gallery
Open New Pipeline → From Template to browse:
- When GitHub issue labeled, notify Slack
- Daily report from database to email
- Order received → validate → fulfill → confirm
- Support ticket → route → assign → notify
Each template includes:
- Pre-built pipeline structure
- Placeholder configuration (fill in your values)
- Documentation explaining the pattern
Wizards
For common integrations, wizards walk you through setup:
- Connect to Slack: Authorize, pick channels
- Connect to GitHub: Select repos, choose events
- Connect to Database: Enter credentials, test connection
The wizard creates the necessary source/sink fences configured for your accounts.
Customizing Templates
Templates are starting points. After creating from a template:
- Add or remove fences
- Adjust parameters
- Add error handling
- Rename and save as your own
Your customized pipeline is fully independent of the template.
Plain Language Views
🚢 Taka-Tuka Land
See your pipeline as a story, not just a grid.
Pipeline Summary
Every pipeline has an auto-generated summary:
Order Processor
When triggered, this pipeline:
- Loads the order from the source
- Validates required fields (customer, items, date)
- Checks inventory at the main warehouse and reserves items
- Calculates pricing with 8% tax and quantity discounts
- Sends confirmation email to the customer
If inventory check fails, it logs the error and notifies sales.
Large orders (>$5000) require manager approval before step 3.
Click View Summary in any pipeline to see this narrative.
Run Explanation
After a run completes, click Explain to see what happened in plain language:
Run #47 completed successfully
Started with order ORD-12345 from Acme Corp.
Validation passed. Inventory reserved (RES-98765). Applied 5% discount for quantity ≥10. Final total: $487.50. Confirmation sent to orders@acme
For failed runs:
Run #48 failed at inventory check
Order ORD-99999 requested 1,000,000 units of WIDGET-A.
Warehouse reported: insufficient stock (available: 5,432). No compensation needed (nothing reserved yet). Error logged. Sales team notified via #sales-alerts.
Diff Explanation
When comparing runs, get a narrative diff:
Difference between Run #47 and Run #48
Both runs validated successfully.
Run #47 had 10 items; Run #48 requested 1,000,000. Run #47 passed inventory; Run #48 failed (stock insufficient). Run #47 continued to completion; Run #48 triggered error flow.
Simple Mode
🚢 Taka-Tuka Land
A streamlined interface for straightforward automations.
Activating Simple Mode
Toggle Simple Mode in the header. The interface changes:
- Linear flow only: No grid, just a sequence of steps
- Hidden complexity: No landing pads, no compensation visible
- Guided configuration: Wizards instead of YAML editors
- Pre-built error handling: Sensible defaults, hidden from view
Building in Simple Mode
- Choose a trigger: Manual, schedule, or webhook
- Add steps: Pick from a curated list of common fences
- Configure each step: Fill in forms, not YAML
- Test it: Run with sample data
- Activate: Turn it on
Simple Mode pipelines are real pipelines—you can switch to Advanced Mode anytime to see (and edit) the full structure.
When to Use Simple Mode
- Quick "if this, then that" automations
- Users who don't need error handling complexity
- Prototyping before adding robustness
Graduating to Advanced Mode
When your automation needs landing pads, compensation, or complex branching:
- Open the pipeline
- Click Switch to Advanced Mode
- The full grid appears with your simple flow in place
- Add error handling, gates, and branches as needed
You can always switch back to Simple Mode view, but advanced features will be hidden (not deleted).
Scale & Reliability
🚢 Taka-Tuka Land
Controls for running pipelines at scale.
Concurrency Controls
Set limits per pipeline:
```yaml
limits:
  max_concurrent_runs: 10
  max_queue_depth: 100
  queue_timeout: 5m
```

- max_concurrent_runs: How many instances can run simultaneously
- max_queue_depth: How many can wait in queue
- queue_timeout: How long a queued run waits before being dropped
Rate Limiting
For pipelines that call external APIs:
```yaml
rate_limits:
  - fence: external-api
    requests_per_minute: 60
    burst: 10
```

The system automatically throttles to stay within limits. Excess requests queue and retry.
Backpressure
When downstream systems slow down:
```yaml
backpressure:
  strategy: slow_down    # or: drop, queue
  threshold: 80%         # when queue is 80% full
  reduce_rate_by: 50%
```

The pipeline automatically adjusts throughput based on downstream capacity.
Health Dashboard
The Operations view shows:
- Active runs across all pipelines
- Queue depths and wait times
- Rate limit utilization
- Error rates by pipeline and fence
Set alerts for thresholds:
```
pippi alert create --pipeline order-processor --metric error_rate --threshold 5%
```

Cost & Performance
🚢 Taka-Tuka Land
Understand where time and resources go.
Pipeline Analytics
Each pipeline shows aggregated metrics:
| Metric | Last 7 Days |
|---|---|
| Total runs | 1,247 |
| Success rate | 98.2% |
| Avg duration | 2.3s |
| P95 duration | 8.1s |
| Total compute | 4.2 hours |
Fence Hot Spots
See which fences consume the most time:
```
┌─────────────────────┬──────────┬─────────┐
│ Fence               │ Avg Time │ % Total │
├─────────────────────┼──────────┼─────────┤
│ external-api-call   │ 1.8s     │ 72%     │
│ database-query      │ 0.4s     │ 16%     │
│ format-response     │ 0.1s     │ 4%      │
│ validate-input      │ 0.05s    │ 2%      │
└─────────────────────┴──────────┴─────────┘
```

Click any fence to see its performance over time, cache hit rates, and optimization suggestions.
Cost Attribution
If your platform has compute costs:
```
Pipeline: order-processor
Monthly cost: $47.20
└─ external-api-call: $34.00 (72%)
└─ database-query:     $8.50 (18%)
└─ other:              $4.70 (10%)
```

Set budgets and alerts:
```
pippi budget set order-processor --monthly 100
```

Team Workflows
🚢 Taka-Tuka Land
Collaborate on pipeline development.
Change Review
When you modify a pipeline:
- Make changes in your dev environment
- Click Request Review
- Reviewers see a diff of what changed
- Comments and suggestions inline
- Approve or request changes
- Merge to promote
Definition Diff
Compare any two versions of a pipeline definition:
```diff
  chain:
    - fence: validate-order
+   - fence: fraud-check       # Added
+     threshold: 0.8
    - fence: check-inventory
-     warehouse: "main"        # Changed
+     warehouse: "regional"
    - fence: calculate-pricing
```

Click History → select two versions → Compare.
Shared Components
Create reusable sub-pipelines:
```yaml
# Define once
component: standard-error-handling
steps:
  - fence: log-error
  - fence: notify-ops
  - fence: update-status

# Use anywhere
chain:
  - fence: risky-operation
    on_error: standard-error-handling
```

Changes to the component propagate to all pipelines using it.
Team Ownership
Assign pipelines to teams:
```yaml
ownership:
  team: platform-eng
  oncall: "@platform-oncall"
  escalation: "@platform-leads"
```

Alerts and approval requests route to the right people.
Task Modes
🚢 Taka-Tuka Land
Preset UI configurations for specific tasks.
Available Modes
| Mode | What It Shows |
|---|---|
| Build | Canvas, palette, property editor |
| Debug | Trace panel, context inspector, diff view |
| Monitor | Run history, metrics, alerts |
| Review | Definition diff, comments, approval controls |
Switch modes from the header or with keyboard shortcuts.
Debug Mode Deep Dive
When you enter Debug mode:
- Trace panel expands to full width
- Context inspector shows current step's full state
- Diff view highlights changes
- Irrelevant panels collapse
Everything you need to answer "why did this fail?" in one view.
Custom Modes
Create your own mode presets:
- Arrange panels how you like
- Click Save Layout → New Mode
- Name it ("My Investigation Mode")
- Access from the mode menu
Notifications
🚢 Taka-Tuka Land
Structured, attention-friendly alerting.
Notification Channels
Configure where notifications go:
```yaml
notifications:
  email:
    - ops@company
  slack:
    - channel: "#pipeline-alerts"
      filter: errors_only
  pagerduty:
    - service: platform-critical
      filter: severity >= high
```

Batching & Digests
Avoid alert fatigue:
```yaml
batching:
  window: 5m
  max_batch: 10
  digest_after: 3   # After 3 similar alerts, send digest instead
```

Instead of 50 separate "pipeline failed" emails, you get one digest: "order-processor failed 50 times in the last hour."
Approval Surfaces
Pending approvals aggregate in one place:
- In-app: Approvals panel shows all pending across pipelines
- Email digest: Daily summary of pending approvals
- Slack: One message per approval, with approve/reject buttons
Each approval shows context, who requested, how long it's been waiting.
Alert Routing
Route different severities to different places:
```yaml
routing:
  - severity: critical
    channels: [pagerduty, slack-urgent]
  - severity: high
    channels: [slack, email]
  - severity: low
    channels: [email-digest]
```

Low-severity issues batch into daily digests. Critical issues page immediately.
Tags
```yaml
tags:
  - docs:manual
  - tool:villa-villekulla
  - pattern:pipeline
```

North

```yaml
slots:
  - slug: villa-villekulla
    context:
      - User manual for the pipeline IDE
  - slug: pippi-guide
    context:
      - CLI companion to the IDE
```

South

```yaml
slots:
  - slug: villa-villekulla-tutorial
    context:
      - Detailed tutorial examples
      - Linking tutorial as child of manual
  - slug: villa-villekulla-troubleshooting
    context:
      - Extended troubleshooting guide
```

Provenance
Technical Appendix
This appendix documents the underlying system that Villa Villekulla is built on.
The Oculus Foundation
Villa Villekulla runs on top of Oculus, a graph-based knowledge system where:
- Nodes are markdown files with embedded code fences
- Fences can be executed, producing data or presentation
- Interpolation (`${path.to.value}`) resolves before execution
- Includes (`!include other-node:section`) compose documents
This means pipeline definitions benefit from the same composition model:
- Global configuration nodes can feed into pipeline parameters
- Pipeline YAML undergoes interpolation before parsing
- External values are resolved at load time (early binding)
Parameter Binding
Block parameters can be bound from multiple sources:
```yaml
# Static value (hardcoded)
timeout: 30

# From pipeline context (runtime)
customer_id: ${context.order.customer_id}

# From graph (early binding, resolved at load time)
api_url: ${config-node:api.yaml.production_url}
default_warehouse: ${settings:inventory.yaml.default_warehouse}
```

Early Binding Rule: External graph references (`${node:path}`) resolve when the pipeline loads, not when it runs. This means:
- External configuration is snapshotted at pipeline start
- Changes to config nodes don't affect running pipelines
- If you need dynamic external data, load it via a source block
Context Binding: Context references (${context.path}) resolve at runtime as data flows through.
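A sketch of the two resolution passes; `read_graph_value` is a hypothetical stand-in for the real graph lookup:

```python
import re

PATTERN = re.compile(r"\$\{([^}]+)\}")

def read_graph_value(node: str, path: str) -> str:
    """Hypothetical stand-in for the actual graph lookup."""
    return {"global-config": {"api.yaml.base_url": "https://api.example.com"}}[node][path]

def resolve(template: str, context: dict, phase: str) -> str:
    """Two-pass interpolation: graph refs at load time, context refs at runtime."""
    def replace(match):
        ref = match.group(1)
        if ":" in ref and phase == "load":
            node, path = ref.split(":", 1)
            return str(read_graph_value(node, path))  # early binding: snapshot now
        if ref.startswith("context.") and phase == "run":
            value = context
            for key in ref.removeprefix("context.").split("."):
                value = value[key]
            return str(value)
        return match.group(0)  # leave untouched for the other phase
    return PATTERN.sub(replace, template)

loaded = resolve("${global-config:api.yaml.base_url}/orders/${context.order.id}", {}, phase="load")
ran = resolve(loaded, {"order": {"id": "ORD-123"}}, phase="run")
assert ran == "https://api.example.com/orders/ORD-123"
```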
Initial Context Seeding
The pipeline's initial context comes from its configuration section:
````markdown
# In the pipeline markdown file

## Configuration

```yaml
initial_context:
  environment: production
  api_base: ${global-config:api.yaml.base_url}
  default_timeout: 30
  feature_flags:
    new_pricing: true
```
````
This initial context:
1. Has graph interpolation resolved at load time
2. Merges with any runtime parameters passed to the pipeline
3. Becomes the starting context for the first block
If you need external lookups during execution, seed them in initial context. The pipeline itself stays self-contained—all external data enters through the front door.
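A minimal sketch of that seeding order, assuming runtime parameters win over the resolved defaults (the manual says they merge but doesn't state precedence):

```python
def seed_context(initial_context: dict, runtime_params: dict) -> dict:
    """Graph refs in initial_context are already resolved by load time;
    runtime parameters override matching keys (assumed precedence)."""
    return {**initial_context, **runtime_params}

resolved_initial = {"environment": "production", "default_timeout": 30}
print(seed_context(resolved_initial, {"default_timeout": 5}))
# {'environment': 'production', 'default_timeout': 5}
```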
---
### Appendix A: Pipeline Markdown Schema
A pipeline is a markdown file with this structure:
````markdown
# Pipeline Name

Description of what this pipeline does. This prose section
is documentation—shown in `pippi describe` and the UI.

## Configuration [config]

```yaml
# Initial context (merged with runtime params)
initial_context:
  key: value
  external_value: ${other-node:path}

# Canvas settings (for grid view)
canvas:
  width: 12
  height: 8

# Block display overrides
overrides:
  B3:
    emoji: "🔍"
    label: "Lookup"
```

## Pipeline [pipeline]

```yaml
# Option 1: Sequential chain (simple pipelines)
source:
  fence: data-loader
  params:
    path: /data/orders.json
chain:
  - fence: validate-order
  - fence: enrich-customer
    customer_api: ${context.api_base}/customers
  - fence: calculate-total
output: format-confirmation

# Option 2: Grid coordinates (complex flows)
# See Appendix B for grid syntax
```

## Execution History [executions]

| Run | Started | Status | Duration |
|---|---|---|---|

## Notes

Any additional documentation, gotchas, tips.

## Tags

- tool:villa-villekulla
- pattern:pipeline
- domain:orders
````
### Section Purposes
| Section | Purpose |
|---------|---------|
| H1 + prose | Name and documentation |
| Configuration | Initial context, canvas settings, display overrides |
| Pipeline | The actual flow definition (chain or grid) |
| Execution History | Virtual table, auto-populated |
| Notes | Additional documentation |
| Tags | Discoverability |
---
### Appendix B: Grid Coordinate Syntax
The grid format compiles down to the same execution model as chain format. It's an intermediate representation that preserves visual layout.
#### Coordinate System
Cells are addressed as `{column}{row}`:
- Columns: A-P (1-16)
- Rows: 1-16
- Example: `B3` = column 2, row 3
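The address arithmetic, as a sketch:

```python
def parse_cell(address: str) -> tuple[int, int]:
    """Convert a cell address like 'B3' to 1-based (column, row)."""
    column = ord(address[0].upper()) - ord("A") + 1  # A-P -> 1-16
    row = int(address[1:])                            # remainder is the row
    return column, row

assert parse_cell("B3") == (2, 3)
assert parse_cell("P16") == (16, 16)
```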
#### Grid Definition
```yaml
format: grid
canvas: [12, 8]   # width × height

blocks:
  A1:
    type: source
    fence: data-loader
    config:
      path: /data/orders.json

  C1:
    type: block
    fence: validate-order

  E1:
    type: gate
    fence: check-threshold
    condition: "total > 1000"

  E0:   # Row above gate = true path
    type: connector
    direction: up

  E2:   # Row below gate = false path
    type: connector
    direction: down

  G0:
    type: block
    fence: high-value-flow

  G2:
    type: block
    fence: standard-flow

  A4:
    type: landing-pad
    catches:
      type: ValidationError

  C4:
    type: block
    fence: log-error

# Explicit connections (optional - inferred from adjacency)
connections:
  - from: A1
    to: C1
  - from: E0
    to: G0
  - from: E2
    to: G2

# Compensation mappings
compensation:
  C1: rollback-validation
  E1: release-reservation
```

#### Block Types
| Type | Behavior |
|---|---|
| `source` | Entry point, no inputs |
| `block` | Standard transform, 180° flow |
| `connector` | Direction change, identity transform |
| `gate` | Decision, perpendicular outputs (up=true, down=false) |
| `landing-pad` | Error catcher, activates on match |
| `sink` | Exit point, no outputs |
#### Flow Rules (Implicit Connections)
If no explicit connections defined:
- Adjacency: Horizontally adjacent blocks connect automatically
- Direction: Flow follows block orientation
- Connectors: Change direction explicitly
- Gates: Output perpendicular (up/down from horizontal input)
#### Compilation to Chain

The grid compiles to a directed graph, then serializes to chain format:

```
Grid:              Compiles to:
A1 → C1 → E1       source: A1
      ↓↑           chain:
     E2 E0           - C1
      ↓  ↓           - E1:
     G2  G0              true:  [G0, sink]
                         false: [G2, sink]
```

The grid format preserves:
- Visual positioning (for UI reload)
- Explicit block placement
- Spatial relationships
The compiled chain preserves:
- Execution order
- Branching logic
- Error handling paths
Both represent the same pipeline. Grid is the authoring format; chain is the execution format.
### Appendix C: Pippi Command Grammar
The pippi CLI uses this grammar for fence and pipeline references:
```
# Basic fence execution
pippi run <fence-ref>
pippi run <fence-ref> --params '<json>'

# Fence reference formats
<fence-ref> ::= <label>                # Global label lookup
              | <node-slug>:<label>    # Specific node + label
              | <node-slug>:<index>    # Specific node + position
              | <fence-id>             # Direct fence ID

# Examples
pippi run validate-order               # Label lookup
pippi run order-pipeline:validate      # Node + label
pippi run order-pipeline:3             # Node + index (0-based)

# Pipeline operations
pippi run <pipeline-ref>                     # Run pipeline
pippi run <pipeline-ref> --from-step <n>     # Start from step
pippi run <pipeline-ref> --context '<yaml>'  # Override initial context

# Session operations
pippi show <session-id>                    # Show session detail
pippi show <session-id> --step <n>         # Show context at step
pippi diff <session-a> <session-b>         # Compare sessions
pippi replay <session-id> --from-step <n>  # Re-run from step

# Pipeline management
pippi list                     # List available pipelines
pippi describe <fence-ref>     # Show fence documentation
pippi validate <pipeline-ref>  # Check pipeline validity
```

#### Path Syntax for Peek/Poke
Graph references use dot-notation:
```
<path> ::= <section>                    # Section prose
         | <section>.<fence-type>       # First fence of type
         | <section>.<fence-type>.<key> # Key in fence
         | <section>.<label>.<key>      # Labeled fence + key

# Examples
config.yaml.api_url          # api_url from first yaml in config
settings.production.timeout  # timeout from 'production' labeled fence
```

#### Interpolation Syntax
```
${<reference>}

<reference> ::= context.<path>      # Runtime context value
              | <node-slug>:<path>  # External node value (early bound)
              | <path>              # Local document path

# Examples
${context.order.customer_id}        # Runtime: from flowing context
${global-config:api.yaml.base_url}  # Load-time: from external node
${config.yaml.timeout}              # Load-time: from this document
```

### Appendix D: Gridflow YAML Syntax
The gridflow fence stores spatial pipeline definitions. The shape is defined by block positions—no explicit canvas size needed.
Example: Order Processing with Approval Gate
```
     1     2     3     4     5     6     7
  ┌───┬───┬───┬───┬───┬───┬───┐
A │   │   │ → │APR│ ↓ │   │   │
  ├───┼───┼───┼───┼───┼───┼───┤
B │SRC│ → │GAT│   │ → │FUL│SNK│
  ├───┼───┼───┼───┼───┼───┼───┤
C │   │   │ → │ → │ ↑ │   │   │
  ├───┼───┼───┼───┼───┼───┼───┤
D │   │   │   │   │   │   │PAD│  ← landing pad
  └───┴───┴───┴───┴───┴───┴───┘
```

```gridflow
blocks:
  # Source - emits initial context
  B1:
    fence: source
    emits: { order_id: "ORD-123", total: 1500 }

  # Flow to gate
  B2: arrow-right

  # Conditional gate - high value orders need approval
  B3:
    fence: threshold-gate
    condition: "context.total > 1000"

  # True path (90° from input = up when entering from left)
  A3: arrow-right
  A4:
    fence: approval-gate
    prompt: "Approve order ${context.order_id}?"
    body: "Total: $${context.total}"
  A5: arrow-down

  # False path (270° from input = down when entering from left)
  C3: arrow-right
  C4: arrow-right
  C5: arrow-up

  # Merge point - arrow accepts from above OR below
  B5: arrow-right

  # Continue to completion
  B6:
    fence: fulfill-order
    compensate: cancel-order   # runs on unwind
  B7: sink

  # Landing pad - catches errors, lives on grid like any fence
  D7:
    fence: landing-pad
    catches: [payment-failed, inventory-error]
```

Flow Rules (Relative to Input Direction)
| Fence Type | Accepts From | Outputs To |
|---|---|---|
| Regular fence | any direction | 180° (opposite side) |
| Arrow | any direction except output | fixed direction |
| Binary gate | any direction | 90° or 270° based on condition |
| Trinary gate | any direction | 90°, 180°, or 270° |
| Landing pad | teleport (error state) | 180° from entry |
Gate outputs are relative to input:

```
Input from LEFT:          Input from TOP:
  90° (CW)   → DOWN         90°  → RIGHT
  180°       → RIGHT        180° → DOWN
  270° (CCW) → UP           270° → LEFT
```

Arrows as Merge Points
Arrows accept input from any direction except their output. This naturally handles conditional merges:
Both paths feed into B5 (→):
- True path: ↓ at A5 outputs down → B5 accepts from above
- False path: ↑ at C5 outputs up → B5 accepts from below
- B5 always outputs right → both paths continue to FUL

No special merge fence needed for conditional branches (only one path executes).
Landing Pads as Portals
Landing pads are just fences on the grid. When an error occurs:
- Executor enters error state
- Compensation stack unwinds (reverse order)
- Landing pad "captures" the error (like a portal)
- Execution continues from landing pad
No special syntax—they work by being available to capture weird states.
Pipelines as Fences (Modules)
A gridflow is itself a fence. Drop it onto another grid:
```yaml
blocks:
  C3:
    fence: order-validation-workflow   # another gridflow
    # context flows in, transformed context flows out
```

The palette has a Modules tab showing available pipelines. Sub-workflows are first-class.
Foreach (Iteration)
foreach is a higher-order fence that accepts another fence (single or chain) and applies it to each item:
```yaml
blocks:
  C3:
    fence: foreach
    over: "context.line_items"     # array to iterate
    apply: validate-line-item      # fence to run per item
    collect: "context.validated"   # where results go
```

The applied fence receives each item as its context and returns the transformed item. Results collect into the specified path. Iteration is sequential by default; parallel execution is a fence config option.
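A sketch of the higher-order behavior in the sequential case; the helper and its path handling are illustrative only:

```python
def foreach(context: dict, over: str, apply, collect: str) -> dict:
    """Apply a fence to each item of a context array, collecting results."""
    items = context
    for key in over.removeprefix("context.").split("."):
        items = items[key]
    results = [apply(item) for item in items]  # sequential by default
    target = collect.removeprefix("context.")
    context[target] = results                  # assumes a top-level collect path
    return context

validated = foreach(
    {"line_items": [{"sku": "A"}, {"sku": "B"}]},
    over="context.line_items",
    apply=lambda item: {**item, "valid": True},
    collect="context.validated",
)
assert validated["validated"][0] == {"sku": "A", "valid": True}
```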
Timeouts
Fences can specify timeout, with global default fallback:
```yaml
blocks:
  B3:
    fence: external-api-call
    timeout: 30s   # per-fence override

# Global default in pipeline config
defaults:
  timeout: 60s
```

Timeout triggers a catchable error. Landing pads can handle timeout like any other error type.
Error Context
When errors occur, the context includes consistent diagnostic info:
```yaml
error:
  type: timeout            # or: validation-error, api-error, etc.
  message: "Fence timed out after 30s"
  source: B3               # grid position where error occurred
  source_fence: external-api-call
  source_params:           # config that was passed to fence
    url: "https://api.example.com"
    method: POST
  timestamp: "2026-01-27T00:30:00Z"
  stack:                   # compensation stack at time of error
    - { position: B2, fence: reserve-inventory }
```

Landing pads receive this enriched context for logging, alerting, or recovery logic.
Expression Language
Conditions and interpolations use a minimal Python-safe expression syntax:
```yaml
# Conditions (gate configs, foreach filters)
condition: "context.total > 1000"
condition: "context.status in ['pending', 'review']"
condition: "len(context.items) > 0 and context.priority == 'high'"

# Interpolation in strings
prompt: "Order ${context.order_id} totals $${context.total}"
url: "https://api.example.com/orders/${context.order_id}"
```

Supported: comparisons, boolean logic, `in`, `len()`, arithmetic, string operations. No side effects, no imports, no function definitions.
Triggering Pipelines
Pipelines are fences. Anything can trigger them:
- pippi CLI: `pippi run my-pipeline --context '{...}'`
- pippi daemon: Watches for events, schedules, webhooks
- Another pipeline: Drop as fence on grid
- API call: POST to executor endpoint
- Event bus: STUFFY subscription triggers run
pippi is the human-facing tool, but pipelines don't know or care what triggered them. Context comes in, transformed context comes out.
Execution Model
```
State: { x, y, direction, context, compensation_stack }

Loop:
1. Execute fence at (x, y)
2. Push compensation if fence defines one
3. If error → unwind stack → teleport to landing pad
4. If gate → evaluate → set direction (90° or 270°)
5. Apply flow rules → compute next (x, y, direction)
6. If out of bounds or no receiver → halt
```

Comparison with Pipeline Syntax
| Aspect | `pipeline` | `gridflow` |
|---|---|---|
| Topology | Linear only | 2D spatial |
| Branching | Not supported | Gates + arrows |
| Landing pads | Not supported | Spatial portals |
| Sub-workflows | Inline only | First-class modules |
| Authoring | Text | Visual IDE or YAML |
### Appendix E: Live Rendering & Interactive Capabilities
🚢 Taka-Tuka Land
When a pipeline produces live, interactive output (dashboards, editors, visualizations), the rendering architecture follows a capability-based model.
Websockets as Capabilities
A websocket connection is not a special communication channel—it's a capability object placed in context:
```yaml
context:
  render:
    type: websocket
    url: ws://localhost:8080/session/abc123
    capabilities:
      - send_event
      - subscribe
      - update_dom
```

The executor doesn't know anything about websockets. It just sees a context entry with methods it can call.
Component Blooming
When a fence wants to render interactive output, it "blooms"—registering itself with STUFFY (the event bus):
- Fence starts and checks for the `context.render` capability
- Registration: Fence tells STUFFY "I'm component X at session Y"
- Render: Fence emits initial DOM/state through the capability
- Listen: Fence subscribes to events targeted at its component ID
```
┌─────────────┐       ┌─────────────┐       ┌─────────────┐
│   Fence     │──────▶│   STUFFY    │◀─────▶│   Browser   │
│  (blooms)   │       │ (event bus) │       │ (websocket) │
└─────────────┘       └─────────────┘       └─────────────┘
```

Bidirectional Events
All communication flows through STUFFY:
| Direction | Flow | Example |
|---|---|---|
| Out | Fence → STUFFY → Browser | Update chart data |
| In | Browser → STUFFY → Fence | User clicked button |
Event format:
```yaml
event:
  type: user_interaction
  component: chart-abc123
  payload:
    action: click
    target: data_point_7
```

Why This Architecture?
Executor stays tiny: No websocket code in the core. The executor just passes context through fences.
Fences compose: A fence that renders a chart doesn't need to know about the transport. It just uses the capability.
Transport agnostic: The render capability could be backed by websocket, SSE, polling, or direct DOM manipulation in tests.
Testable: Mock the capability, assert the events sent.
Session Lifecycle
```
1. User opens pipeline view
2. System creates session, injects render capability into context
3. Pipeline runs, fences bloom as they execute
4. Components send updates through capability → STUFFY → browser
5. User interactions flow back: browser → STUFFY → fence handlers
6. Pipeline completes, session can persist for continued interaction
7. User closes view, session cleans up
```

Implementation Notes
The render capability is injected by the session manager, not the executor:
```python
# Session manager (knows about websockets)
context['render'] = WebsocketCapability(session_id, ws_url)

# Executor (doesn't know about websockets)
result = fence.execute(context)  # Just passes context through
```

This keeps the executor simple and makes the rendering system pluggable.
Document
- Status: 🔴 Unverified
Changelog
- 2026-01-26 17:19: Node created by mcp - Creating the user manual for Villa Villekulla pipeline IDE - Commodore 64 manual style, documentation-first development