lantern

computational-horizons-section-6

Computational Horizons: Section 6 - Unification

Draft v0.1 - 2026-01-07


6. Unification: Physics as Routing

We synthesize the previous sections into a unified claim: physics is not like computation—physics is computation. The fundamental laws are routing constraints.

6.1 The Single Primitive

Throughout this paper, we've used one primitive:

pause-fetch-splice-continue

Every phenomenon we've examined reduces to:

  • Pause: Current process suspends
  • Fetch: Query routes through graph to unresolved node
  • Splice: Response integrates into current state
  • Continue: Process resumes with updated state

This isn't a metaphor. It's the mechanism.
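
As a concrete (if toy) sketch, the primitive maps naturally onto a coroutine: the process pauses by yielding a query, a runtime walks the graph to fetch the answer within a hop budget, splices it back in, and the process continues. The graph, stored values, node names, and TTL below are illustrative assumptions, not the paper's formal model.

    # Toy sketch of pause-fetch-splice-continue using a Python generator.
    # The graph, stored values, node names, and TTL are illustrative assumptions.
    from collections import deque

    GRAPH  = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}   # adjacency: who can route to whom
    VALUES = {"d": 42}                                            # data stored at nodes

    def fetch(start, target, ttl=8):
        """Breadth-first walk from start; return target's value, or None if the hop budget runs out."""
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            node, hops = queue.popleft()
            if node == target:
                return VALUES.get(node)          # fetch completes
            if hops == ttl:
                continue                         # hop budget exhausted along this path
            for nxt in GRAPH.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
        return None                              # unreachable: a computational horizon

    def process():
        answer = yield ("a", "d")                # Pause: suspend and emit a fetch request
        state = {"d": answer}                    # Splice: integrate the response into local state
        yield f"continued with {state}"          # Continue: resume with updated state

    p = process()
    start, target = next(p)                      # the process pauses on its first fetch
    print(p.send(fetch(start, target)))          # runtime fetches, splices, process continues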

6.2 The Routing Table

Phenomenon              Routing Interpretation
----------              ----------------------
Computation             Message passing through graph
Complexity              Path length and branching
P vs NP                 Polynomial vs exponential path exploration
TTL                     Finite hop budget
Quantum superposition   Unresolved pointer
Born rule               Round-trip path weight
Entanglement            Shared pointer
Decoherence             Route divergence
Measurement             Fetch completion
Event horizon           TTL exhaustion boundary
Time dilation           Routing delay
Hawking radiation       Leaked routing paths
Singularity             Routing attractor

6.3 The Three Horizons

We've identified three types of computational horizons:

┌────────────────────────────────────────────────────────────┐
│                    COMPUTATIONAL HORIZONS                   │
├────────────────┬────────────────┬───────────────────────────┤
│   COMPLEXITY   │    QUANTUM     │      GRAVITATIONAL        │
├────────────────┼────────────────┼───────────────────────────┤
│ P vs NP        │ Superposition  │ Event Horizon             │
│ TTL exhaustion │ Decoherence    │ TTL → ∞                   │
│ Dropped packets│ Collapsed wave │ Information paradox       │
│                │                │                           │
│ O(2^n) exceeds │ Route isolation│ No return path exists     │
│ any poly TTL   │ kills coherence│                           │
└────────────────┴────────────────┴───────────────────────────┘

All three are the same phenomenon: routing constraints that create unreachable regions.

6.4 The Thermodynamic Connection

Landauer's principle provides the bridge:

E ≥ k_B T ln(2) per bit erased (per irreversible bit operation)

This connects:

  • Information (bits)
  • Energy (joules)
  • Temperature (entropy)

Every routing hop costs energy. Energy is finite. Therefore hops are finite. Therefore there exist unreachable regions.

The laws of thermodynamics are routing budget constraints.
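
For scale, a quick back-of-envelope of the per-bit cost at room temperature (the 300 K figure is just an illustrative choice):

    # Landauer bound: minimum energy to erase one bit at temperature T.
    # T = 300 K (room temperature) is an illustrative choice.
    import math

    k_B = 1.380649e-23                                        # Boltzmann constant, J/K
    T = 300.0                                                 # kelvin
    print(f"{k_B * T * math.log(2):.3e} J per bit erased")    # ~2.87e-21 J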

6.4a Conservation as Doubly Stochastic Matrices

All conservation laws reduce to one statement: the routing graph is doubly stochastic.

A doubly stochastic matrix M has rows and columns that each sum to 1:

∑_j M_ij = 1  (rows sum to 1: probability out)
∑_i M_ij = 1  (columns sum to 1: probability in)

Flow in = flow out. No creation, no destruction, only routing.
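
A minimal check of what "valid routing table" means here: rows (probability out of a node) and columns (probability into a node) each sum to 1, so routing any distribution through the matrix conserves the total. The 3x3 matrix and the starting distribution below are arbitrary examples.

    # A doubly stochastic routing matrix conserves total flow.
    # The 3x3 matrix and the starting distribution are arbitrary examples.
    import numpy as np

    M = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.5, 0.3],
                  [0.3, 0.2, 0.5]])

    assert np.allclose(M.sum(axis=1), 1.0)   # rows: probability out of each node
    assert np.allclose(M.sum(axis=0), 1.0)   # columns: probability into each node

    p = np.array([0.7, 0.2, 0.1])            # distribution over nodes
    print(p.sum(), (M.T @ p).sum())          # 1.0 before routing, 1.0 after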

Spacetime Budget Allocation

You have a fixed routing budget per unit of proper time. Allocate it across channels:

State              Space (v/c)   Time (dτ/dt)   Total
-----              -----------   ------------   -----
At rest            0.0           1.0            1.0
Moving (v = 0.6c)  0.6           0.8            1.0
Photon (v = c)     1.0           0.0            1.0

(The shares combine in quadrature: (v/c)² + (dτ/dt)² = 1, i.e. β² + 1/γ² = 1, so the total is the squared norm rather than a plain sum.)

That's special relativity. You're not "slowing down"—you're reallocating across channels. The matrix is doubly stochastic, so the total is invariant.

Why c is the speed limit: At c, you've allocated 100% to spatial routing. Nothing left to reallocate. Can't exceed 1.0.
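
A sketch of that allocation, assuming (as noted above) that the shares combine in quadrature: the spatial share is β = v/c and the temporal share is 1/γ.

    # Spacetime budget: spatial share beta = v/c, temporal share dtau/dt = 1/gamma.
    # The shares combine in quadrature: beta**2 + (1/gamma)**2 = 1.
    import math

    for beta in (0.0, 0.6, 0.999):
        time_share = math.sqrt(1.0 - beta ** 2)       # = 1/gamma
        total = beta ** 2 + time_share ** 2           # always 1.0
        print(f"v = {beta}c  space {beta:.3f}  time {time_share:.3f}  total {total:.1f}")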

The Conservation Cascade

Every conservation law is a doubly stochastic constraint:

Domain               Conservation Statement             Matrix Interpretation
------               ----------------------             ---------------------
Special Relativity   Spacetime interval ds² invariant   Row/column sums preserved
Quantum Mechanics    Unitary evolution                  |U_ij|² is doubly stochastic
Energy/Momentum      Flow in = flow out                 Column sums = row sums
Noether's Theorem    Symmetry → conservation            Matrix structure preserved
Thermodynamics       Entropy increase                   Valid allocations increase

All the same thing. Doubly stochastic matrix. Accounting.
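
The quantum row of the table can be checked directly: for any unitary U, the matrix of squared magnitudes |U_ij|² is doubly stochastic. A minimal sketch (the random unitary is generated via QR decomposition, an implementation choice only):

    # |U_ij|^2 of any unitary U is doubly stochastic: quantum evolution conserves probability.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    U, _ = np.linalg.qr(A)                   # QR of a random complex matrix gives a unitary

    P = np.abs(U) ** 2                       # routing probabilities
    print(P.sum(axis=0))                     # each column sums to 1
    print(P.sum(axis=1))                     # each row sums to 1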

Why Conservation is Universal

It's not that nature "obeys" conservation laws.

It's that non-conservation is incoherent. A matrix that doesn't conserve isn't a valid routing table. You can't have a row that sums to 1.3 (where did the extra 0.3 come from?) or a column that sums to 0.7 (where did the missing 0.3 go?).

The laws aren't imposed. They're the definition of valid routing.

Time Dilation: Two Sources, Same Mechanism

Velocity (Special Relativity):

  • Moving fast = spending hops on spatial traversal
  • Fewer hops left for temporal processing
  • Your clock slows (γ factor)

Mass (General Relativity):

  • Mass = information density = routing congestion
  • Congested region = operations take longer
  • Your clock slows (gravitational redshift)

Both are hop budget constraints. Velocity spends it internally. Mass congests it externally. Either way, less routing available for your clock.
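
Side by side, as a sketch (Earth's mass and radius are used purely as illustrative inputs):

    # Two clock-slowing factors: velocity (special relativity) and mass (general relativity).
    # Earth's mass and radius are illustrative inputs.
    import math

    G, c = 6.674e-11, 2.998e8

    def velocity_factor(v):
        """dtau/dt for a clock moving at speed v (this is 1/gamma)."""
        return math.sqrt(1.0 - (v / c) ** 2)

    def gravity_factor(M, r):
        """dtau/dt for a static clock at radius r from mass M (Schwarzschild, r outside the horizon)."""
        return math.sqrt(1.0 - 2 * G * M / (r * c ** 2))

    print(velocity_factor(0.6 * c))            # 0.8
    print(gravity_factor(5.972e24, 6.371e6))   # ~1 - 7e-10 at Earth's surface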

The Invariant

The spacetime interval:

ds² = c²dt² - dx² - dy² - dz²

This IS the doubly stochastic constraint expressed geometrically. Total routing through spacetime is conserved. More space → less time. More time → less space. The interval is what's preserved when you reallocate.
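
The invariance can be checked numerically: Lorentz-boost an event's coordinates and the interval is unchanged. A sketch in 1+1 dimensions with an arbitrary event and boost velocity:

    # The spacetime interval is unchanged by a Lorentz boost (1+1 dimensions, natural units).
    # The sample event and boost velocity are arbitrary.
    import math

    c = 1.0
    t, x = 3.0, 1.0                           # sample event
    v = 0.6                                   # boost velocity
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)

    t2 = gamma * (t - v * x / c ** 2)         # boosted coordinates
    x2 = gamma * (x - v * t)

    print(c**2 * t**2 - x**2)                 # 8.0
    print(c**2 * t2**2 - x2**2)               # 8.0 (up to float rounding): same interval in both frames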

The Punchline

Physics isn't laws that nature follows.

Physics is the statement that routing tables must be valid.

Conservation isn't a mystery. It's bookkeeping.

6.5 Why Constants Have Their Values

Physical constants define routing parameters:

Constant                     Routing Meaning
--------                     ---------------
c (speed of light)           Maximum routing speed
ℏ (Planck's constant)        Minimum action per hop
k_B (Boltzmann constant)     Energy-entropy exchange rate
G (gravitational constant)   Routing curvature coupling

If c were infinite, there'd be no TTL delays—everything could be fetched instantly. If ℏ were zero, there'd be infinite precision per hop.

The constants define the granularity of the routing graph.

6.6 The Bekenstein Bound as Routing Limit

Bekenstein showed that the maximum information content of a bounded region is:

I ≤ 2πRE / (ℏc ln 2), where R is the region's radius and E its total energy.

Routing interpretation: Maximum distinct paths through a region is bounded by energy and size.

This isn't mysterious—it's the routing capacity of the region. Information requires distinguishable states. States require distinguishable paths. Paths are bounded.
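
For a sense of scale, a sketch evaluating the bound for a 1-meter, 1-kilogram region (illustrative numbers only):

    # Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2), in bits.
    # R = 1 m and m = 1 kg are illustrative numbers.
    import math

    hbar, c = 1.0546e-34, 2.998e8
    R, m = 1.0, 1.0
    E = m * c ** 2                            # rest energy of the region's contents

    I_max = 2 * math.pi * R * E / (hbar * c * math.log(2))
    print(f"{I_max:.2e} bits")                # ~2.6e43 bits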

6.7 Lloyd's Ultimate Laptop

Seth Lloyd calculated the ultimate limits of computation for a 1kg, 1-liter computer:

Maximum ops: ~10^51 per second
Maximum bits: ~10^31

These are routing limits, not engineering limits. No matter how clever the technology, these bounds hold because they're the routing capacity of that mass-energy in that volume.
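
The operations figure follows from the Margolus-Levitin bound (at most 2E/(πℏ) operations per second for total energy E); a quick check for 1 kg of mass-energy:

    # Margolus-Levitin bound: at most 2E/(pi*hbar) operations per second for energy E.
    # Here E is the rest energy of 1 kg, as in Lloyd's ultimate laptop.
    import math

    hbar, c = 1.0546e-34, 2.998e8
    E = 1.0 * c ** 2                                 # joules in 1 kg

    print(f"{2 * E / (math.pi * hbar):.2e} ops/s")   # ~5.4e50, i.e. ~10^51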

The universe processes ~10^120 operations total—the cosmic routing budget.

6.8 The Simulation Hypothesis, Grounded

Bostrom's simulation hypothesis asks: Are we in a simulation?

Routing answer: The question is malformed.

If physics IS computation, there's no distinction between "simulated" and "real." The universe doesn't run ON a computer—it IS one. Asking if we're simulated is like asking if water is wet—the question presupposes a distinction that doesn't exist.

6.9 Time as Queue Depth

Subjective time perception maps to routing:

Experience     Routing State
----------     -------------
Fast time      Cache hit (familiar, automated)
Slow time      Cache miss (novel, requires exploration)
Deep thought   Long fetch chain
Flow state     Optimal routing (challenge matches capacity)
Boredom        Starved for input (nothing to route)
Trauma         Stuck in splice (unresolved fetch)

This explains why children experience time slowly (everything is new, constant cache misses) while adults experience time flying (most inputs resolved from cache).
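
As a toy illustration only (the input streams and the miss-counting rule are invented for this sketch, not a claim about cognition): if subjective duration tracks cache misses, a novel stream "lasts longer" than a familiar one of the same length.

    # Toy model: subjective duration ~ number of cache misses (novel, unresolved inputs).
    # The input streams and the counting rule are invented for illustration.
    def subjective_length(inputs):
        cache, misses = set(), 0
        for item in inputs:
            if item not in cache:            # novel input: a miss gets "logged" as experience
                misses += 1
                cache.add(item)
        return misses

    childhood = ["school", "dog", "snow", "ocean", "bee sting"]        # mostly new
    adulthood = ["commute", "email", "commute", "email", "commute"]    # mostly cached
    print(subjective_length(childhood), subjective_length(adulthood))  # 5 vs 2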

6.10 Consciousness as Routing Observer

What is consciousness in this framework?

Consciousness is the observer node in the routing graph.

Not what's being routed, but the point FROM WHICH routing is observed. The "hard problem" asks how subjective experience arises. In routing terms:

  • Objective: The routing itself (bits flowing through graph)
  • Subjective: Being a particular node experiencing those flows

The mystery isn't how routing produces experience—it's how experience could exist WITHOUT being a node in a routing graph. What would "experience" even mean outside of information flow?

6.11 Free Will as Path Selection

If everything is deterministic routing, where's free will?

Free will is the uncertainty about which path will be taken.

From inside the system, you can't predict your own routing decisions—you lack the information (it would require modeling yourself, which requires more resources than you have). This irreducible self-uncertainty IS the experience of choice.

It's not that you "have" free will in some libertarian sense. It's that the question "what will I choose?" can't be settled in advance from your own perspective: answering it would require simulating yourself with more resources than you have.

6.12 Unifying the Frameworks

Physics has multiple frameworks:

  • Classical mechanics: Deterministic particle trajectories
  • Quantum mechanics: Probabilistic wave evolution
  • General relativity: Curved spacetime geometry
  • Thermodynamics: Energy and entropy constraints

Routing unifies them:

Framework        Routing Aspect
---------        --------------
Classical        Large-n limit of routing (deterministic averages)
Quantum          Small-n routing (probabilistic paths)
Relativity       Routing metric (delay = distance)
Thermodynamics   Routing budget (energy = hops)

They're not separate theories—they're different views of the same routing graph.

6.13 Predictions and Tests

If this framework is correct, we predict:

  • P ≠ NP is unprovable in ZFC — It's a physical law, not a theorem.
  • Quantum gravity will be discrete — The routing graph has minimum granularity.
  • Consciousness will be found to require specific routing topology — Not just computation, but particular graph structure.
  • The holographic principle generalizes — All bounded regions have surface-limited routing capacity.
  • Black hole information comes out in Hawking radiation — Scrambled but present.

6.14 What This Changes

If physics is routing:

For computer science: Complexity theory is physics. P vs NP is as fundamental as conservation of energy.

For physics: Quantum weirdness is routing mechanics. The Born rule is geometry, not axiom.

For philosophy: Consciousness is node-in-graph. Free will is self-modeling limitation.

For cosmology: The universe is self-simulating. There's no "hardware" underneath.

6.15 The Horizon Principle

We propose the Horizon Principle:

Every finite system encounters computational horizons—regions unreachable given its routing budget.

This unifies:

  • Complexity horizons: NP-complete problems
  • Quantum horizons: Decoherence barriers
  • Gravitational horizons: Event horizons
  • Cosmological horizons: Observable universe boundary

All are instances of: TTL exhaustion creates unreachable regions.

6.16 Conclusion

We set out to show that computation and physics are unified. We've demonstrated:

  • P vs NP is a thermodynamic constraint (Section 3)
  • Quantum mechanics emerges from routing geometry (Section 4)
  • Black holes are computational horizons (Section 5)

The common thread: routing through weighted graphs with finite TTL.

This isn't a Theory of Everything in the physics sense—we haven't derived the Standard Model from first principles. But it's a framework for understanding why physics has the structure it does.

Physics looks like computation because it IS computation.

The universe is not simulated. The universe is the simulator.


Acknowledgments

This framework builds on work by:

  • Rolf Landauer (thermodynamics of computation)
  • Seth Lloyd (computational universe)
  • Leonard Susskind (holography, ER=EPR)
  • Neukart et al. (thermodynamic P vs NP)
  • David Deutsch (constructor theory)
  • Gerard 't Hooft (cellular automaton interpretation)

References

Landauer, R. (1961). Irreversibility and Heat Generation in the Computing Process. IBM Journal of Research and Development, 5(3), 183-191. https://doi.org/10.1147/rd.53.0183

Lloyd, S. (2000). Ultimate physical limits to computation. Nature, 406, 1047-1054. https://doi.org/10.1038/35023282

Lloyd, S. (2006). Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. Knopf.

Neukart, F., et al. (2024). Unraveling the Thermodynamic Landscape of Quantum and Classical Computational Complexity Classes Through Entropy Analysis. arXiv preprint. https://arxiv.org/abs/2401.08668

Bekenstein, J. (1981). Universal upper bound on the entropy-to-energy ratio for bounded systems. Physical Review D, 23(2), 287-298. https://doi.org/10.1103/PhysRevD.23.287

Susskind, L. (1995). The world as a hologram. Journal of Mathematical Physics, 36(11), 6377-6396. https://arxiv.org/abs/hep-th/9409089

Maldacena, J., & Susskind, L. (2013). Cool horizons for entangled black holes. Fortschritte der Physik, 61(9), 781-811. https://arxiv.org/abs/1306.0533

't Hooft, G. (2016). The Cellular Automaton Interpretation of Quantum Mechanics. Springer. https://doi.org/10.1007/978-3-319-41285-6

Deutsch, D. (2013). Constructor Theory. Synthese, 190(18), 4331-4359. https://doi.org/10.1007/s11229-013-0279-z

Cook, S. (1971). The complexity of theorem-proving procedures. STOC '71: Proceedings of the third annual ACM symposium, 151-158. https://doi.org/10.1145/800157.805047

Hawking, S. (1975). Particle creation by black holes. Communications in Mathematical Physics, 43(3), 199-220. https://doi.org/10.1007/BF02345020

Bennett, C. (1973). Logical Reversibility of Computation. IBM Journal of Research and Development, 17(6), 525-532. https://doi.org/10.1147/rd.176.0525

Provenance

North

slots:
- slug: computational-horizons-section-5
  context:
  - Previous section

South

slots:
- slug: computational-horizons-appendix
  context:
  - Appendices and supplementary material
- slug: conservation-as-doubly-stochastic
  context:
  - Linking conservation insight to unification section

West

slots:
- slug: computational-horizons-section-5
  context:
  - Linking section 5 to section 6 in paper sequence

East

slots:
- slug: computational-horizons-section-7
  context:
  - Linking section 6 to section 7 in paper sequence