When You Have to Run the Experiment: Computational Irreducibility

Why Three Bodies Break the Math

Here’s a thing that bothers physicists more than they let on. We have Newton’s laws. We’ve had them for over 300 years. They’re exact, they’re beautiful, and they work. Give me the position and velocity of every particle, and I can—in principle—tell you where everything will be at any future time.

In principle.

Try it with three bodies. Three planets orbiting each other, or the sun and two planets, or three stars locked in gravitational dance. We have the equations. They’re not complicated. But here’s the thing: there’s no general solution. No formula you can plug numbers into and get the answer.

Why not? It’s not that we haven’t tried hard enough. It’s not that we need bigger computers or cleverer mathematicians. There is no formula. The three-body problem is genuinely unsolvable in closed form for almost all initial conditions.

This should disturb you. We know the rules perfectly. We can write them down. And yet we cannot predict where the bodies will be without actually simulating every step along the way. The equations are deterministic—the future is completely specified by the present—but the prediction requires running the whole thing forward, step by step.
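
To make that concrete, here is a minimal sketch of what "running it forward" means: three bodies with made-up masses and starting conditions, Newtonian gravity in the plane, and a simple semi-implicit Euler integrator grinding through small time steps. The numbers are arbitrary; the point is that the loop is the prediction.

```python
import numpy as np

# Illustrative units (G = 1) and made-up masses, positions, and velocities.
G = 1.0
masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

def accelerations(pos):
    """Pairwise Newtonian gravity acting on each body."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 1e-3
for _ in range(100_000):              # no closed-form shortcut: just march forward
    vel += accelerations(pos) * dt    # semi-implicit (symplectic) Euler step
    pos += vel * dt

print(pos)                            # the state at t = 100, found one step at a time
```

A fancier integrator buys accuracy per step, not an exit from the loop. To learn the state at t = 100, you compute every state before it.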

Nature doesn’t give us shortcuts.

Laplace’s Demon Meets Its Limits

There’s a famous thought experiment from 1814. Laplace imagined an intellect—a “demon”—that knew the position and momentum of every particle in the universe, along with all the laws of physics. Such a being, he argued, could predict everything that would ever happen. The future would be as clear as the past.

This was the dream of determinism taken to its logical conclusion. If physics is just particles bouncing according to rules, and you know all the particles and all the rules, then prediction should be straightforward. Just compute.

Machines work this way. Given identical inputs and identical starting states, an algorithm always produces identical outputs. This determinism is what makes computation reliable—you can trace any calculation, verify it, reproduce it exactly. The universe, we assumed, was just a very large machine.

But the dream has a problem, and it’s not quantum mechanics (though that doesn’t help). Even in a perfectly classical, deterministic world, prediction fails for most systems. Not because we can’t know the initial conditions precisely enough—though we can’t—but for a deeper reason.

Some systems cannot be predicted any faster than they actually run.

When You Can’t Skip Steps

Take a double pendulum. It’s just two arms connected at a joint, swinging under gravity. The equations of motion are entirely deterministic. Nothing random, nothing uncertain. You could write them out on a napkin.

But start two such pendulums with initial angles that differ by one part in a billion. Within seconds, their trajectories diverge completely. One swings left while the other goes right. They trace wildly different paths through phase space despite starting from almost the same point.
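
Here is a rough sketch of that experiment, using one standard form of the equal-mass, equal-length double-pendulum equations and a fixed-step Runge-Kutta integrator; the starting angles, time step, and run length are arbitrary choices made only to show the divergence.

```python
import numpy as np

g = 9.81  # equal masses and arm lengths (m = L = 1), angles measured from vertical

def derivs(state):
    """state = [theta1, omega1, theta2, omega2]; one common form of the equations."""
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 3.0 - np.cos(2.0 * d)
    a1 = (-3.0 * g * np.sin(t1) - g * np.sin(t1 - 2.0 * t2)
          - 2.0 * np.sin(d) * (w2**2 + w1**2 * np.cos(d))) / den
    a2 = (2.0 * np.sin(d) * (2.0 * w1**2 + 2.0 * g * np.cos(t1)
          + w2**2 * np.cos(d))) / den
    return np.array([w1, a1, w2, a2])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivs(state)
    k2 = derivs(state + 0.5 * dt * k1)
    k3 = derivs(state + 0.5 * dt * k2)
    k4 = derivs(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

a = np.array([np.pi / 2, 0.0, np.pi / 2, 0.0])   # both arms horizontal, at rest
b = a + np.array([1e-9, 0.0, 0.0, 0.0])          # upper angle nudged by a billionth of a radian

dt = 1e-3
for step in range(1, 20_001):                    # twenty seconds of motion
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 4000 == 0:
        print(f"t = {step * dt:4.1f} s   angle gap = {abs(a[0] - b[0]):.2e} rad")
```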

This is chaos—sensitivity to initial conditions. But computational irreducibility goes further. It says: even if you knew the initial conditions perfectly, you still couldn’t predict the outcome without computing every intermediate step.

Consider cellular automata. You have a row of cells, each either on or off. Each cell updates based on simple local rules—look at your neighbors, decide your next state. The rules fit on a single line of text. And yet: some rule sets produce elaborate, evolving patterns with no apparent connection to the rules themselves. Triangles emerge. Fractals appear. Structures move across the grid like living things.

Can you predict what the pattern will look like after a million steps? Not in general. There’s no shortcut. The system is its own fastest simulator. To know what happens at step one million, you have to compute steps one through 999,999. The only way to find out is to run it.
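
Seeing this for yourself takes about a dozen lines. The sketch below uses Rule 30, one of the classic elementary rules whose output bears no visible relation to its one-line definition; the grid width and step count are arbitrary.

```python
RULE = 30    # an 8-bit rule number in Wolfram's numbering scheme
WIDTH = 79
STEPS = 40

# Decode the rule number into a lookup table: the neighborhood (left, center,
# right), read as a 3-bit number, selects one bit of RULE as the next state.
rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else " " for cell in row))
    row = [rule_table[(row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH])]
           for i in range(WIDTH)]
```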

This isn’t about chaos or sensitivity. It’s about compressibility. Some computations cannot be compressed. The map can’t be simpler than the territory, because the only faithful map of the behavior is the behavior itself.

Deterministic but Unpredictable

So here’s the tension we have to sit with:

The system is completely deterministic. Given the initial state and the rules, the outcome is fixed. Nothing random happens. Nothing uncertain lurks. The future is, in some abstract sense, “already there” in the present.

And yet we cannot predict that future without actually running the system forward.

This breaks something in our intuition about what “knowing the rules” means. We assume that if we understand the mechanism, we can predict the behavior. We assume that knowing the parts tells us about the whole. We assume that equations yield answers.

But emergence doesn’t work that way. Simple things combine to form complex things with properties the parts don’t have. Water molecules don’t contain wetness. Ants don’t contain colony behavior. Neurons don’t contain consciousness. The collective is genuinely greater than the sum of its parts.

Reductionism—the idea that you understand a system by decomposing it—fails precisely here. Taking things apart doesn’t show you how they work together. The equations of motion for individual particles don’t tell you what patterns will emerge when trillions interact.

And even when we know the rules perfectly, even when we can write them down, even when determinism guarantees a unique outcome—we still can’t always tell what that outcome is. Not without running the experiment.

Science describes. Science doesn’t always predict.

The System Is Its Own Fastest Simulator

What does this mean for physics? For our understanding of the universe?

First: the limits are fundamental, not technological. It’s not that we need better computers. For a computationally irreducible system, no shortcut beats stepping through its own evolution. Adding more processing power lets you grind through the steps faster; it doesn’t let you skip any of them.

Second: this explains why certain phenomena resist our predictions. Weather forecasts break down after a few days not just because we can’t measure the atmosphere perfectly, but because the atmosphere is computationally irreducible. Even a simulation that’s 99.99% accurate goes wrong: the tiny errors compound through chaos until the simulated future bears no resemblance to the real one.
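
You can watch that compounding in a toy system. The sketch below uses the logistic map at r = 4, a standard stand-in for deterministic chaos, and gives the "forecast" a starting value that is wrong by one part in ten thousand; the particular numbers don't matter, only the exponential growth of the gap.

```python
def step(x):
    """One tick of the logistic map at r = 4: fully deterministic, no noise anywhere."""
    return 4.0 * x * (1.0 - x)

truth, forecast = 0.2, 0.2 * 1.0001     # true state vs. a 0.01%-off "measurement"
for n in range(51):
    if n % 10 == 0:
        print(f"step {n:2d}: truth = {truth:.6f}  forecast = {forecast:.6f}  "
              f"gap = {abs(truth - forecast):.1e}")
    truth, forecast = step(truth), step(forecast)
```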

Third: this is a feature, not a bug. The universe doesn’t owe us shortcuts. Nature has no obligation to organize itself into solvable problems. Most systems are computationally irreducible; the “pockets of reducibility” like planetary motion or projectile trajectories are the exceptions, not the rule.

And maybe that’s beautiful. It means systems can surprise us even when we know all their rules. It means complexity emerges genuinely—not as illusion or limitation but as real structure in the world. Strange attractors in chaotic systems reveal order within the chaos, patterns that don’t repeat but maintain coherent structure. The universe contains more than its parts.

When you watch a double pendulum swing wildly, or a cellular automaton generate intricate structures, or weather patterns swirl across the globe—you’re seeing computational irreducibility in action. The only way to know what happens is to let it happen. The only way to find out is to run the experiment.

Sometimes that’s the answer physics gives us: go and see.
