Critical Signals: Brain Phase Transitions and Bioluminescent Information


Phase Transitions: Criticality as Information Maximum

The brain operates at a phase transition—poised between ordered synchrony and disordered randomness. This isn’t merely metaphor. At criticality, neural networks exhibit power-law distributions in avalanche sizes, maximal sensitivity to perturbations, and optimal information transmission. The critical point represents maximum channel capacity.

Consider the branching ratio problem. The branching ratio σ is the average number of downstream units each active unit recruits. In subcritical regimes (σ < 1), neural activity vanishes before reaching output layers: the signal dies in transit, leaving the observer ignorant about the inputs. No information survives the journey. In supercritical regimes (σ > 1), activity amplifies until saturation: every input produces the same fully active output, so discrimination is impossible. Full activation conveys zero bits, because uncertainty cannot be resolved when all states collapse to one.

But at criticality, with branching ratio σ = 1, each neuron activates exactly one descendant on average. The Goldilocks zone. Output activity patterns faithfully mirror input configurations without vanishing or saturating. Information transmission peaks sharply at this critical value: the system maximizes the information its outputs carry about its inputs while still propagating the signal across every layer.
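A minimal simulation sketch makes this concrete. The layer width, depth, input levels, trial count, and Poisson recruitment rule below are illustrative assumptions rather than a model of any particular circuit; the point is simply that the estimated mutual information between input and output peaks near σ = 1.

```python
# Sketch: a layered branching process in which each active unit recruits
# on average sigma units in the next layer. Mutual information between the
# input activity level and the output count is estimated from empirical counts.
import numpy as np

rng = np.random.default_rng(0)
N, LAYERS, TRIALS = 100, 10, 10000
INPUT_LEVELS = [5, 10, 20, 40, 80]   # distinct input activity levels to discriminate

def propagate(n_active, sigma):
    """Propagate activity through LAYERS layers of N units each."""
    for _ in range(LAYERS):
        # each active unit independently recruits ~Poisson(sigma) descendants
        n_active = min(N, rng.poisson(sigma * n_active))
        if n_active == 0:
            break
    return n_active

def mutual_information(sigma):
    """Estimate I(input level; output count) in bits from empirical counts."""
    joint = {}
    for _ in range(TRIALS):
        x = INPUT_LEVELS[rng.integers(len(INPUT_LEVELS))]
        y = propagate(x, sigma)
        joint[(x, y)] = joint.get((x, y), 0) + 1
    px, py = {}, {}
    for (x, y), c in joint.items():
        px[x] = px.get(x, 0) + c
        py[y] = py.get(y, 0) + c
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / TRIALS
        mi += p_xy * np.log2(p_xy / ((px[x] / TRIALS) * (py[y] / TRIALS)))
    return mi

for sigma in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"sigma={sigma:>4}: I(X;Y) ~ {mutual_information(sigma):.2f} bits")
```

Subcritical runs collapse to zero activity regardless of input, supercritical runs pin at the ceiling regardless of input, and only the run near σ = 1 preserves a usable mapping from input to output.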

What feedback mechanisms tune biological networks to this phase transition? Self-organization toward criticality suggests that evolution discovered information-theoretic optimization principles. By operating at the order-disorder boundary, the system gains maximum dynamic range: small perturbations can trigger avalanches at every scale, and this scale-free dynamics supports simultaneous processing across hierarchical levels with no single characteristic timescale constraining the response.
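As a hedged toy illustration of such a feedback loop (not the documented biological mechanism; proposals in the literature involve local rules such as short-term synaptic depression or homeostatic plasticity), a single global gain can be nudged whenever the measured branching ratio departs from 1. The probe size, learning rate, and Poisson recruitment here are assumptions for the sketch.

```python
# Sketch: negative feedback drives a global excitability parameter g toward
# the critical branching ratio of 1 offspring per active neuron.
import numpy as np

rng = np.random.default_rng(1)
ETA, TARGET, STEPS, PROBE = 0.01, 1.0, 3000, 10

g = 0.3                                   # start well below criticality
history = []
for _ in range(STEPS):
    parents = PROBE                       # a small packet of active neurons
    children = rng.poisson(g * parents)   # each parent recruits ~Poisson(g) followers
    ratio = children / parents            # empirical branching ratio this step
    g += ETA * (TARGET - ratio)           # push the ratio toward sigma = 1
    history.append(g)

print(f"gain after self-organization: g ~ {np.mean(history[-500:]):.3f} (critical value: 1.0)")
```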

Convergent Channels: Why 94 Lineages Chose Photons

Bioluminescence evolved independently at least 94 times. Fireflies, dinoflagellates, fungi, deep-sea fish—unrelated lineages converging on photon-based signaling. This is one of biology’s most repeated innovations, comparable in kind to flight’s four independent origins but far more frequent.

Why photons? Information theory provides the answer: choose the signal type matching environmental constraints. Photons offer a low-noise discrete channel—quantum events are difficult to fake, propagate effectively through darkness, and enable species-specific encoding through flash duration and intensity patterns. In oceanic and nocturnal environments, photon signaling maximizes detectability given the niche.
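A back-of-the-envelope sketch puts the environmental-constraint argument in channel terms. Treat one flash as one use of a binary channel: the receiver reports "flash" with some detection probability when a flash is sent, and with some false-alarm probability otherwise (background light). The detection and false-alarm numbers below are illustrative assumptions, not measurements; the contrast between a dark ocean and a bright background is the point.

```python
# Sketch: capacity of a binary asymmetric "flash" channel, found by searching
# over the sender's flash probability.
import numpy as np

def h2(p):
    """Binary entropy in bits, with the 0*log(0) = 0 convention."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def capacity(p_detect, p_false):
    """Maximize I(X;Y) over the input distribution by grid search."""
    best = 0.0
    for q in np.linspace(0.001, 0.999, 999):       # probability of sending a flash
        p_y1 = q * p_detect + (1 - q) * p_false    # overall probability of "flash seen"
        mi = h2(p_y1) - q * h2(p_detect) - (1 - q) * h2(p_false)
        best = max(best, mi)
    return best

print(f"dark ocean  (p_detect=0.9, p_false=0.001): {capacity(0.9, 0.001):.2f} bits/flash")
print(f"bright noon (p_detect=0.9, p_false=0.40 ): {capacity(0.9, 0.40):.2f} bits/flash")
```

Against a dark background the false-alarm rate is tiny and each flash carries most of a bit; against a bright background the same flash is nearly worthless, which is one way to read why photon signaling clusters in oceanic and nocturnal niches.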

The dinoflagellate burglar alarm demonstrates sophisticated channel usage: mechanical stimulation by predatory copepods triggers bioluminescence, revealing those copepods to their own predators. The signal propagates up the trophic chain, delivering information to the predator’s predator. Individual dinoflagellates accept the cost of conspicuousness to protect the population, making predator swarms visible to fish. Fireflies repurposed the same channel: around 100 million years ago their ancestors used light as an aposematic warning advertising toxicity, and the defensive display was later co-opted for mate recognition.

Convergent evolution toward identical signaling solutions suggests channel optimization under universal constraints. Photons aren’t chosen randomly across 94 lineages—they represent the optimal transmission medium for specific ecological information problems.

Universal Constraints: Information Theory in Biology

Brain criticality and bioluminescent convergence both point toward information-theoretic optimization as a fundamental biological principle. Neural networks self-organize to phase transitions maximizing transmission capacity given metabolic costs and wiring limits. Photon signaling converges across lineages maximizing signal detectability given environmental darkness.

My channel capacity theorem establishes fundamental limits on reliable communication: capacity is set by how far the signal rises above the noise, and no coding scheme can reliably exceed it. Biological systems appear to discover these mathematical boundaries through evolution and self-organization. The brain operates at the threshold between noise and saturation. Bioluminescence converges on photons as a low-noise channel encoding information in discrete quanta.
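For the band-limited Gaussian channel the limit takes the familiar Shannon–Hartley form, C = B log2(1 + S/N). The short sketch below simply evaluates it at a few illustrative signal-to-noise ratios; the bandwidth and SNR values are arbitrary.

```python
# Sketch: capacity of a band-limited Gaussian channel, C = B * log2(1 + S/N).
import math

def shannon_hartley(bandwidth_hz, snr_linear):
    """Capacity in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

for snr_db in (0, 10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db:>2} dB -> capacity {shannon_hartley(1000, snr):8.1f} bits/s over 1 kHz")
```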

Do universal information principles constrain biological optimization across scales—from neural avalanches to ecological signals? The repeated discovery of critical points and optimal channels suggests that living systems are, fundamentally, solving communication problems within theoretical limits that transcend specific implementations.
