The Compression Principle: Logarithms and Neural Encoding
The Tool and The Brain
In 1614, John Napier published his logarithm tables to solve a computational bottleneck: multiplication was slow and error-prone for astronomers and navigators working with large numbers. His solution was elegant—convert multiplication into addition by mapping numbers to their logarithms. If you needed to multiply 5.61 by 46.6, you looked up their logarithms, added them, then converted back. The brain, it turns out, discovered this same optimization millions of years earlier.
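A minimal sketch of the same trick in Python, with floating-point logarithms standing in for Napier's printed tables:

```python
import math

# Napier's trick: multiply two numbers by looking up their logarithms,
# adding them, and converting the sum back out of log space.
a, b = 5.61, 46.6

log_sum = math.log10(a) + math.log10(b)  # "look up" both logarithms and add
product = 10 ** log_sum                  # convert back from log space

print(f"log10({a}) + log10({b}) = {log_sum:.4f}")
print(f"10**{log_sum:.4f} = {product:.2f}")    # ~261.43
print(f"direct multiplication:   {a * b:.2f}") # matches
```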
Sensory neurons encode stimulus intensity logarithmically. A whisper registers roughly 20 decibels, normal conversation 60, a jet engine 120. From the threshold of hearing to that jet engine, sound pressure spans six orders of magnitude, yet our perception scales linearly with the logarithm of intensity. This is the Weber-Fechner law: perceived magnitude grows in proportion to the logarithm of the stimulus relative to a reference level, S = k·log(I/I₀). The nervous system compresses exponentially wide input ranges into linear neural response rates using the same mathematical principle that Napier invented for calculation.
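A small sketch of that compression in Python; the pressure values are approximate, and the 20 µPa reference is the standard threshold of hearing:

```python
import math

# Decibels are a Weber-Fechner-style code for sound: level grows with the
# log of pressure relative to the threshold of hearing (20 micropascals).
P_REF = 20e-6  # Pa, reference pressure

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level: 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

# Pressures spanning six orders of magnitude collapse onto a linear dB scale.
for label, p in [("threshold", 20e-6), ("whisper", 2e-4),
                 ("conversation", 2e-2), ("jet engine", 20.0)]:
    print(f"{label:>12}: {p:10.6f} Pa -> {spl_db(p):6.1f} dB")
```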
From Slide Rules to Neurons
The slide rule makes the compression mechanism physically concrete. Numbers are positioned at distances proportional to their logarithms. Sliding one scale against another adds these distances, which adds logarithms, which multiplies the original values. The spacing between 1 and 2 equals the spacing between 2 and 4, between 4 and 8. Powers of two are equally spaced because their logarithms increase by constant increments. Physical distance encodes information logarithmically, converting a complex operation into simple spatial addition.
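The same mechanism reduced to a few lines of Python; positions are measured in decades, an arbitrary but convenient unit:

```python
import math

# A slide rule in one dimension: each number sits at a distance proportional
# to its logarithm, so adding distances adds logarithms, which multiplies values.

def position(x: float) -> float:
    """Distance of x along the scale, measured in decades."""
    return math.log10(x)

def multiply_by_sliding(a: float, b: float) -> float:
    """Add the two distances, then read off the number sitting at the sum."""
    return 10 ** (position(a) + position(b))

# Powers of two are equally spaced: each doubling moves the cursor the same amount.
for x in (1, 2, 4, 8):
    print(f"{x}: position {position(x):.4f}")

print(f"2 x 8 read off the scale: {multiply_by_sliding(2, 8):.1f}")
```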
Neural encoding follows identical logic. Consider the dynamic range problem: a sensory system must represent stimuli spanning many orders of magnitude, but neurons have limited firing rates, perhaps 1 to 100 spikes per second. A linear code wastes this range: set the gain high and the neuron saturates on intense stimuli; set it low and weak signals vanish into the noise. Logarithmic compression solves this by allocating neural response proportionally to relative changes rather than absolute magnitudes. Doubling stimulus intensity produces the same increment in firing rate whether you’re going from 1 to 2 or from 1000 to 2000.
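A sketch of such an encoding; the firing-rate bounds and intensity range below are illustrative assumptions, not measured values:

```python
import math

# Hypothetical logarithmic tuning curve: map intensities spanning six orders
# of magnitude onto a bounded firing-rate range by responding to log(intensity).
R_MIN, R_MAX = 1.0, 100.0   # spikes per second (assumed bounds)
I_MIN, I_MAX = 1.0, 1e6     # representable intensities (arbitrary units)

def firing_rate(intensity: float) -> float:
    """Rate grows linearly with log(intensity), clamped to the neuron's range."""
    fraction = math.log10(intensity / I_MIN) / math.log10(I_MAX / I_MIN)
    return R_MIN + (R_MAX - R_MIN) * min(max(fraction, 0.0), 1.0)

# Each doubling of intensity adds the same increment to the firing rate,
# whether it happens near the bottom or near the top of the range.
print(f"{firing_rate(2) - firing_rate(1):.3f} spikes/s for 1 -> 2")
print(f"{firing_rate(2000) - firing_rate(1000):.3f} spikes/s for 1000 -> 2000")
```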
This explains why synaptic weights and firing rates follow lognormal distributions rather than Gaussian ones. When changes are multiplicative—proportional to current values—the logarithm of the variable becomes normally distributed. Dendritic spines grow or shrink by fractions of their current size, not by fixed amounts. This multiplicative plasticity naturally produces the heavy-tailed, skewed distributions observed across neural systems. The brain doesn’t add noise; it multiplies it, and logarithmic scaling is the optimal encoding for multiplicative processes.
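A small simulation makes the argument concrete; the step size and step count are chosen for illustration, not taken from biology:

```python
import math
import random

# Multiplicative plasticity, sketched: each step rescales a weight by a small
# random factor, i.e. a change proportional to its current size. The log of the
# weight then performs an additive random walk, so by the central limit theorem
# the weights end up lognormally distributed: right-skewed and heavy-tailed.
random.seed(0)

def final_weight(steps: int = 500) -> float:
    w = 1.0
    for _ in range(steps):
        w *= 1.0 + random.gauss(0.0, 0.05)  # grow or shrink by a few percent of current size
    return w

weights = sorted(final_weight() for _ in range(1000))
mean = sum(weights) / len(weights)
median = weights[len(weights) // 2]

print(f"mean {mean:.2f} > median {median:.2f}: right-skewed, heavy-tailed")
print(f"largest weight {weights[-1]:.2f} vs smallest {weights[0]:.4f}")
```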
Optimal Encoding
Logarithmic compression emerges whenever systems face the fundamental problem of representing multiplicative relationships efficiently. Whether it’s Briggs reforming Napier’s tables to base ten for easier decimal calculation, slide rules converting multiplication to spatial addition, or neurons encoding sensory magnitudes across vast dynamic ranges—the solution is the same. Convert exponential growth into linear space by taking logarithms.
This isn’t coincidence; it’s information theory. Multiplicative relationships pervade nature: compound growth, signal ratios, sensory transduction across scales. The logarithm transform converts these to additive form, which is easier to compute, transmit, and store. A 17th-century mathematician working with astronomical measurements and a sensory neuron responding to sound pressure are solving the same compression problem. Both discover that logarithmic encoding is the optimal strategy for handling multiplicative dynamics within linear processing constraints.
The tool that transformed human calculation reveals how biological systems efficiently encode information. Whether mechanical calculators or neural circuits, logarithmic compression is a fundamental principle—not a clever trick but the mathematically determined solution to representing exponential variation in bounded channels.