The Neuron as Circuit: Hodgkin-Huxley Dynamics and Biological Computation
Look, here’s what I love about the Hodgkin-Huxley equations: they take something as gloriously messy as a neuron—this squishy, wet, biological thing floating in saltwater—and show you it’s just a circuit problem. Not metaphorically. Actually. The same physics that Volta stumbled onto when he stacked copper and zinc discs soaked in brine, the same differential equations that describe a mass bouncing on a spring. It’s all there.
When people talk about the brain, they wave their hands and say “complexity” like it’s magic. But Hodgkin and Huxley did something beautiful: they stripped away the mysticism and wrote down four coupled differential equations that reproduce the full time course of an action potential. Four equations! One for membrane voltage, three for gating variables. That’s it. That’s the “core equation of neuroscience.” And once you see it, you realize neurons aren’t doing anything fundamentally different from what an RLC circuit does. They’re just solving differential equations in real time, every millisecond, with molecular machinery.
The Membrane Is Just a Capacitor
Start simple. The neuron’s membrane is a lipid bilayer—essentially an insulator separating two conducting fluids. Any thin insulator between conductors is a capacitor. That’s not biology, that’s just circuit theory. The membrane stores charge the way any capacitor does: $Q = CV$, where $C$ is capacitance and $V$ is voltage. The excess negative charge inside creates an electrical potential relative to outside.
Now take the time derivative. Current is just charge moving: $I = dQ/dt$. So $C\,\frac{dV}{dt} = I$, or $\frac{dV}{dt} = I/C$. That’s it. That’s the starting point. The rate at which membrane voltage changes equals the net current flowing through it, divided by capacitance. It’s Kirchhoff’s current law applied to a biological membrane.
But membranes aren’t perfect insulators. They’re leaky. Ion channels puncture through—protein pores that let specific ions cross. Some are always open, creating a leak conductance. Others are voltage-gated, opening and closing depending on membrane potential. Each type of channel acts like a variable resistor, and the concentration gradients across the membrane act like batteries.
Here’s where Volta’s pile comes in. When he stacked copper and zinc discs with electrolyte-soaked cloth between them, he was creating a voltage source from chemical potential. Neurons do the same thing. Potassium is more concentrated inside than outside—that’s a chemical gradient. The membrane separates charge—that’s an electrical gradient. At some specific voltage, these balance. That’s the equilibrium potential $E_K$, the voltage where potassium would stop flowing even if channels were open.
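For the record, that balance point is given by the Nernst equation, a standard thermodynamic result rather than anything Hodgkin and Huxley had to fit:

$$E_K = \frac{RT}{zF}\,\ln\frac{[\mathrm{K}^+]_{\text{out}}}{[\mathrm{K}^+]_{\text{in}}}$$

Plug in rough squid-axon concentrations (about 20 mM outside, 400 mM inside) and you land near $-75$ mV, which is why the potassium battery in the model sits where it does.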
For any ion, the current follows Ohm’s law: $I_{\text{ion}} = g_{\text{ion}}\,(V - E_{\text{ion}})$. The conductance $g_{\text{ion}}$ tells you how many channels are open. The driving force $(V - E_{\text{ion}})$ is the difference between actual membrane voltage and equilibrium—it’s literally “membrane voltage minus battery voltage.” If the membrane is more negative than $E_K$, potassium gets pulled in. If it’s more positive, potassium flows out.
Put it together: $C\,\frac{dV}{dt} = -g_{\mathrm{Na}}(V - E_{\mathrm{Na}}) - g_K(V - E_K) - g_L(V - E_L) + I_{\text{ext}}$. That’s charge conservation. The membrane voltage changes at a rate determined by capacitance and net current flow. It’s the same equation you’d write for an RC circuit, except the resistances aren’t fixed—they depend on voltage and time.
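Before the gates show up, it’s worth seeing how little code this equation needs. Here’s a minimal forward-Euler sketch of the passive, leak-only membrane; the parameter values are illustrative, not fit to any particular cell:

```python
import numpy as np

# Passive membrane: C dV/dt = -g_L (V - E_L) + I_ext
C = 1.0       # membrane capacitance, uF/cm^2
g_L = 0.1     # leak conductance, mS/cm^2
E_L = -65.0   # leak reversal potential, mV
dt = 0.01     # time step, ms

def step(V, I_ext):
    """One forward-Euler step of the RC membrane equation."""
    dVdt = (-g_L * (V - E_L) + I_ext) / C
    return V + dt * dVdt

# Inject a current pulse and watch V relax toward a new steady state.
V = E_L
trace = []
for t in np.arange(0.0, 60.0, dt):
    I_ext = 1.0 if 10.0 <= t < 40.0 else 0.0   # pulse, uA/cm^2
    V = step(V, I_ext)
    trace.append(V)
print(f"steady-state shift: {max(trace) - E_L:.1f} mV")  # approaches I/g_L = 10 mV
```

The voltage relaxes exponentially toward $E_L + I/g_L$ with time constant $\tau = C/g_L$, the classic RC charging curve. Everything Hodgkin and Huxley added rides on top of this.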
Coupled Equations, Nonlinear Gates
Now for the interesting part. The conductances themselves aren’t constant. Ion channels are proteins with gates—parts of the molecule that can swing open or closed. These gates respond to voltage. When the membrane depolarizes, sodium channels start opening. When it stays depolarized, they inactivate. Potassium channels open more slowly in response to depolarization and stay open during repolarization.
Hodgkin and Huxley modeled this probabilistically. Each gate has some probability $x$ of being open, and that probability follows a first-order differential equation: $\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x$. The functions $\alpha$ and $\beta$ are voltage-dependent rate constants—how fast gates open and close. They’re empirical, fit to squid axon data, but the form is motivated by basic statistical mechanics of molecular transitions.
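As a sketch of what those rate functions look like in practice, here’s the potassium gate in code. The $\alpha_n$, $\beta_n$ below are the standard squid-axon fits as usually quoted in the modern voltage convention (rest near $-65$ mV); different textbooks shift the voltage origin, so treat the constants as illustrative:

```python
import numpy as np

# Potassium activation gate: dn/dt = alpha_n(V) (1 - n) - beta_n(V) n
def alpha_n(V):
    return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))

def beta_n(V):
    return 0.125 * np.exp(-(V + 65.0) / 80.0)

def n_inf(V):
    """Steady-state open probability at a clamped voltage V (mV)."""
    return alpha_n(V) / (alpha_n(V) + beta_n(V))

def tau_n(V):
    """Relaxation time constant (ms) toward n_inf at voltage V."""
    return 1.0 / (alpha_n(V) + beta_n(V))

# Depolarization both opens the gate and speeds it up:
for V in (-65.0, -40.0, 0.0):
    print(f"V={V:6.1f} mV  n_inf={n_inf(V):.3f}  tau_n={tau_n(V):.2f} ms")
```

The useful reparametrization: $n_\infty = \alpha/(\alpha+\beta)$ is where the gate settles, and $\tau = 1/(\alpha+\beta)$ is how fast it gets there. Both depend on voltage, and that voltage dependence is the whole game.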
For potassium, you need four independent gates open for the channel to conduct, so the open probability is $n^4$. The conductance becomes $g_K = \bar{g}_K\,n^4$, where $\bar{g}_K$ is the maximum conductance when all channels are fully open. Sodium is trickier: three activation gates $m$ and one inactivation gate $h$, giving $g_{\mathrm{Na}} = \bar{g}_{\mathrm{Na}}\,m^3 h$.
So you have four coupled ODEs: one for $V$, one for $m$, one for $h$, one for $n$. The voltage equation depends on the gating variables because they set the conductances. The gating equations depend on voltage because the rate functions $\alpha$ and $\beta$ change with membrane potential. Everything feeds back on everything else.
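Here’s the whole four-dimensional system as a minimal simulation: forward Euler, the standard textbook squid-axon parameters in the modern voltage convention. A sketch for intuition, not a production integrator:

```python
import numpy as np

# Full Hodgkin-Huxley system: C dV/dt = -I_Na - I_K - I_L + I_ext,
# with gates m, h, n each following first-order kinetics.
C = 1.0                                    # membrane capacitance, uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3             # maximal conductances, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4           # reversal potentials, mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

def hh_step(state, I_ext, dt=0.01):
    """One forward-Euler step of the four coupled ODEs (dt in ms)."""
    V, m, h, n = state
    I_Na = gNa * m**3 * h * (V - ENa)      # sodium: three m gates, one h gate
    I_K  = gK * n**4 * (V - EK)            # potassium: four n gates
    I_L  = gL * (V - EL)                   # ungated leak
    V += dt * (-I_Na - I_K - I_L + I_ext) / C
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    return V, m, h, n

# Inject a sustained current step and record the voltage trajectory.
state = (-65.0, 0.05, 0.6, 0.32)           # approximate resting values
Vs = []
for t in np.arange(0.0, 60.0, 0.01):
    state = hh_step(state, 10.0 if t >= 5.0 else 0.0)
    Vs.append(state[0])
print(f"peak voltage: {max(Vs):.1f} mV")   # spikes overshoot toward E_Na
```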
This is where it gets richer than a spring. The mass-spring system is linear. If $x_1(t)$ and $x_2(t)$ are solutions, so is $x_1(t) + x_2(t)$. You get superposition. The Hodgkin-Huxley equations don’t have that. The gating variables enter as powers ($n^4$, $m^3 h$), and the rate functions are nonlinear in voltage. Superposition fails.
But that nonlinearity is the whole point. It gives you thresholds, bistability, all-or-none spikes. Sodium gates open fast when you push past threshold—positive feedback. Inactivation and potassium activation follow behind—negative feedback. Together they produce the beautiful regenerative loop of an action potential: rapid sodium-driven depolarization, delayed potassium-mediated repolarization, undershoot as potassium keeps flowing, then gradual return to rest.
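You can watch that all-or-none character directly with the hh_step sketch above: sweep the amplitude of a brief current pulse and the peak response doesn’t scale linearly, it jumps. (The amplitudes here are arbitrary probes, not calibrated thresholds.)

```python
# All-or-none: below threshold the peak barely moves with pulse amplitude;
# past threshold it jumps to a full-height spike. Superposition fails.
for amp in (2.0, 4.0, 8.0, 16.0, 32.0):    # pulse amplitudes, uA/cm^2
    state = (-65.0, 0.05, 0.6, 0.32)       # reset to rest each time
    peak = state[0]
    for t in np.arange(0.0, 30.0, 0.01):
        state = hh_step(state, amp if 5.0 <= t < 6.0 else 0.0)
        peak = max(peak, state[0])
    print(f"pulse {amp:5.1f} -> peak {peak:6.1f} mV")
```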
The same mathematical structure appears everywhere: coupled first-order differential equations whose nonlinearities are polynomial in the state variables, kinetic schemes with rate constants governing transitions between states. It’s not confined to neurons. Chemical reaction kinetics, population dynamics, even circuits with nonlinear elements—they all share this language.
The Physics of Computation
Here’s what really strikes me: the neuron is computing. Not metaphorically—literally solving a four-dimensional system of differential equations in real time. Every millisecond, the voltage and gating variables evolve along trajectories determined by the coupled dynamics. The ion channels don’t “know” they’re doing calculus. They’re just proteins responding to local electric fields. But the collective behavior is a dynamical system that integrates inputs, generates spikes, filters signals, maintains memory through refractoriness.
When you extend this to multiple compartments—slicing the neuron into segments along dendrites and axons—you get cable equations coupling neighboring voltage values. Each compartment still has local Hodgkin-Huxley dynamics, but now there’s spatial structure. Axial currents flow between adjacent segments, and activity can propagate, reflect at boundaries, interact with local active conductances.
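A minimal sketch of that spatial coupling, with passive compartments so the axial term stands out (a full model would run something like hh_step in every compartment; the axial conductance here is a made-up illustrative number, since the real value depends on geometry and cytoplasmic resistivity):

```python
import numpy as np

# Chain of passive compartments: each obeys
#   C dV/dt = -g_L (V - E_L) + I_axial + I_ext,
# where I_axial is current exchanged with nearest neighbors.
C, g_L, E_L = 1.0, 0.1, -65.0
g_ax = 0.5    # axial coupling conductance (illustrative)
dt = 0.01     # ms

def cable_step(V, I_ext):
    """V: array of compartment voltages (mV); one forward-Euler step."""
    dV = np.diff(V)                  # voltage differences between neighbors
    I_axial = np.zeros_like(V)
    I_axial[:-1] += g_ax * dV        # current flowing in from the right
    I_axial[1:]  -= g_ax * dV        # equal and opposite current on the left
    return V + dt * (-g_L * (V - E_L) + I_axial + I_ext) / C

V = np.full(10, E_L)             # ten compartments, all at rest
I = np.zeros(10); I[0] = 2.0     # constant current into one end
for _ in range(5000):            # 50 ms, long enough to approach steady state
    V = cable_step(V, I)
print(np.round(V - E_L, 2))      # depolarization decays with distance
```

The steady-state profile falls off roughly exponentially with distance from the injection site, which is the discrete version of the cable equation’s length constant.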
This connects to dendritic computation. Synaptic inputs aren’t just passively summed. They interact with voltage-gated channels distributed along dendrites. Coincident inputs can trigger local dendritic spikes. The spatial arrangement of synapses matters. Temporal order matters. The neuron becomes a complex nonlinear integrator with sub-cellular computational zones.
And yet—despite all this richness—it’s still fundamentally circuit theory. Charge conservation. Ohm’s law for each ionic species. Capacitance slowing voltage changes. Conductances opening and closing according to kinetic rate equations. The complexity is in the coupling, the nonlinearity, the spatial extent. But the underlying principles are the same ones Volta used to make his pile, the same ones that govern any electrical network.
Beautiful Physics in Biology
What Hodgkin and Huxley did was build an effective theory. They didn’t start from quantum mechanics of protein conformational changes. They didn’t simulate every water molecule in the pore. They worked at the right level of abstraction: voltage-dependent conductances described by gating variables following first-order kinetics. The parameters are empirical, but the structure is motivated by physics and statistics.
This is how you make progress. Find the right variables, write down the simplest equations that capture the essential dynamics, fit the parameters to data, then see what the model predicts. Hodgkin-Huxley does this brilliantly. It reproduces spike shapes, refractory periods, repetitive firing, subthreshold oscillations—phenomena across timescales from microseconds to seconds.
And it scales. You can take the same framework, change the channel densities and kinetics, and model different neuron types. You can embed it in network models to study population dynamics. You can add more ion channel types, modulatory currents, calcium dynamics. The core machinery stays the same.
Every time a neuron fires, it’s solving this system. Every spike is a trajectory through a four-dimensional phase space, carved out by voltage-dependent conductances and driving forces. The brain isn’t doing something mystical. It’s doing physics—beautiful, nonlinear, coupled differential equations running on molecular hardware.
That’s what I mean when I say the neuron is a circuit. Not because it looks like resistors and capacitors on a breadboard, but because it obeys the same conservation laws, the same dynamical principles. The elegance is in seeing past the biological complexity to the underlying simplicity: charge flows, gates open, voltages change. It’s all just differential equations, and biology solved them before we even knew to ask the question.