When More Bits Don’t Help: Sampling and Nyquist
How Fast to Sample?
You need to convert an analog signal to digital. A microphone captures sound waves, varying continuously in time. An antenna receives radio signals, electromagnetic oscillations at millions of cycles per second. A sensor measures temperature, pressure, vibration—smooth curves tracing physical phenomena.
How many samples per second do you need? Sample too slowly and you miss important details. The signal changes between samples, information disappears into the gaps. Sample too quickly and you waste storage, processing power, transmission bandwidth. Every extra sample costs memory, computation, energy.
Is there an optimal rate? A precise threshold that captures everything important but nothing redundant? The Nyquist-Shannon sampling theorem provides the exact answer. It reveals a fundamental relationship between continuous and discrete representations of information. This threshold governs every digital recording system, every analog-to-digital converter, every sensor network.
The fundamental problem is this: continuous signals appear to contain infinite information. But under the right conditions, a finite number of samples can reconstruct them perfectly.
Why Bandwidth Sets the Limit
Real-world signals are not arbitrary. They cannot change at infinite speed. Physical constraints limit how rapidly signals can vary. This limitation is called bandwidth.
Bandwidth B measures the range of frequencies present in a signal. A pure tone at 440 Hz (the musical note A) has essentially zero bandwidth—it occupies a single frequency. Human voice spans roughly 300 Hz to 3400 Hz, giving it about 3 kHz bandwidth. Human hearing extends from 20 Hz to 20 kHz, a 20 kHz bandwidth. FM broadcasting limits each audio channel to 15 kHz bandwidth.
These bandwidth limits arise from physical reality. Telephone wires have capacitance and inductance that filter out high frequencies. Human vocal cords vibrate within mechanical constraints. Radio transmitters and receivers use filters that pass only specific frequency ranges. Every channel, every medium, every physical system imposes bandwidth limitations.
Bandwidth creates a crucial constraint: a bandlimited signal cannot change arbitrarily fast. If the highest frequency component is B Hz, the signal makes at most 2B zero-crossings per second. Between samples, the signal evolves smoothly and predictably. Given sufficient samples, you can perfectly interpolate between them using sinc functions, because a signal confined to a finite frequency band has only finitely many degrees of freedom per unit time.
This is the key insight. Bandwidth limits the degrees of freedom in a signal. A signal bandlimited to B Hz contains finite information per unit time, not infinite information. The continuous waveform only appears to require infinite precision. Actually, it is completely determined by a finite set of samples.
The Rate That Captures Everything
The Nyquist-Shannon sampling theorem states the precise requirement: to perfectly reconstruct a signal bandlimited to frequency B, you must sample at rate fs > 2B.
The factor of two is exact, not approximate. If the highest frequency in your signal is B Hz, you must sample at least twice that fast (strictly faster if the signal has energy at exactly B). The threshold 2B is called the Nyquist rate.
Why exactly twice? The answer lies in the frequency domain. When you sample a continuous signal at rate fs, you create copies of the signal’s frequency spectrum at intervals of fs. The original spectrum occupies frequencies from 0 to B. Sampling creates replicas centered at ±fs, ±2fs, ±3fs, and so on.
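The replication can be checked directly: two continuous tones separated by exactly the sampling rate land on each other's spectral copies and produce identical samples. A minimal NumPy sketch (the frequencies are illustrative):

```python
import numpy as np

fs = 8000.0                      # sampling rate (Hz)
n = np.arange(64)                # sample indices
t = n / fs                       # sample times

f0 = 1000.0                      # a tone inside the base band
tone = np.cos(2 * np.pi * f0 * t)
# A tone shifted up by exactly fs lands on a spectral replica:
replica = np.cos(2 * np.pi * (f0 + fs) * t)

# The two continuous signals yield identical samples.
print(np.allclose(tone, replica))  # True
```

Each sample of the shifted tone differs in phase by a full 2π per sample index, which is invisible after sampling.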
If fs ≥ 2B, these spectral copies do not overlap. You can recover the original signal perfectly by applying a low-pass filter that isolates the base spectrum from 0 to B. This is not approximation—it is exact reconstruction. The discrete samples contain all information present in the continuous signal.
If fs < 2B, the spectral copies overlap. Frequencies above fs/2 fold back into the base spectrum, creating aliasing. High frequencies masquerade as low frequencies: a 15 kHz tone sampled at 8 kHz appears as a 1 kHz tone. Once aliasing occurs, the information is corrupted irreversibly. You cannot undo aliasing after sampling. The high frequency component and its low frequency alias become indistinguishable.
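The folding arithmetic in that example can be verified numerically. A short NumPy sketch (cosines are used so the alias matches exactly; a sine would alias with its phase inverted):

```python
import numpy as np

fs = 8000.0                       # sampling rate (Hz); Nyquist frequency is 4 kHz
t = np.arange(32) / fs            # sample times

high = np.cos(2 * np.pi * 15000 * t)   # 15 kHz tone, far above fs/2
low = np.cos(2 * np.pi * 1000 * t)     # 1 kHz tone inside the base band

# 15 kHz folds down: |15000 - 2 * 8000| = 1000 Hz. The sampled values match
# exactly, so after sampling the two tones are indistinguishable.
print(np.allclose(high, low))  # True
```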
Consider CD audio. Human hearing extends to approximately 20 kHz. The Nyquist rate is 2 × 20 kHz = 40 kHz. CDs sample at 44.1 kHz, slightly above the theoretical minimum. The extra margin allows practical anti-aliasing filters to roll off smoothly rather than cutting off instantaneously at 20 kHz, which would require ideal filters that don’t exist physically.
Telephone systems use 8 kHz sampling for speech. The telephone channel limits voice to roughly 300 Hz to 3400 Hz, safely below the 4 kHz Nyquist frequency of an 8 kHz sampler. This is why telephone audio sounds limited compared to in-person conversation—frequencies above 3400 Hz are filtered out before sampling, removing the brightness and clarity of natural speech.
Professional audio systems often use 96 kHz or 192 kHz sampling rates. Human hearing stops at 20 kHz, so why sample at 96 kHz? The extra bandwidth provides margin for filter design, spreads quantization noise over a wider band so that less of it falls in the audible range, and allows signal-processing headroom. But fundamentally, these rates exceed the Nyquist requirement for human hearing. Sampling above 40 kHz captures no additional audible information.
When More Samples Add Nothing
At or above the Nyquist rate, sampling is lossless. Discrete samples perfectly represent the continuous signal. You can reconstruct the original waveform exactly using sinc interpolation—a weighted sum of shifted sinc functions, one per sample.
This reconstruction is mathematically perfect for bandlimited signals. It is not an approximation. The continuous signal and its sampled representation contain exactly the same information.
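A sketch of that interpolation, assuming NumPy and an illustrative two-tone test signal. A finite record truncates the sinc tails, so the reconstruction is evaluated away from the edges, where the truncation error is tiny:

```python
import numpy as np

fs = 1000.0                           # sampling rate (Hz)
N = 4000                              # number of samples (4 seconds)
n = np.arange(N)

def x(t):
    # Bandlimited test signal: two tones well below fs/2 = 500 Hz.
    return np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)

samples = x(n / fs)

def reconstruct(t):
    # Shannon reconstruction: weighted sum of shifted sinc kernels.
    # np.sinc(u) = sin(pi*u)/(pi*u), so np.sinc((t - n/fs) * fs) is the
    # ideal interpolation kernel centered on sample n.
    return np.sum(samples * np.sinc((t - n / fs) * fs))

# Evaluate between sample instants, in the middle of the record.
t_test = 2.0003
err = abs(reconstruct(t_test) - x(t_test))
# err is dominated by sinc-tail truncation and stays well below 1e-2.
```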
Sampling above the Nyquist rate does not add information. It creates redundancy. If you sample at 4B instead of 2B, you double the number of samples but capture no new frequency content. The extra samples interpolate between the minimum required samples. They confirm what could be computed from fewer measurements.
Redundant sampling wastes resources: storage space, transmission bandwidth, processing time. For a sensor network recording environmental data continuously, doubling the sample rate doubles power consumption, halves battery life, and fills storage twice as fast—without capturing additional information about the environment.
Before sampling, engineers apply anti-aliasing filters. These low-pass filters remove frequency components above B, ensuring the signal truly is bandlimited before sampling begins. Without anti-aliasing, high-frequency noise aliases into the base spectrum, appearing as false low-frequency signals. Anti-aliasing is not optional—it is required for the Nyquist theorem to apply.
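The same idea appears in the digital domain before decimation: a low-pass filter removes content above the new Nyquist frequency before the sample rate is reduced. A minimal windowed-sinc sketch in NumPy (the tap count, cutoff, and test tones are illustrative; converters apply analog filters ahead of the sampler, which this only mimics):

```python
import numpy as np

def lowpass_fir(cutoff, fs, ntaps=101):
    """Windowed-sinc low-pass FIR: passes f < cutoff, attenuates above."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n)        # ideal low-pass impulse response
    h *= np.hamming(ntaps)                  # window to tame truncation ripple
    return h / h.sum()                      # unity gain at DC

fs = 48000.0
t = np.arange(2048) / fs
# A 1 kHz tone we want to keep, plus 20 kHz content that would alias
# if we later decimated to, say, 8 kHz.
sig = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 20000 * t)

h = lowpass_fir(cutoff=3400.0, fs=fs)
filtered = np.convolve(sig, h, mode='same')

# Estimate each tone's surviving amplitude by correlation (away from edges).
mid, tm = filtered[200:-200], t[200:-200]
amp_1k = 2 * np.mean(mid * np.sin(2 * np.pi * 1000 * tm))
amp_20k = 2 * np.mean(mid * np.sin(2 * np.pi * 20000 * tm))
# amp_1k stays near 1.0; amp_20k drops to nearly zero.
```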
This explains why sample rates are chosen as they are. The sampling rate is always approximately twice the maximum frequency of interest. Telephone: 8 kHz for 4 kHz bandwidth. CD audio: 44.1 kHz for 20 kHz bandwidth. Professional audio: 96 kHz provides margin, with 48 kHz of usable bandwidth, though humans cannot hear above 20 kHz.
Modern systems operate near theoretical limits. Early telegraph systems achieved perhaps 10% of the Nyquist limit due to crude timing and filters. Telephone modems progressed from 300 baud in the 1960s to nearly 3500 baud in the 1990s on 3 kHz telephone lines, approaching the 6000 symbol/second Nyquist limit. Engineers pushed physical-layer performance to 60-80% of the theoretical maximum, then shifted to increasing bits per symbol through sophisticated modulation.
From Continuous to Discrete Without Loss
The Nyquist-Shannon sampling theorem bridges analog and digital domains. It provides a precise requirement: fs ≥ 2B. This is not a heuristic or approximation. It is an exact threshold for perfect reconstruction.
Sampling converts time-domain constraints into frequency-domain constraints. Continuous time becomes discrete samples. Infinite precision becomes finite measurements. Yet no information is lost if bandwidth is limited and sampling is adequate.
Every analog-to-digital converter relies on this principle. Digital audio, digital video, sensor networks, software-defined radios, medical imaging—all depend on the Nyquist criterion. It determines how fast sensors must read, how much storage recordings require, how much bandwidth transmission consumes.
The theorem reveals a fundamental limit in signal processing. You cannot recover frequency content above half your sampling rate. You cannot distinguish aliased frequencies from true base-band signals. The sampling rate determines what information you can capture.
Information-theoretic bounds are real and inescapable. Continuous and discrete representations are equivalent when signals are bandlimited. The Nyquist rate is the exact boundary between sufficient sampling and information loss.
Source Notes
6 notes from 1 channel