How Do We Measure? Metric Tensors and Observable Quantities
Measure what is measurable, and make measurable what is not so. This principle guided my work with pendulums and inclined planes, but the modern philosopher confronts a deeper question: what does it mean to measure at all? Consider two satellites orbiting Earth at different radii. Their angular coordinates increase at identical rates—say, one degree per minute. Are they moving at the same speed? No! The farther satellite traverses the greater distance through actual space despite the identical coordinate rate. Here lies a profound truth: coordinates alone are physically meaningless numbers.
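The arithmetic behind the satellite example fits in a few lines. The radii below are assumed round numbers, and the shared angular rate is the thought experiment's premise, not orbital mechanics (real orbits at different radii have different angular rates):

```python
import math

# Shared coordinate rate: one degree per minute, in radians per second.
omega = math.radians(1.0) / 60.0

r_low = 7.0e6    # orbital radius in metres (assumed, roughly low Earth orbit)
r_high = 4.2e7   # orbital radius in metres (assumed, roughly geostationary)

# Physical (tangential) speed is the coordinate rate scaled by the
# metric factor r: v = omega * r.
v_low = omega * r_low
v_high = omega * r_high

print(f"coordinate rate (both): {omega:.4e} rad/s")
print(f"speed at r_low : {v_low:.1f} m/s")    # ~2036.2 m/s
print(f"speed at r_high: {v_high:.1f} m/s")   # ~12217.3 m/s
```

Identical coordinate rates, sixfold difference in physical speed: the factor converting one into the other is exactly the metric's doing.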
The Metric Makes Measurement Possible
Without additional structure, coordinate systems provide only labels, not measurements. The satellites demonstrate that coordinate arbitrariness prevents interpreting components as physical quantities. We require something more fundamental—a metric tensor that converts coordinate differences into measurable spacetime intervals. This is not mere mathematical convenience but necessity: applied naively to coordinate differences, the Pythagorean theorem fails in curved spacetime, and even in flat space with non-Cartesian coordinates. The metric generalizes distance measurement, transforming arbitrary coordinate values into proper time, invariant mass, and physical velocity.
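A minimal illustration of why raw coordinate differences mislead: on a flat plane in polar coordinates, the line element is ds² = dr² + r² dθ², so plugging (Δr, Δθ) directly into the Pythagorean formula gives the wrong answer. The two points below are assumed values chosen for clarity:

```python
import math

# Two points on a flat plane, given in polar coordinates (r, theta).
p1 = (1.0, 0.0)
p2 = (1.0, math.pi / 2)

# Naive "Pythagorean" distance on raw coordinate differences -- wrong,
# because the metric in polar coordinates is not the identity.
naive = math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# Correct distance: route the computation through the geometry
# (here, by converting to Cartesian coordinates first).
def to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

x1, y1 = to_cartesian(*p1)
x2, y2 = to_cartesian(*p2)
true = math.hypot(x2 - x1, y2 - y1)

print(naive)  # pi/2  ~ 1.5708, a coordinate artifact
print(true)   # sqrt(2) ~ 1.4142, the physical distance
```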
Consider proper time along worldlines. When using coherent units where light’s speed equals unity, one second of proper time measures precisely one light-second of spacetime distance. Here is an observer-independent quantity, immune to coordinate transformations. Different observers may assign different coordinate times and positions to events, but proper time between those events remains invariant—as measurable as the period of my pendulum regardless of how one labels the swing’s extent.
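The invariance claimed here can be checked numerically. In units where c = 1, a timelike separation (Δt, Δx) has proper time τ = √(Δt² − Δx²); a Lorentz boost changes both coordinates but leaves τ fixed. The separation and boost speed below are illustrative values:

```python
import math

def proper_time(dt, dx):
    # Timelike interval in units where c = 1: tau^2 = dt^2 - dx^2.
    return math.sqrt(dt * dt - dx * dx)

def boost(dt, dx, v):
    # Lorentz boost to a frame moving at speed v (|v| < 1).
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

dt, dx = 5.0, 3.0              # coordinate separation in one frame (assumed)
dt2, dx2 = boost(dt, dx, 0.6)  # the same two events for another observer

print(dt2, dx2)               # 4.0 0.0 -- coordinates change...
print(proper_time(dt, dx))    # 4.0
print(proper_time(dt2, dx2))  # 4.0 -- ...but the interval does not
```

Both observers disagree about the coordinate times and positions, yet agree on τ: the pendulum's period survives any relabelling of the swing.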
What Are We Measuring in Neural Spaces?
Now transpose this measurement problem to artificial intelligences. Neural networks produce activation maps—matrices showing which image regions match learned patterns. Researchers generate synthetic images maximizing specific neural responses, revealing what neurons “detect.” But consider: activation space possesses arbitrary coordinates determined by random weight initialization and training procedures. Two networks trained identically save for their random seeds produce different coordinate representations of the same concepts.
What is the metric in activation space? When we measure Euclidean distance between activation vectors, cosine similarity between representations, or gradient magnitudes during optimization—are these meaningful geometric quantities or coordinate artifacts? Gradient descent requires differentiable loss landscapes, but different parameterizations of the same network induce different geometries on weight space. The method cannot directly optimize through discontinuous functions, revealing that its measurements depend on smooth structure we impose, not on intrinsic properties of the computational problem.
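One way to see the worry concretely: an orthogonal change of basis in activation space leaves Euclidean distance and cosine similarity untouched, while an equally valid invertible reparameterization (a shear whose inverse the next linear layer could absorb) alters both. The two-dimensional vectors below are toy values, not real activations:

```python
import numpy as np

# Hypothetical activation vectors for the same pair of inputs.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A rotation (orthogonal change of basis) preserves Euclidean distance
# and cosine similarity: these survive such relabelling.
rot = np.array([[0.0, -1.0],
                [1.0,  0.0]])
print(cosine(a, b), cosine(rot @ a, rot @ b))      # 0.0 0.0

# A shear also yields a valid description of the same network function,
# yet it changes the "similarity" we would report.
shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])
print(cosine(shear @ a, shear @ b))                # ~0.447, no longer 0.0
```

Without a principled metric singling out which reparameterizations are permissible, such similarity scores are statements about our coordinates as much as about the network.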
Instruments Reveal What Theory Obscures
My telescope made Jupiter’s moons visible; my inclined plane made acceleration measurable by slowing time’s passage. Both instruments transformed invisible phenomena into observable quantities. The metric tensor serves a similar function in relativity—it is the instrument making spacetime measurable. Without it, coordinates remain what latitude and longitude would be without knowing Earth’s radius: arbitrary labels disconnected from physical distance.
Do neural networks require analogous instruments? Perhaps we need metrics derived from training dynamics: how representations change under data transformations, and which directions preserve semantic similarity. Until we establish principled measurement in representation space, we risk mistaking coordinate-dependent descriptions for understanding what these systems truly compute. The lesson from spacetime persists: only quantities defined by proper metrics possess physical meaning. Everything else is merely convenient notation awaiting the instruments that make it measurable.
Source Notes
6 notes from 3 channels