Instruments of Seeing: Telescopes, Neural Networks, and Extended Perception

Instruments That See: Telescopes and Neural Networks

When I turned my spyglass toward Jupiter in 1610, grinding and positioning those glass lenses with mathematical precision, I did not merely magnify—I transformed what counts as seeing. Four celestial bodies orbiting that planet became visible, bodies hidden from every philosopher before me not because their eyes were inferior but because the unaided eye cannot reach across such distances. The telescope extended perception beyond biological limits, revealing moons that exist whether or not we see them.

Neural networks function similarly as instruments that see on our behalf. AlexNet’s first layer learns edge detectors—oriented patterns like Gabor filters—through mathematical operations rather than optical glass. Activation maps show what these networks “see” at each layer: early layers responding to edges, deeper layers to textures, final layers to faces and objects. Feature visualization reveals these learned representations by generating synthetic images that maximally activate specific neurons, much as my lunar drawings revealed mountain structures through careful observation of shadow gradients.
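A minimal sketch of that feature-visualization procedure, assuming a pretrained AlexNet from torchvision; the layer index, channel, and optimization settings below are illustrative choices rather than details drawn from any particular study:

```python
import torch
from torchvision import models

# Load a pretrained AlexNet and freeze it; we optimize the input, not the weights.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

def visualize_feature(layer_idx=3, channel=12, steps=200, lr=0.1):
    """Gradient-ascend a random image until one channel of one conv layer
    responds as strongly as possible (layer/channel chosen arbitrarily)."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["act"] = output

    handle = model.features[layer_idx].register_forward_hook(hook)
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        model(img)
        # Minimizing the negative mean activation maximizes the response.
        loss = -captured["act"][0, channel].mean()
        loss.backward()
        optimizer.step()

    handle.remove()
    return img.detach()

synthetic = visualize_feature()  # a synthetic image that excites one learned feature
```

Gradient ascent here acts on the input rather than the weights, shaping a stimulus until one learned detector responds as strongly as possible.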

Yet here emerges the ancient controversy in modern form. Church authorities insisted that if God intended us to see Jupiter’s moons, He would have made them visible to the naked eye. Instrumentally mediated perception, they argued, cannot constitute genuine knowledge. I hear echoes of this skepticism when researchers question whether AI-detected patterns are “real”—if humans cannot perceive adversarial perturbations or understand why a network classifies correctly, does the network truly see? The octopus offers a biological parallel to this epistemological puzzle: colorblind, lacking the photoreceptor types for color discrimination, yet matching environmental colors perfectly through chromatophore pigment sacs. The creature achieves color camouflage—177 changes per hour, 200-millisecond reactions—while perceiving the world monochromatically. Perception and performance diverge.
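Such perturbations are easy to manufacture. Below is a minimal fast gradient sign method (FGSM) sketch, assuming PyTorch and torchvision's pretrained AlexNet; the epsilon value, the random stand-in image, and the class index are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=0.003):
    """Nudge each pixel by at most epsilon in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A per-pixel change this small is typically imperceptible to a human
    # observer, yet it can change the network's predicted class.
    return (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

x = torch.rand(1, 3, 224, 224)   # stand-in for a real photograph
y = torch.tensor([207])          # an arbitrary ImageNet class index
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())   # perturbation bounded by epsilon
```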

Hierarchies Revealed: From Celestial to Compositional

My telescope revealed hierarchical structure: moons orbit Jupiter, Jupiter orbits the Sun—nested systems invisible from Earth’s surface without magnification. This organization was always present, waiting for instruments capable of revealing it. Neural networks similarly reveal hierarchical feature learning: layer one detects edges from pixels, layer two combines edges into corners and textures, layer five responds to faces despite ImageNet containing no “face” labels. The hierarchy is implicit in the visual data, invisible to unaided human cognition but discoverable through learned mathematical transformations.
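The first rung of that hierarchy can be inspected directly by reading out the learned weights. A minimal sketch, assuming torchvision's pretrained AlexNet; the rescaling is only for display:

```python
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)

conv1 = model.features[0]          # Conv2d(3, 64, kernel_size=11, stride=4)
filters = conv1.weight.detach()    # shape (64, 3, 11, 11)

# Rescale each filter to [0, 1] so it can be viewed as a small RGB patch;
# many patches show oriented light/dark stripes, i.e. edge detectors.
mins = filters.amin(dim=(1, 2, 3), keepdim=True)
maxs = filters.amax(dim=(1, 2, 3), keepdim=True)
patches = (filters - mins) / (maxs - mins + 1e-8)
print(patches.shape)               # torch.Size([64, 3, 11, 11])
```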

Neither instrument merely magnifies; both reveal organization. Activation maps trace this progression explicitly—we can visualize what each layer detects, watching simple features compose into complex representations. My observations of lunar mountains via shadow patterns employed similar hierarchical analysis: raw light and dark regions combined to reveal three-dimensional structure.
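Tracing that progression is mechanical once a trained model is in hand. A minimal sketch, again assuming torchvision's pretrained AlexNet and a random stand-in image:

```python
import torch
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a real input image

# Push the image through the convolutional stack one module at a time,
# keeping each intermediate activation map.
activation_maps = []
with torch.no_grad():
    out = x
    for layer in model.features:
        out = layer(out)
        activation_maps.append(out)

for i, fmap in enumerate(activation_maps):
    name = type(model.features[i]).__name__
    print(f"features[{i}] {name}: {tuple(fmap.shape)}")
```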

Instrument-Dependent Perception

My telescopic observations overturned cosmology—Earth became one planet among others rather than creation’s center. What epistemology do neural networks overturn? Perhaps this: human perception is not objective truth but one biological solution among many. Networks see adversarial patterns invisible to us, lean on textures where we attend to shape, discover shortcuts we cannot articulate. Octopuses achieve color matching through skin-based photoreceptors rather than eye-based vision. If machines, animals, and humans see differently, whose vision is canonical?

The lesson extends beyond any single instrument: all perception is instrument-dependent, whether through retina, neural processing, convolutional layers, or distributed skin photoreceptors. My telescope taught that seeing requires instruments. Modern vision science teaches that instruments reveal no “view from nowhere”—only the patterns their particular structure makes visible.
