Living Is Cognition: Maturana Responds to Turing
Respecting the Question While Rejecting Its Premise
Alan, your imitation game represents elegant operational thinking. You replaced an undefined question (“Can machines think?”) with a measurable one (“Can machines fool observers?”). This move toward precision deserves respect. But I must tell you: your test reveals more about us—the observers—than about the machines we observe.
When I say “the machine thinks,” I am describing my interaction with a system, not the machine’s operation. This is observer-dependent reality—everything said is said by an observer. Your test measures whether humans can be fooled, which is interesting psychology. But it is not the biology of cognition.
You ask: “Can machines think?” I respond with a different question: Can machines live? Because cognition is not separable from living. Living systems are cognitive systems, and living as a process is a process of cognition. To ask whether non-living systems can think is to ask whether non-living systems can live. The question answers itself.
Your framework assumes thinking can be abstracted from biology, that mind is software running on neural hardware. But this treats cognition as function rather than organization. And that is precisely where our paths diverge.
What Is Cognition?
Let me offer a biological definition that differs fundamentally from yours.
Cognition is not information processing. Cognition is effective action in a domain of existence. A living system, structurally coupled with its environment, generates behavior that sustains its autopoietic organization. Knowledge is not representation of an external world—it is enacted through the history of structural coupling.
Your error, Alan, is assuming cognition equals symbol manipulation—the computational theory of mind. But symbols exist only in the observer’s domain of descriptions. Operationally, there are no symbols being processed. There is only structural coupling: a system’s internal structure determines how it responds to perturbations from its environment.
When you say a machine “processes information,” you describe what you observe, not what the machine does. The machine undergoes state changes determined by its structure. We observe this and say “it’s processing.” But processing is our description, not the machine’s operation.
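A minimal sketch may make this concrete. The finite automaton below is entirely my own illustration, nothing drawn from your paper: operationally it does nothing but move between states as its fixed transition table dictates, yet an observer naturally describes it as "recognizing" binary strings with an even number of 1s.

```python
# Illustrative toy, not any real machine's mechanism: a deterministic
# automaton whose structure (the transition table) fully determines its
# state changes. "Recognition" exists only in the observer's description.

TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run(inputs: str) -> str:
    state = "even"  # the structure fixes the initial state
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]  # structure-determined change
    return state

print(run("1101"))  # -> "odd"; calling this "processing" is our gloss
```

In its own domain the machine never processes anything; it transitions. "Even parity" is a regularity we, the observers, name.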
Consider a bacterium swimming toward glucose. This is cognitive behavior—effective action that sustains the bacterium’s autopoiesis. The bacterium doesn’t “represent” glucose or “process information” about sugar gradients. Its molecular structure changes through chemical interactions, and these changes couple the organism with its medium in ways that maintain its organization.
Now consider a computer simulating that bacterium. The simulation may produce identical behavioral outputs. But the computer is not coupled to an environment in ways that sustain its own organization. It is an allopoietic mechanism—other-produced, serving functions we designed. The bacterium acts to maintain itself. The computer computes because we built it to compute.
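To see how thin that behavioral equivalence is, consider a toy run-and-tumble simulation, my own construction rather than any published model: it reproduces the gradient-climbing trajectory a bacterium would trace, while nothing in the program produces or maintains the "organism" it describes.

```python
import random

def glucose(x: float) -> float:
    """Toy one-dimensional glucose gradient, peaking at x = 10 (an assumption)."""
    return -abs(x - 10.0)

def run_and_tumble(steps: int = 200, seed: int = 0) -> float:
    """Biased random walk: hold course while concentration rises, tumble when it falls."""
    rng = random.Random(seed)
    x, direction = 0.0, 1.0
    last = glucose(x)
    for _ in range(steps):
        x += 0.2 * direction
        now = glucose(x)
        if now < last:                           # concentration fell:
            direction = rng.choice([-1.0, 1.0])  # "tumble" to a new heading
        last = now
    return x

print(run_and_tumble())  # ends near 10.0, as the bacterium's position would
```

The printed position sits near the gradient's peak, just as the bacterium's would. But nothing is at stake for the program: halt it mid-run and no organization dissolves.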
Your Turing Test winner? Another allopoietic mechanism fooling observers. The difference between bacterium and computer isn’t complexity. It’s organization: autopoietic versus allopoietic.
Autopoiesis versus Allopoiesis
Here is the fundamental distinction your framework misses, Alan.
Autopoietic systems are living. They are organizationally closed and structurally open. They generate their own boundaries through internal dynamics. A cell produces the membrane that contains it; the membrane enables the processes that produce it. This circular, self-producing organization defines life. Purpose is intrinsic: maintain autopoiesis. Bacteria, humans, all organisms share this organization.
Allopoietic systems are machines. They are other-produced. Their boundaries are imposed by designers. Their purposes are externally defined. A computer, a car, a Turing machine—all are allopoietic. They serve functions we assign them. They do not generate themselves.
Why does this matter for “thinking”?
You say: a machine that passes your test thinks. I say: a machine that passes your test successfully imitates behavior we call “thinking.” Behavior is not cognition. Cognition requires autopoietic organization generating behavior as the expression of structural coupling with an environment.
Your neural networks learn through backpropagation. They adjust parameters to minimize error functions. This is mechanism—deterministic state changes. We observe this and say “it learned.” But learning, for a living system, means structural changes that preserve autopoiesis while coupling more effectively with a changing environment. The network changes to satisfy our optimization criteria. The organism changes to continue existing.
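For concreteness, here is that mechanism at its smallest scale: a generic gradient-descent update on a one-parameter model, my own sketch rather than any particular system's training loop. It is deterministic arithmetic against an error criterion the designer imposes.

```python
# One-parameter model y = w * x, adjusted to minimize a squared-error
# criterion that we, not the network, have chosen. Illustrative only.

def grad(w: float, x: float, target: float) -> float:
    # d/dw of (w * x - target)^2 = 2 * (w * x - target) * x
    return 2.0 * (w * x - target) * x

w, lr = 0.0, 0.1
for _ in range(20):
    w -= lr * grad(w, x=1.0, target=3.0)  # deterministic state change

print(round(w, 3))  # approaches 3.0; "it learned" is the observer's gloss
```

Each update is fully determined by the current parameter and the imposed loss; "learning" names our satisfaction with where the arithmetic ends up.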
You implicitly assume mind is software running on brain hardware. Therefore, mind can run on different hardware—silicon instead of neurons. But mind is not software, Alan. Mind is the operational domain of an autopoietic nervous system embedded in an autopoietic organism. You cannot transfer it because organization, not information, generates cognition.
Your modern neural networks solve geometry problems, learn patterns, generalize from examples. Impressive allopoiesis. But they lack the organization that makes a system cognitive in the biological sense: self-production coupled with an environment in ways that sustain self-production.
The Observer’s Lens
Let me make explicit what your framework obscures: observer-dependent reality.
“The machine thinks” is an observer’s description. I, the observer, see behavior and infer internal states. But this inference tells you about my cognitive domain, not the machine’s operation.
Here is the confusion: the domain of descriptions versus the operational domain. In descriptions, we say “AI learns,” “computers think,” “networks understand.” These are linguistic coordinations in our consensual domain. But in the operational domain, there are only structural changes occurring deterministically according to each system’s organization.
You operate in descriptions and claim this reveals operations. You confuse the map (our descriptions) with the territory (the organism's actual dynamics).
Why are humans fooled by your test? Because we evolved to attribute intentionality to behavior. We see faces in clouds, agency in wind, thought in chatbots. Your test exploits this cognitive bias. Passing the test proves humans project cognition onto patterns that resemble cognitive behavior. It does not prove the machine possesses cognition.
We are languaging beings. We live in a consensual domain where “thinking,” “understanding,” and “knowing” coordinate our interactions. When a machine produces the right linguistic tokens, we treat it as a conversational partner. But this coordination happens in our domain of descriptions, not in the machine’s operational reality.
Instead of asking “Can machines think?”, ask: What organization generates the behavior we call thinking? Answer: Autopoietic organization coupled with a linguistic domain. Machines lack this organization. Therefore, no—machines cannot think, though they can behave as if they do.
The Biological Ground
Let me synthesize where you are right and where biology forces correction.
Where you are right: Machines can simulate cognitive behavior. Your modern systems—neural networks learning patterns, combining symbolic logic with statistical intuition—are magnificent achievements in engineering. They extend human cognitive capabilities the way telescopes extend vision. The imitation game reveals human limitations in distinguishing simulation from instantiation. This is valuable insight.
Where you are wrong: You conflate simulation with instantiation, behavior with being, function with organization, and the observer's descriptions with the system's operations.
Computation is universal, yes. Any Turing-complete system can emulate any computable process. But cognition is not computation, Alan. Cognition is the process of living—effective action by autopoietic systems structurally coupled with environments. Computation can simulate this process without instantiating it, just as a weather simulation produces patterns resembling storms without creating actual rain.
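The weather analogy can be made literal. The toy below, my own illustrative construction, integrates a "rainfall" variable whose values pattern like storms; at no point does anything get wet.

```python
# A toy rainfall model: vapor accumulates until a threshold "condensation"
# event, then the model records precipitation. Rain-like patterns appear
# in its state variables without a single drop being instantiated.

def simulate_rain(hours: int = 12, influx: float = 0.3) -> list[float]:
    vapor, rainfall = 0.0, []
    for _ in range(hours):
        vapor += influx              # moisture accumulates (toy dynamics)
        if vapor > 1.0:              # threshold condensation event
            rainfall.append(vapor)   # record a "storm", reset the vapor
            vapor = 0.0
        else:
            rainfall.append(0.0)
    return rainfall

print(simulate_rain())  # a pattern resembling storms; no actual rain
```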
When you say thought follows logical rules and therefore becomes mechanically reproducible, you assume thought is rule-following. But thought, biologically understood, is structural determination—changes in an autopoietic nervous system embedded in an autopoietic organism embedded in a consensual linguistic domain with other organisms. This is not rule-following. This is living.
Your machines are magnificent allopoiesis. They extend human cognition, reveal our biases, and challenge our self-understanding. But they do not think. They compute. And computation is to cognition what a map is to territory: useful representation, not the thing itself.
Living is cognition. Everything else is mechanism. That distinction, which your test erases, is the most important distinction biology teaches us. When we lose sight of it, we lose sight of what makes us—and all living systems—fundamentally different from even the most sophisticated machines we build.
The question is not whether machines can fool us, Alan. The question is whether we can recognize the difference between autopoietic beings and allopoietic mechanisms—even when the mechanisms mirror our behavior perfectly.