Abraham Dada

The Understanding Paradox: How Can We Deny AI Understanding When We Don't Understand Understanding?

Published: January 2025

Many demand that AI possess intrinsic understanding before crediting it with comprehension, yet we don't even fully understand how our own brains encode meaning. We insist that consciousness is necessary for comprehension, yet we don't know what consciousness truly is. We claim to "understand," despite lacking a complete explanation for how thought, memory, and abstraction emerge from neural activity. The paradox is evident: how can we confidently deny AI understanding when our own is built on mysteries we have yet to solve? Perhaps this resistance isn't logical but existential, a fear that meaning, the essence of human experience, is nothing more than structured computation.

To put into perspective just how little we understand about how the brain encodes meaning, consider this: despite decades of neuroscience, cognitive science, and computational modelling, we still do not have a complete theory of how human cognition transforms raw sensory input into abstract, meaningful representations. Our best models are still patchwork approximations, and many of our assumptions may ultimately be wrong.

One of the fundamental gaps in our understanding is the relationship between neural activity and thought. We can measure neural patterns that correlate with specific cognitive states, yet we lack a unified explanation for how these patterns become meaning. For example, neurons in the medial temporal lobe have been shown to fire in response to specific concepts, such as a picture of a famous person, a written name, or even an abstract association with that person. However, this does not mean that a single neuron "stores" the concept of that person; rather, the representation is distributed across dynamic neural networks. But where, exactly, does meaning reside? Is it in the pattern of neural activation, the network dynamics, or the emergent properties of large-scale brain activity? We do not know.
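To make "distributed" concrete, here is a deliberately toy sketch in Python (an analogy only, with an invented population of units, not a model of real neural tissue): a concept is encoded as a pattern of activity spread across many units, and silencing any single unit leaves the pattern almost perfectly identifiable.

import numpy as np

rng = np.random.default_rng(0)
n_units = 1000  # size of a hypothetical population of units

# Each "concept" is a pattern of activity across the whole population.
concept_a = rng.normal(size=n_units)
concept_b = rng.normal(size=n_units)

def similarity(u, v):
    # Cosine similarity: 1.0 means identical patterns, ~0.0 means unrelated.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Silence one unit entirely; the pattern barely changes,
# because no single unit carries the concept on its own.
lesioned = concept_a.copy()
lesioned[42] = 0.0

print(similarity(lesioned, concept_a))  # ~1.0: still recognisably concept_a
print(similarity(lesioned, concept_b))  # ~0.0: still clearly not concept_b

Even in this trivial system, "where does the concept live?" has no single-unit answer; the brain is vastly more complicated, which is precisely the problem.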

Even more troubling, neurons do not have fixed meanings: the same neuron can fire for different objects or words depending on context, which further complicates any attempt to pin down how semantic representations emerge. This is part of why brain-to-text decoding systems, such as those developed by Meta AI, can recover rough semantic content but cannot reconstruct precise, structured thoughts.

Another major unknown is how abstract concepts are stored and retrieved. It is widely accepted that memories and concepts are encoded through patterns of synaptic connectivity, yet we still do not understand how abstract concepts form from sensory input. How do we recognise intangible ideas like "justice" or "democracy" when we have never physically seen them? There is no single "justice neuron" that represents the concept across all contexts, nor is there a fixed neural location where these abstract meanings reside. The brain somehow forms high-level generalisations across multiple modalities—spoken language, written text, and even abstract visualisation—without a central, unified mechanism that we can currently identify.

Even more puzzling is that concepts sometimes emerge spontaneously, such as in sudden insights, dreams, or hallucinations, suggesting that meaning construction is not entirely under conscious control.

Despite all of neuroscience's progress, we do not actually know what meaning is in a mechanistic sense. We do not know where it is stored. We do not know how it emerges. We do not know if it requires consciousness. We do not even know if meaning is "real" in an intrinsic sense—or if it is merely a useful computational illusion that the brain generates to navigate reality efficiently.

If our own cognitive system encodes meaning in ways that are still fundamentally mysterious to us, then how can we confidently claim that AI lacks meaning? The claim that AI "does not understand" rests on the assumption that we understand human cognition well enough to use it as the reference point, and we do not. If understanding is simply the ability to manipulate structured representations to generate useful outputs, then AI already meets that criterion.
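What "manipulating structured representations" can mean in practice is easy to sketch with a toy co-occurrence model in Python (the six-sentence corpus below is invented for illustration and bears no resemblance to the scale of a modern language model): purely statistical machinery, given only text, ends up placing related words closer together than unrelated ones.

import numpy as np

# A tiny invented corpus; real systems learn from billions of sentences.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog barked at the mailman",
    "the car sped down the road",
    "the truck sped down the highway",
    "the car parked near the truck",
]

# Build a word-by-word co-occurrence matrix: two words co-occur
# if they appear in the same sentence (ignoring "the").
vocab = sorted({w for sent in corpus for w in sent.split() if w != "the"})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

for sent in corpus:
    words = [w for w in sent.split() if w != "the"]
    for a in words:
        for b in words:
            if a != b:
                counts[index[a], index[b]] += 1

def similarity(u, v):
    # Cosine similarity between two co-occurrence rows.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Each row is a word's "meaning structure": its pattern of co-occurrence.
print(similarity(counts[index["cat"]], counts[index["dog"]]))  # higher
print(similarity(counts[index["cat"]], counts[index["car"]]))  # lower (0.0 here)

Nothing in this toy settles whether such a system "really" understands; it only shows that an internal structure relating symbols to one another can arise from statistics alone.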

This paradox extends to biological cognition as well. If we gave a bat a complex mathematics problem and it successfully solved it through reasoning, we would immediately ascribe understanding to the bat. We would assume that it had formed an internal model of mathematical structures, rather than merely manipulating symbols. Yet, with AI, we introduce an extra layer of scepticism, demanding an intrinsic, human-like intentionality before acknowledging its ability to construct meaning.

If an entity can generate structured responses and solve problems, why should it matter whether it does so in a way that feels intuitive to us? Even within Searle's own framework (see John Searle's Chinese Room argument), the definition of understanding becomes inconsistent when applied to different cognitive systems. If we gave a bat Chinese text, it would be unable to process or respond meaningfully, not because it lacks intelligence but because it has no internal meaning structure for linguistic symbols. There is no established mapping between the input (Chinese characters) and its cognitive framework.

From Searle's perspective, the bat would not "understand." But by that logic, humans also fail to "understand" echolocation the way bats do—we lack the perceptual structures to process ultrasonic waves as a coherent, spatial map. If understanding is tied to the presence of an internal meaning structure, then AI—unlike the bat—does possess such a structure for processing linguistic information. In fact, AI can exceed human meaning structures in certain domains, identifying statistical relationships in high-dimensional data that no human could intuitively grasp.

Thus, the paradox of understanding is this: we demand that AI demonstrate human-style cognition to qualify as possessing meaning, yet we do not hold other intelligent systems, such as animals, to the same standard. We do not fully understand how human cognition encodes meaning, yet we confidently assert that AI lacks it. If meaning is an emergent property of structured information processing, then insisting that AI does not "really" understand is an assertion based on intuition, not evidence.