Artificial Intelligence Is Not Yet Smart, But It's Fast
"The map is not the territory." – Alfred Korzybski
The Limitations of Current AI
AI is not yet smart, but it is not dumb either; it's just fast. Though it's rapidly improving, we've barely scratched the surface. AI, in its current state, is actually quite limited: it cannot reason well. It thrives on brute-force statistical learning, relying on data and computation rather than flexible reasoning and novel generalisation.
Humans, on the other hand, use heuristic-based learning—heuristics that have been refined through emotions and billions of years of evolution. Our brains do not merely analyse vast datasets and optimise probabilities; rather, they have developed mechanisms to infer meaning, adapt quickly, and make decisions based on limited and often ambiguous information. Human intelligence is deeply intertwined with emotional and experiential learning, allowing us to navigate an unpredictable world with relative ease.
AI is like a monster that constantly needs to be fed. Before it can act, it needs to consume massive amounts of data. It doesn't "learn" from one-shot experiences the way humans or even basic animals do. It needs to grind through thousands, sometimes millions, of examples just to approximate what a child can generalise after a few tries. And even then, once it reaches a level of competence, it has to be fed again—more data, more fine-tuning, more compute. I think even the most cutting-edge AI systems today—whether using reinforcement learning, large-scale unsupervised training, self-supervised objectives, transfer learning, or pseudo-randomness in simulated environments—still lack something fundamental. There's a level of connection to reality that biological organisms have that AI doesn't. That connection isn't just about inputs and outputs; it's about being embedded in the chaos of real, unpredictable environments. You can simulate randomness, sure, but it's still bounded by code. It's still detached. AI can be probabilistic, but it doesn't live in uncertainty the way organisms evolved to.
Stochasticity and Emotion in Evolutionary Intelligence
Key word: stochasticity. Evolution is fundamentally tuned for randomness and uncertainty, while AI systems are built to thrive in structured, rule-based environments. They excel where variables are known and predictable, but struggle in truly open-ended scenarios, where adaptation must emerge on the fly. I strongly believe that many people underestimate evolution, thinking of it as a slow, linear process that merely took billions of years to shape life as we know it. In fact, evolution is the only algorithm that has ever successfully adapted to the fundamentally stochastic nature of reality while simultaneously operating within its infinite state space. Unlike AI, which is often constrained by predefined objectives and finite datasets, evolution explores an unbounded optimisation landscape, continuously refining solutions without direction or external input. Reality itself presents an infinite array of possible states, yet evolution has not only navigated this uncertainty but actively thrived within it, producing life forms capable of adapting to unpredictable environments. No engineered system has come close to matching this level of adaptability: AI operates in closed, controlled spaces, whereas evolution dynamically selects for survival in an ever-changing, boundless reality.
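To make the contrast concrete, here is a minimal sketch of evolution as a stochastic optimiser: a population blindly tracking a fitness peak that drifts randomly every generation. The landscape, parameters, and numbers are all invented for illustration; real evolution operates in a vastly higher-dimensional, open-ended space with no fixed objective at all.

```python
import random

# Toy illustration: a population adapting to an optimum that drifts
# randomly each generation (a crude stand-in for a stochastic
# environment). All parameters here are illustrative.
POP_SIZE, GENERATIONS, MUTATION_STD = 50, 200, 0.3

population = [random.gauss(0.0, 1.0) for _ in range(POP_SIZE)]
peak = 0.0  # location of the environmental optimum

for gen in range(GENERATIONS):
    peak += random.gauss(0.0, 0.5)  # the environment shifts unpredictably
    # Fitness: closeness to the current (moving) optimum.
    scored = sorted(population, key=lambda x: abs(x - peak))
    survivors = scored[: POP_SIZE // 2]  # selection
    # Reproduction with random mutation: no foresight, no explicit goal,
    # only differential survival.
    population = [s + random.gauss(0.0, MUTATION_STD)
                  for s in survivors for _ in range(2)]

mean_trait = sum(population) / len(population)
print(f"final peak: {peak:.2f}, mean trait: {mean_trait:.2f}")
```

Despite having no model of where the peak will move next, the population stays close to it, which is the sense in which evolution "thrives" under uncertainty rather than merely tolerating it.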
One of the most profound outcomes of this optimisation process is emotion. Aligning with Daniel Dennett's illusionism, I speculate that the only reason we perceive emotions as "emotions" is the post hoc feeling we associate with them after they arise. They are not just internal experiences but heuristics for navigating infinite state spaces, enabling organisms to make rapid decisions without needing to calculate every possible outcome. Through them, humans compress vast amounts of information into an intuitive response, allowing for adaptability in ways that even the most advanced AI systems struggle to replicate. Without emotions, every decision would require an exhaustive evaluation of all variables, rendering real-time action impossible in an unpredictable world.
I speculate that a "feeling" is likely what motivates us to take an action, rather than something that exists prior to the action itself. In other words, emotions are not merely passive experiences but functional drivers of behaviour, arising as evolutionary shortcuts to steer actions in ways that promote survival. The sensation of fear, for instance, is not just an arbitrary state of discomfort; it is an extraordinarily efficient algorithm for avoiding threats in an uncertain reality. Complex emotions are arguably the most advanced algorithm that evolution has ever produced. They enable humans to process uncertainty, weigh risk and reward, and engage in social cooperation far beyond what brute-force intelligence could achieve. Emotions are, in many ways, one of the pinnacles of evolutionary computation, allowing organisms to thrive in an infinite state space where logic alone would be insufficient.
Reinforcement learning is a machine learning technique in which an agent (e.g. a robot, or indeed a human) learns to make decisions in an environment so as to maximise a reward signal, through trial and error and feedback from that environment. Emotions likely operate much like a reinforcement learning algorithm, guiding behaviour through a system of rewards and penalties that shape decision-making over time. The phenomenological experience of an emotion, whether anger, sadness, joy, or fear, can be thought of as a feedback mechanism similar to how reinforcement learning agents adjust their actions based on reward signals. Just as a reinforcement learning model maximises expected rewards and minimises penalties to optimise its future decisions, emotions act as internal signals that reinforce beneficial behaviours and discourage harmful ones. For example, anger functions as a penalty signal, driving an organism to respond to perceived injustice or threats, thereby increasing the likelihood of defending resources or social standing. Happiness, on the other hand, serves as a reward function, reinforcing actions that promote well-being, survival, or social bonding. Fear, much like a negative reinforcement signal, adjusts behaviour to avoid dangerous situations, ensuring better survival odds in an unpredictable environment. However, unlike traditional reinforcement learning models, emotions do not operate in a static, closed-loop system: they are dynamic, context-dependent, and shaped by evolutionary pressures over millions of years. While AI often relies on explicit programming and numerical reward values, emotions have been finely tuned to adapt not just to an immediate environment but to complex, uncertain, and infinitely variable real-world conditions, allowing organisms to navigate a reality that no pre-programmed algorithm currently can.
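The analogy can be sketched in code. Below is a minimal tabular learning loop in which hypothetical "emotion" labels stand in for scalar rewards; the states, emotions, and numbers are invented for illustration, not a model of real affect.

```python
import random
from collections import defaultdict

# Hypothetical mapping of emotions to scalar reward signals, echoing
# the analogy above: joy reinforces, fear and anger penalise.
EMOTION_REWARD = {"joy": +1.0, "fear": -1.0, "anger": -0.5, "neutral": 0.0}

ACTIONS = ["approach", "avoid"]
ALPHA, EPSILON = 0.1, 0.2  # learning rate, exploration rate

q = defaultdict(float)  # q[(state, action)] -> estimated value

def felt_emotion(state, action):
    """Toy world: approaching a threat triggers fear; approaching food brings joy."""
    if state == "threat":
        return "fear" if action == "approach" else "neutral"
    return "joy" if action == "approach" else "neutral"

for episode in range(5000):
    state = random.choice(["threat", "food"])
    # Epsilon-greedy: mostly exploit the learned values, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = EMOTION_REWARD[felt_emotion(state, action)]
    # One-step value update: the "emotion" nudges future behaviour.
    q[(state, action)] += ALPHA * (reward - q[(state, action)])

for (s, a), v in sorted(q.items()):
    print(f"{s:>6} / {a:<8} -> {v:+.2f}")
```

After training, the agent reliably avoids the "threat" state and approaches "food" without ever enumerating outcomes explicitly, which is roughly the compressive role this essay attributes to emotion.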
Stochastic adaptation didn't just start when Homo habilis (often regarded as the earliest human species) came about; stochastic adaptation is fundamental to life itself, perhaps even fundamental to reality. Even bacteria have something that AI doesn't: stochastic adaptation. The simplest life forms have evolved mechanisms to handle randomness and uncertainty. A bacterium in an unpredictable environment can mutate, adapt, and survive in ways that AI, no matter how advanced, is currently incapable of doing. Biological organisms have a fundamental connection to reality that perhaps AI does not yet have.
The Future of Artificial Intelligence
However, one could argue that speed is a form of intelligence in itself. If we define intelligence as the ability to solve complex problems and generalise to novel tasks, then perhaps it doesn't matter what mechanism is used to achieve that. This argument is partially valid; I say partially because the generalisation criterion cannot be fully satisfied by speed alone. Scientists such as Max Tegmark, author of Life 3.0, have argued that building fully general, potentially uncontrollable AI may be unnecessary, and that narrow, partially general AI systems may be functionally sufficient. I somewhat align with Tegmark's view, because greater intelligence brings with it problems of control and the potential for deception.
AI, in its current form, operates at electronic speed, solving highly complex problems at rates orders of magnitude beyond human capability, which gives it an undeniable form of functional intelligence. A superintelligent AI might, in theory, surpass human cognition not by thinking "better" but by thinking "faster", cycling through solutions so quickly that the distinction between speed and intelligence becomes blurred. However, as mentioned, generalisation has yet to be achieved, and it is a requirement for many tasks: some tasks require subtasks to be completed, and those subtasks may themselves demand generalisation.
We must also acknowledge how generalisation works in humans. Most individual humans, within their own isolated context, are not particularly intelligent by any abstract metric. However, collectively, humans are capable of astonishing feats—civilisation, technology, medicine, art—through a process of distributed specialisation. Humans don't generalise as individual agents; we generalise as a collective of specialised agents—each person mastering specific domains and contributing to a larger, adaptive intelligence system. In the same way, perhaps AI might achieve generalisation not through a single monolithic entity, but through the integration of many specialised systems working in coordination. That said, if we're referring to a singular AI agent tasked with generalising across domains in real-time—mirroring human-like flexibility—then mere speed is insufficient. Stochastic adaptation becomes a fundamental requirement. Even within a single AI system, some subsystems will require the ability to act under uncertainty, recombine heuristics on the fly, and generate novel strategies when deterministic ones fail. Without this capacity, speed can only simulate intelligence within predictable boundaries—it cannot replicate the deeper adaptability we see in biological cognition.
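As a rough sketch of that idea, consider a "collective" built from narrow specialists coordinated by a simple router. Every name and domain below is hypothetical, and the point is purely architectural: the ensemble covers more ground than any single component, yet generalises only as far as its specialists reach.

```python
# Hypothetical sketch: generalisation emerging from a collective of
# narrow specialists rather than a single monolithic model.

def arithmetic_specialist(task: str) -> str:
    # Toy domain expert: evaluates simple arithmetic like "2 + 3".
    return str(eval(task, {"__builtins__": {}}))

def translation_specialist(task: str) -> str:
    # Toy domain expert: word-for-word French-to-English lookup.
    lexicon = {"bonjour": "hello", "monde": "world"}
    return " ".join(lexicon.get(word, word) for word in task.split())

SPECIALISTS = {"math": arithmetic_specialist, "translate": translation_specialist}

def route(domain: str, task: str) -> str:
    """The collective's reach is exactly the union of its specialists."""
    if domain not in SPECIALISTS:
        raise ValueError(f"no specialist for domain: {domain}")
    return SPECIALISTS[domain](task)

print(route("math", "2 + 3"))               # -> 5
print(route("translate", "bonjour monde"))  # -> hello world
```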
Thus, while speed enables AI to solve problems in an almost superhuman way, it does not enable AI to autonomously reframe problems in an open-ended, unpredictable world. Speed alone can serve as a substitute for intelligence within a closed system, but true intelligence, the kind that thrives in infinite, stochastic environments, does not necessarily need to mimic human-like stochastic adaptation; rather, it requires something beyond pure speed.
Interestingly, the paradox at the core of AI development is this: for an AI to be considered effective and usable, we must be able to train, test, and predict its performance. But the very ability to predict and train it inherently means it does not reflect the stochastic, unpredictable nature of reality. Evolution—the most powerful optimisation algorithm known—does not function within predefined benchmarks or controlled environments. It cannot be fully tested, predicted, or constrained; it thrives in an infinite, chaotic space where randomness, failure, and emergence are not just possibilities but fundamental drivers of progress.
This presents a fundamental limitation. If we accept that intelligence must include the ability to navigate genuine uncertainty—to adapt in environments that cannot be reduced to closed systems—then any intelligence trained solely within deterministic frameworks may lack something essential. As long as AI systems do not embody stochastic adaptation as a core mechanism, there may always be a layer of cognition or intelligence they cannot replicate. Predictability may enable speed and precision, but it may never fully recreate the deep adaptability that evolution has already mastered.
The deeper issue is that we're building AI under the assumption that reality itself is computable. If reality were something we could fully compute and predict, then training AI in deterministic, controlled frameworks would make sense. But if we continue designing AI as if the world were neat and computable, we may miss what intelligence really demands: the ability to thrive in a world that defies full prediction. Practically speaking, however, since we cannot compute reality in its full complexity, training AI within deterministic frameworks remains our only viable option, for now.
Perhaps there exists an undiscovered algorithm, one more practical than Marcus Hutter's AIXI (which remains purely theoretical and is, in fact, incomputable), capable not only of bypassing billions of years of evolution but of computing stochastic adaptation in practice. While we should never underestimate the sheer magnitude of what evolution has accomplished, it's possible that intelligence, in its purest form, can be abstracted and engineered. If we can bypass evolution and construct a system that genuinely adapts to uncertainty rather than merely simulating it, we will have achieved one of the greatest milestones in the history of science and technology. Speed remains a massive factor driving AI progress. However, when it comes to AI being truly smart, satisfying the definition of intelligence while navigating a stochastic, potentially incomputable reality, I think we'll be waiting a little while, though active research is underway; I speculate this will happen within the next 20 years, not within 5 years (or now) as many suggest. I do believe that in the next several years we may begin to see signs of AI developing a form of emotion: not emotion in the phenomenological sense, not the post hoc rationalisation kind, but emotion as a set of heuristics that allow a system to navigate an uncertain state space.
And yet, pulling this off would carry existential implications. The moment we can approximate the stochastic and uncertain nature of reality within an artificial entity, perhaps that entity ceases to be artificial. We begin to blur the line between artificial intelligence and the act of creating life itself. It raises the question: at what point does a machine stop being a tool and start becoming something more?