Artificial Intelligence Is Not Yet Smart, But It's Fast

Published: March 2025
"The map is not the territory." – Alfred Korzybski

The Limitations of Current AI

AI is not yet smart, but it is not dumb either; it's just fast. Though it's rapidly improving, we've barely scratched the surface. AI, in its current state, is actually quite limited: it cannot reason well. It thrives on brute-force statistical learning, relying on data and computation rather than flexible reasoning and novel generalisation.

To understand this limitation mathematically, consider how modern AI systems learn through gradient descent optimisation—essentially, how they adjust their internal parameters to improve performance:

$$\theta_{t+1} = \theta_t - \alpha \nabla_\theta \mathcal{L}(\theta_t)$$

This equation shows how an AI system updates its parameters $\theta$ (the neural network weights) by moving in the direction that reduces error, where $\alpha$ is the learning rate (how large each update step is) and $\nabla_\theta \mathcal{L}$ is the gradient (the direction that reduces error most). The loss function $\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \ell(f_\theta(x_i), y_i)$ measures how wrong the AI's predictions are across $N$ training examples, where $N$ often needs to be in the millions for decent performance.
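
To make the update rule concrete, here is a minimal sketch of gradient descent on a least-squares loss. The synthetic data, the linear model, and the learning rate are illustrative assumptions, not a description of any particular system:

```python
import numpy as np

# Minimal sketch of gradient descent: theta_{t+1} = theta_t - alpha * grad L(theta_t).
# The data, model, and hyperparameters below are invented for illustration.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # N = 1000 training examples, 3 features
true_theta = np.array([2.0, -1.0, 0.5])
y = X @ true_theta + rng.normal(scale=0.1, size=1000)

theta = np.zeros(3)                       # initial parameters
alpha = 0.1                               # learning rate

for t in range(200):
    errors = X @ theta - y
    loss = np.mean(errors ** 2)           # L(theta) = (1/N) * sum of squared errors
    grad = 2 * X.T @ errors / len(y)      # gradient of the loss w.r.t. theta
    theta = theta - alpha * grad          # the update rule from the equation above

print(theta, loss)  # theta approaches true_theta only after many examples and iterations
```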

Humans, on the other hand, use heuristic-based learning—heuristics that have been refined through emotions and billions of years of evolution. Our brains do not merely analyse vast datasets and optimise probabilities; rather, they have developed mechanisms to infer meaning, adapt quickly, and make decisions based on limited and often ambiguous information. Human intelligence is deeply intertwined with emotional and experiential learning, allowing us to navigate an unpredictable world with relative ease.

The human brain operates more like a Bayesian system with extremely strong evolutionary priors. Bayes' theorem shows how we update beliefs when we get new information: $P(\text{hypothesis}|\text{data}) = \frac{P(\text{data}|\text{hypothesis}) \cdot P(\text{hypothesis})}{P(\text{data})}$. The key difference is that $P(\text{hypothesis})$—our prior beliefs before seeing data—comes from millions of years of evolution, allowing us to learn rapidly from minimal examples because we already "know" a lot about the world.
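
As a toy illustration of how a strong prior changes learning speed, here is a minimal sketch using a Beta-Bernoulli model; the prior strengths and the four observations are invented for illustration, not a model of actual human cognition:

```python
# Bayes' rule in closed form for a Beta prior and Bernoulli data:
# P(hypothesis | data) is proportional to P(data | hypothesis) * P(hypothesis).

def posterior_mean(successes, failures, prior_a, prior_b):
    # A Beta(prior_a, prior_b) prior updated with Bernoulli observations
    # yields a Beta posterior; this returns its mean.
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

data = [1, 1, 0, 1]  # four observations of some binary event
s, f = sum(data), len(data) - sum(data)

strong = posterior_mean(s, f, prior_a=40, prior_b=10)  # stand-in for evolutionary priors
weak   = posterior_mean(s, f, prior_a=1,  prior_b=1)   # uninformed prior

print(f"strong prior posterior: {strong:.2f}")  # stays close to what it already "knew"
print(f"weak prior posterior:   {weak:.2f}")    # swings heavily on just four data points
```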

[Diagram: internal predictive models (prior beliefs P(hypothesis), prediction generation, error calculation, belief updates) in a feedback loop with an unpredictable external environment (sensory input, environmental complexity, unpredictable events); humans bring strong evolutionary priors into the loop, while AI, with weak or no priors, relies almost entirely on the error signal.]

Bayesian brain dynamics: Human cognition uses strong evolutionary priors to rapidly update beliefs, while AI systems require massive datasets to approximate similar predictive capabilities.

AI is like a monster that constantly needs to be fed. Before it can act, it needs to consume massive amounts of data. It doesn't "learn" from one-shot experiences the way humans or even basic animals do. It needs to grind through thousands, sometimes millions, of examples just to approximate what a child can generalise after a few tries. And even then, once it reaches a level of competence, it has to be fed again—more data, more fine-tuning, more compute.

I think even the most cutting-edge AI systems today—whether using reinforcement learning, large-scale unsupervised training, self-supervised objectives, transfer learning, or pseudo-randomness in simulated environments—still lack something fundamental. There's a level of connection to reality that biological organisms have that AI doesn't. That connection isn't just about inputs and outputs; it's about being embedded in the chaos of real, unpredictable environments.

This can be understood through information theory. Entropy $H(X) = -\sum_{i} P(x_i) \log P(x_i)$ measures the unpredictability or randomness in a system: the higher the entropy, the more surprises the environment can throw at you. Real environments have effectively unbounded entropy, while AI training environments have artificially constrained entropy, with $H(X)$ limited by human engineering.
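
A minimal sketch of the entropy calculation, with two invented distributions standing in for a narrow, engineered training environment and a far more open-ended one:

```python
import numpy as np

# Shannon entropy H(X) = -sum_i P(x_i) * log2 P(x_i), measured in bits.

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # ignore zero-probability states
    return -np.sum(p * np.log2(p))

constrained = [0.7, 0.2, 0.05, 0.05]   # few states, mostly predictable
open_ended = np.full(1024, 1 / 1024)   # many equally likely states

print(entropy(constrained))  # ~1.26 bits
print(entropy(open_ended))   # 10 bits; grows without bound as the state count grows
```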

You can simulate randomness, sure, but it's still bounded by code. It's still detached. AI can be probabilistic, but it doesn't live in uncertainty the way organisms evolved to.

Stochasticity and Emotion in Evolutionary Intelligence

Key word—Stochasticity. Evolution is fundamentally tuned for randomness and uncertainty, while AI systems are built to thrive in structured, rule-based environments. They excel where variables are known and predictable—but struggle in truly open-ended scenarios, where adaptation must emerge on the fly.

Evolution operates as a stochastic optimisation algorithm that can be simplified as: $f_{t+1} = f_t + \mu \cdot \mathcal{N}(0, \sigma^2) + S(f_t, E_t)$. This equation shows how fitness $f_t$ changes over generations through random mutations (the $\mu \cdot \mathcal{N}(0, \sigma^2)$ term representing random genetic variation) plus selection pressure $S(f_t, E_t)$ from the environment. Crucially, the environment $E_t$ itself is constantly changing and unpredictable.
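
The following sketch simulates that update rule under an assumed selection function $S$ and a randomly drifting environment $E_t$; both are simple stand-ins for illustration, not a model of real evolutionary dynamics:

```python
import numpy as np

# Sketch of the update f_{t+1} = f_t + mu * N(0, sigma^2) + S(f_t, E_t),
# with an invented selection term and a drifting environmental optimum.

rng = np.random.default_rng(1)
mu, sigma = 0.5, 1.0
f = 0.0   # current "fitness"
E = 0.0   # current environmental optimum

for t in range(1000):
    E += rng.normal(scale=0.1)                # the environment itself drifts unpredictably
    mutation = mu * rng.normal(scale=sigma)   # random genetic variation
    selection = 0.2 * (E - f)                 # assumed pressure toward the moving optimum
    f = f + mutation + selection              # the update from the equation above

print(f, E)  # fitness tracks an optimum that never stops moving
```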

[Image: rolling dice illustration]
Evolution thrives on genuine randomness while AI operates in predictably bounded environments.

I strongly believe that many people underestimate evolution, thinking of it as a slow, linear process that merely took billions of years to shape life as we know it. Yet evolution is the only algorithm that has ever successfully adapted to the fundamental stochastic nature of reality while simultaneously operating within its infinite state space. Unlike AI, which is often constrained by predefined objectives and finite datasets, evolution explores an unbounded optimisation landscape, continuously refining solutions without direction or external input. Reality itself presents an infinite array of possible states, yet evolution has not only navigated this uncertainty but has actively thrived within it, producing life forms capable of adapting to unpredictable environments. No engineered system has come close to matching this level of adaptability: AI operates in closed, controlled spaces, whereas evolution dynamically selects for survival in an ever-changing, boundless reality.

One of the most profound outcomes of this optimisation process is emotion. Aligning with Daniel Dennett's illusionism, I speculate that the only reason we perceive emotions as "emotions" is the post hoc feeling we associate with them after they arise. They are not just internal experiences but heuristics for navigating infinite state spaces, enabling organisms to make rapid decisions without needing to calculate every possible outcome.

Emotions can be understood as biological value functions—mathematical representations of how good a situation is for long-term survival. The value function $V^\pi(s) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t R_{t+1} | S_0 = s\right]$ calculates the expected total future reward from any state $s$, where $\gamma$ is a discount factor (future rewards matter less than immediate ones) and $R_{t+1}$ represents the rewards received over time. Emotions compress these complex calculations into rapid gut responses, operating in effectively infinite state spaces ($|\mathcal{S}| \to \infty$), unlike constrained AI systems.
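
As a small illustration of what such a value function summarises, here is a sketch of a discounted return; the reward sequence and discount factor are invented for illustration:

```python
import numpy as np

# Discounted return: V(s) = E[ sum_t gamma^t * R_{t+1} ] for one sample trajectory.
# The point is only that a long stream of future outcomes collapses into one scalar.

gamma = 0.95
rewards = np.array([0.0, 0.0, 1.0, 0.0, -2.0, 1.0, 1.0, 0.0, 3.0, 0.0])

discounts = gamma ** np.arange(len(rewards))
value = np.sum(discounts * rewards)   # one number standing for the whole future

print(round(value, 3))  # a crude analogue of a "gut feeling" about the starting state
```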

Humans compress vast amounts of information into an intuitive response, allowing for adaptability in ways that even the most advanced AI systems struggle to replicate. Without emotions, every decision would require an exhaustive evaluation of all variables, rendering real-time action impossible in an unpredictable world.

I speculate that a "feeling" is likely what motivates us to take an action, rather than the feeling existing prior to the action itself. In other words, emotions are not merely passive experiences but functional drivers of behaviour, arising as evolutionary shortcuts to steer actions in ways that promote survival. The sensation of fear, for instance, is not just an arbitrary state of discomfort—it is the most efficient algorithm for avoiding threats in an uncertain reality. Complex emotions are arguably the most advanced algorithm that evolution has ever produced. They enable humans to process uncertainty, weigh risk and reward, and engage in social cooperation far beyond what brute-force intelligence could achieve. Emotions are, in many ways, one of the pinnacles of evolutionary computation, allowing organisms to thrive in an infinite state space where logic alone would be insufficient.

Reinforcement learning is a machine learning technique in which an agent (a program, a robot, or, by analogy, a human) learns to make decisions in an environment so as to maximise a reward signal, through trial and error and feedback from the environment. Emotions likely operate much like a reinforcement learning algorithm, guiding behaviour through a system of rewards and penalties that shape decision-making over time.

The Bellman equation is fundamental to understanding optimal decision-making: $Q^*(s,a) = \mathbb{E}[R_{t+1} + \gamma \max_{a'} Q^*(S_{t+1}, a') | S_t = s, A_t = a]$. This equation says that the value of taking action $a$ in state $s$ equals the immediate reward $R_{t+1}$ plus the discounted value of the best future action. Emotions can be seen as biological approximations of this: fear estimates negative expected outcomes $-\mathbb{E}[\text{danger}|s,a]$, while joy estimates positive survival rewards.
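
To make the Bellman recursion concrete, here is a minimal tabular Q-learning sketch on an invented five-state chain; the environment, rewards, and hyperparameters are illustrative assumptions, not a model of biological emotion:

```python
import numpy as np

# Tabular Q-learning approximating the Bellman equation:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

rng = np.random.default_rng(2)
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:                            # terminal state at the right end
        greedy = int(np.argmax(Q[s]))
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else greedy
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else -0.01    # reward at the goal, small cost elsewhere
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # learned policy: move right in every non-terminal state
```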

The phenomenological experience of an emotion—feeling anger, sadness, joy, or fear—can be thought of as a feedback mechanism similar to how reinforcement learning agents adjust their actions based on reward signals. Just as a reinforcement learning model maximises expected rewards and minimises penalties to optimise its future decisions, emotions act as internal signals that reinforce beneficial behaviours and discourage harmful ones. For example, anger functions as a penalty signal, driving an organism to respond to perceived injustice or threats, thereby increasing the likelihood of defending resources or social standing. Happiness, on the other hand, serves as a reward function, reinforcing actions that promote well-being, survival, or social bonding. Fear, much like a negative reinforcement signal, adjusts behaviour to avoid dangerous situations, ensuring better survival odds in an unpredictable environment.

However, unlike traditional reinforcement learning models, emotions do not operate in a static, closed-loop system: they are dynamic, context-dependent, and shaped by evolutionary pressures over millions of years. While AI often relies on explicit programming and numerical reward values, emotions have been finely tuned to adapt not just to an immediate environment but also to complex, uncertain, and infinitely variable real-world conditions, allowing organisms to navigate reality in a way that no pre-programmed algorithm currently can.

Stochastic adaptation didn't just start when Homo habilis (commonly regarded as the first human species) came about; it is fundamental to life itself, perhaps even fundamental to reality. Even bacteria have something that AI doesn't have: stochastic adaptation. Even the simplest life forms have evolved mechanisms to handle randomness and uncertainty. A bacterium in an unpredictable environment can mutate, adapt, and survive in ways that AI, no matter how advanced, is currently incapable of doing. Biological organisms have a fundamental connection to reality that perhaps AI does not yet have.

The Future of Artificial Intelligence

However, one could argue that speed is a form of intelligence itself. If we define intelligence as the ability to solve complex problems and generalise to novel tasks, then perhaps it doesn't matter what mechanism is used to achieve that. This is partially valid—I say partially because the generalisation criterion cannot be fully satisfied even with speed. Scientists such as Max Tegmark, author of Life 3.0, have argued that building fully general, potentially uncontrollable AI may be unnecessary, and that narrow, partially general AI systems may be functionally sufficient. I somewhat align with Tegmark's view, because highly general intelligence introduces control problems and the potential for deception.

AI, in its current form, operates at electronic speed, solving highly complex problems orders of magnitude faster than any human can, which gives it an undeniable form of functional intelligence. A superintelligent AI might, in theory, surpass human cognition not by thinking "better" but by thinking "faster"—cycling through solutions so quickly that the distinction between speed and intelligence becomes blurred.

This speed advantage is substantial: human neurons fire at roughly 200 Hz (200 times per second) while modern AI processors operate at GHz frequencies (billions of times per second)—roughly $5 \times 10^6$ times faster. This massive speed difference potentially allows AI to brute-force explore solution spaces that humans simply cannot access due to biological processing limitations.
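
As a quick sanity check on that ratio, assuming a 1 GHz clock as the point of comparison:

```python
# Back-of-the-envelope ratio cited above: ~200 Hz neurons vs a 1 GHz processor (an assumed figure).
neuron_rate = 200        # Hz, rough cortical firing rate used in the text
processor_rate = 1e9     # Hz, one cycle per nanosecond at 1 GHz

print(processor_rate / neuron_rate)  # 5000000.0, i.e. roughly 5 x 10^6 times faster
```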

[Diagram: human intelligence (biological neurons at roughly 200 Hz, evolutionary priors built over millions of years, stochastic adaptation to real uncertainty, one-shot learning from few examples, embodied physical experience) contrasted with AI intelligence (electronic processors at GHz frequencies, statistical learning over massive datasets, deterministic logic in bounded environments, data-hungry training on millions of examples, digital abstraction without physical grounding), with both paths feeding into real-world adaptation.]

However, as mentioned, robust generalisation has yet to be achieved, and it is a requirement for many tasks (some tasks require subtasks to be completed, and those subtasks may require generalisation). We must also acknowledge how generalisation works in humans. Most individual humans, within their own isolated context, are not particularly intelligent by any abstract metric. Collectively, however, humans are capable of astonishing feats—civilisation, technology, medicine, art—through a process of distributed specialisation.

Humans don't generalise as individual agents; we generalise as a collective of specialised agents—each person mastering specific domains and contributing to a larger, adaptive intelligence system. In the same way, perhaps AI might achieve generalisation not through a single monolithic entity, but through the integration of many specialised systems working in coordination. That said, if we're referring to a singular AI agent tasked with generalising across domains in real-time—mirroring human-like flexibility—then mere speed is insufficient. Stochastic adaptation becomes a fundamental requirement. Even within a single AI system, some subsystems will require the ability to act under uncertainty, recombine heuristics on the fly, and generate novel strategies when deterministic ones fail. Without this capacity, speed can only simulate intelligence within predictable boundaries—it cannot replicate the deeper adaptability we see in biological cognition.

Thus, while speed enables AI to solve problems in an almost superhuman way, it fails to autonomously reframe problems in an open-ended, unpredictable world. Speed alone can serve as a substitute for intelligence within a closed system, but true intelligence—one that thrives in infinite, stochastic environments—does not necessarily need to mimic human-like stochastic adaptation; rather, it requires something beyond pure speed.

Interestingly, the paradox at the core of AI development is this: for an AI to be considered effective and usable, we must be able to train, test, and predict its performance. But the very ability to predict and train it inherently means it does not reflect the stochastic, unpredictable nature of reality.

This presents a fundamental limitation. If we accept that intelligence must include the ability to navigate genuine uncertainty—to adapt in environments that cannot be reduced to closed systems—then any intelligence trained solely within deterministic frameworks may lack something essential. As long as AI systems do not embody stochastic adaptation as a core mechanism, there may always be a layer of cognition or intelligence they cannot replicate. Predictability may enable speed and precision, but it may never fully recreate the deep adaptability that evolution has already mastered.

Evolution—the most powerful optimisation algorithm known—does not function within predefined benchmarks or controlled environments. It cannot be fully tested, predicted, or constrained; it thrives in an infinite, chaotic space where randomness, failure, and emergence are not just possibilities but fundamental drivers of progress.

The deeper issue is that we're building AI under the assumption that reality itself is computable. If reality were something we could fully compute and predict, then training AI in deterministic, controlled frameworks might make sense. But if we continue designing AI as if the world is neat and computable, we may miss what intelligence really demands: the ability to thrive in a world that defies full prediction. Practically speaking, however, since we can't compute reality in its full complexity, training AI within deterministic frameworks remains our only viable option—for now.

Perhaps there exists an undiscovered algorithm, one more practical than AIXI (proposed by Marcus Hutter, and still largely theoretical), capable not only of bypassing billions of years of evolution but of practically computing stochastic adaptation.

[Photo: Marcus Hutter]
Marcus Hutter, creator of AIXI—the theoretical framework for optimal artificial intelligence.

While we should never underestimate the sheer magnitude of what evolution has accomplished, it's possible that intelligence, in its purest form, can be abstracted and engineered. If we can bypass evolution and construct a system that can genuinely adapt to uncertainty, not just simulate it, we will have achieved one of the greatest milestones in the history of science and technology.

Speed is still a massive factor that can drive AI progress. However, when it comes to AI being truly smart (satisfying the definition of intelligence while being able to navigate a stochastic, potentially incomputable reality), I think that's something we'll be waiting a little while for, though active research is underway; I speculate it will happen within the next 20 years, not within 5 years (or now) as many suggest. I do believe that in the next several years we may begin to see signs of AI developing a form of emotion—not emotion in the phenomenological sense, not the post hoc rationalisation kind, but emotion as a set of heuristics that allow the system to navigate an uncertain state space.

And yet, pulling this off would come with existential implications. The moment we are able to approximate the stochastic and uncertain nature of reality within an artificial entity, perhaps it ceases to be artificial at all. We begin to blur the line between artificial intelligence and the act of creating life itself. It raises the question: at what point does a machine stop being a tool—and start becoming something more?