Abraham Dada

Emotions Aren't What We Think They Are

Published: 13th April 2025

“Emotion is nature's compression algorithm—what intelligence runs when full calculation is impossible.”

Emotions are not metaphysical. At their core, emotions are heuristics: functional shortcuts that help agents navigate an effectively infinite state space under bounded computational resources.

Emotions are not what we think they are. Even bacteria, I speculate, likely have emotions: not emotions in the human phenomenological sense ("I feel angry today"), but emotions in the sense that they run some of the same fundamental heuristics that make decision-making more efficient, a kind of primitive adaptive signal. A bacterium moving toward glucose and away from toxins is functionally executing a behaviour we would analogise to pleasure and aversion. These are not "emotions" in the conscious, reflective sense, but they are built on the same principle: local heuristics that steer behaviour in uncertain environments.
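To make the analogy concrete, here is a minimal sketch of a run-and-tumble style heuristic in Python. Everything in it is invented for illustration: the 1-D world, the `concentration` function, and the positions of the glucose source and toxin patch; real chemotaxis is far richer. The point is only that local sensing plus a keep-it-if-it-improved rule steers the agent with no map, memory, or plan.

```python
import random

# Toy 1-D environment (invented): a glucose source peaks at x = 50 and a
# toxin patch sits around x = -30. Higher values mean a better chemical mix.
def concentration(x):
    glucose = -abs(x - 50)                   # improves as x approaches 50
    toxin = -2 * max(0, 20 - abs(x + 30))    # strong penalty near x = -30
    return glucose + toxin

def chemotaxis_step(position):
    """One run-and-tumble step: sense locally, keep the move only if the
    local concentration improved. No global model is ever consulted."""
    move = random.choice([-1, 1])            # tumble: pick a random direction
    if concentration(position + move) > concentration(position):
        return position + move               # run: the move paid off
    return position                          # stay put and tumble again

position = -25                               # starts inside the toxin patch
for _ in range(500):
    position = chemotaxis_step(position)
print(position)                              # reliably ends up at 50
```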

The key distinction is the feedback loop. Humans feel emotions because we're able to recursively process and evaluate internal states. We don't just experience a heuristic—we experience the experience of it. That recursion adds narrative and subjective colour, but it doesn't change the fundamental role: emotion is about efficient decision-making under uncertainty.
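As a toy illustration of that layering (names and thresholds invented, nothing neurally realistic): the first function below is the heuristic that actually drives behaviour; the second is a separate process that observes the internal state and narrates it. Only the second layer corresponds to "feeling".

```python
# First-order heuristic: the mechanism that actually steers behaviour.
def threat_avoidance(threat_level):
    return "flee" if threat_level > 0.7 else "stay"

# Second-order report: observes the internal state and labels it. This is
# the layer language latches onto ("fear made me run"); it narrates the
# heuristic but does not drive it.
def felt_report(threat_level, action):
    if action == "flee":
        return f"I feel afraid (threat={threat_level:.1f})"
    return "I feel calm"

threat = 0.9
action = threat_avoidance(threat)     # the algorithm runs first
story = felt_report(threat, action)   # the experience of the experience
print(action, "|", story)             # flee | I feel afraid (threat=0.9)
```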

An emotion such as happiness isn't what drives us; it's what follows. We often say we pursue things because they make us happy, but that's a linguistic inversion of what's actually happening. When we say "fear made me run," we're mistaking the summary report (fear) for the algorithm itself (the threat-avoidance heuristic). The real driver is the underlying heuristic, an internal mechanism evolved to guide us through complex environments. Happiness is simply the feedback signal: the brain's way of saying, "This action aligned with what promotes survival or long-term reward; do more of this." But language assigns causality to the feeling, not the mechanism, and over time we start to believe the narrative: that happiness is the cause of action, when in reality it's the effect.

This misattribution matters, because it keeps us blind to the systems beneath our conscious experience. We don't act because we're happy; we become happy because the system we're embedded in, biologically and cognitively, reinforces the action. Language just packages the feedback in a way that lets us tell ourselves a story.
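Here is a minimal reinforcement-style sketch of that ordering, with invented action names and payoffs: the policy chooses the action first, the "happiness" signal is computed afterwards from the outcome, and its only job is to bias future choices. It never causes the current action.

```python
import random

# Invented toy: two actions, a preference table, and a world where foraging
# usually pays off. "happiness" arrives after the action and only nudges
# future preferences; it is the feedback loop, not the driver.
preferences = {"forage": 0.0, "rest": 0.0}

def choose(prefs):
    if random.random() < 0.1:                 # occasional exploration
        return random.choice(list(prefs))
    return max(prefs, key=prefs.get)          # otherwise follow preference

def environment(action):
    return 1.0 if action == "forage" and random.random() < 0.8 else 0.0

for _ in range(200):
    action = choose(preferences)              # 1. the heuristic acts
    outcome = environment(action)             # 2. the world responds
    happiness = outcome                       # 3. the feeling follows
    preferences[action] += 0.1 * (happiness - preferences[action])

print(preferences)  # "forage" dominates: the feeling reinforced the action
```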

If we want AI to have emotions in the phenomenological sense, we need to make a critical distinction. Emotions as heuristics that compress a vast state space are certainly possible. In fact, we're already seeing this in cutting-edge research: AI systems can adaptively prioritise actions under uncertainty, much like biological agents do. However, if we're talking about emotions in the human phenomenological sense, that is, what it feels like to experience joy, fear, or love, then that will not be possible as long as AI remains grounded in discrete logic.

It's true that, in theory, a Turing-complete system could emulate every neuron, glial cell, hormonal fluctuation, and the full set of interdependencies that give rise to affective consciousness. But even that level of brute-force approximation is computationally infeasible with current architectures. And more importantly, it still wouldn't guarantee subjectivity, because phenomenology may not be computational at all; it may be emergent from physical embodiment, not just structural emulation. The phenomenological kind of emotion is deeply tied to biological stochasticity, recursive embodiment, and the material conditions of life itself, none of which AI currently possesses. So while emotion-like heuristics are within reach, genuine phenomenological emotion remains, at least for now, out of scope.
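For the heuristic half of that claim, here is one concrete, standard form of prioritising actions under uncertainty: a UCB1 bandit. This is not the research alluded to above, just a minimal sketch with invented payoffs. The exploration bonus acts as a cheap priority signal in place of a full expected-value calculation, which is exactly the compression role this essay assigns to emotion.

```python
import math
import random

# Invented payoffs: hidden success probabilities the agent cannot see.
payoffs = {"a": 0.3, "b": 0.6, "c": 0.5}
counts = {arm: 0 for arm in payoffs}
values = {arm: 0.0 for arm in payoffs}       # running mean reward per arm

def ucb_pick(t):
    for arm in payoffs:                      # try each action once first
        if counts[arm] == 0:
            return arm
    # Mean value plus an exploration bonus for rarely tried actions: a
    # cheap heuristic signal instead of exhaustive calculation.
    return max(payoffs, key=lambda arm: values[arm]
               + math.sqrt(2 * math.log(t) / counts[arm]))

for t in range(1, 1001):
    arm = ucb_pick(t)
    reward = 1.0 if random.random() < payoffs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print(max(values, key=values.get))           # settles on "b"
```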