A Short Essay: How Self-Organising Systems Abstract 'Meaning'

Published: 14th February 2025

How we think about the concepts of 'meaning' and 'purpose' is deeply flawed: both are often treated as abstract, irreducible phenomena. This is especially evident when people project meaning onto the universe, mistaking human cognitive constructs for intrinsic properties of a reality that operates independently of subjective intent. So asking about the meaning or purpose of the universe is itself a flawed question. Meaning is not an intrinsic property of reality but an emergent computational mechanism by which self-organising systems reduce uncertainty and increase stability.

In physical and biological systems, meaning emerges as a function of order from disorder. At the most basic level, physical systems exhibit local entropy minimisation: a tendency to self-organise into states that maximise stability while reducing uncertainty, even as total entropy increases. The Free Energy Principle, formulated by Karl Friston, describes how biological systems (including the brain) function by minimising free energy, an information-theoretic quantity that tracks how uncertain, or surprised, the system is about the external world. The drive to minimise uncertainty leads organisms to structure their environment in predictable ways, from simple reflex actions to complex and abstract frameworks like language, culture, and morality.

Imposing meaning isn't just a conscious decision; seeing, hearing, smelling, touching, and feeling are all methods by which your brain assigns meaning to information about reality. Colour does not exist as an objective feature of reality; it is how the brain interprets different wavelengths of light, and different organisms construct meaning from those wavelengths differently than humans do. Similarly, sound is a structured interpretation of vibrational waves, and solidity is a mental construct derived from electromagnetic interactions at the atomic level. Assigning meaning to abstract concepts such as 'ideas', or the creation of gods, is a higher-order extension of this fundamental meaning assignment, one that allows self-organising systems (ourselves) to reduce higher-order uncertainty.
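As a deliberately crude sketch of that perceptual meaning assignment (entirely my own illustration; the wavelength boundaries below are only rough human-vision conventions), 'colour' can be treated as a label a system attaches to a continuous physical variable. A system with different bins would extract different meaning from the same light:

```python
# Toy illustration: 'colour' as a category the observer imposes on wavelength.
# Boundaries are approximate human conventions; other observers draw other ones.
HUMAN_COLOUR_BINS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def assign_colour(wavelength_nm: float, bins=HUMAN_COLOUR_BINS) -> str:
    """Map a wavelength onto a colour label; the label lives in the observer, not the light."""
    for lo, hi, name in bins:
        if lo <= wavelength_nm < hi:
            return name
    return "not visible to this observer"

print(assign_colour(532.0))   # 'green' for a human observer
print(assign_colour(1064.0))  # infrared: carries no colour 'meaning' for human vision
```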

I align with a somewhat speculative information-theoretic and computationalist perspective on cognition: the idea that abstract concepts can be mathematically described. Every thought, idea, and belief is represented by dynamic configurations in neural manifolds and high-dimensional vector spaces. Minimising uncertainty at the 'idea/belief level' is not just about acquiring knowledge but about reorganising neural configurations into more stable, computationally efficient states. When encountering new information (e.g. new ideas), the brain undergoes a temporary increase in neural entropy, as previously stable predictive models must be updated or restructured. This state of uncertainty, if left unresolved, can be cognitively destabilising.

To counteract this, the brain employs emotions as a heuristic mechanism: a rapid, evolutionarily developed tool for reducing higher-order abstract uncertainty. Emotions are not arbitrary or mystical constructs; rather, they serve as adaptive regulatory functions that help stabilise cognition under uncertainty. Just as the brain minimises prediction error in sensory processing, emotions function as Bayesian heuristics, optimising for faster decision-making by leveraging prior experiences to predict and minimise future uncertainty. However, emotional heuristics, while useful for short-term, survival-related decisions, can also introduce systematic biases when over-relied upon, especially since they were not evolutionarily optimised for highly uncertain abstract concepts (see Survival Rationality vs Abstract Rationality).

Emotional biases can lead us to anthropomorphise the uncertain. Anthropomorphic bias is essentially viewing reality through a human lens and assigning abstract human concepts such as 'desires' and 'intentionality' to non-human objects or systems. It's analogous to how an octopus would view reality through an octopus lens: it exists as an octopus, therefore it sees and assigns meaning to non-octopus systems in an octopus-centric way.
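To make the 'Bayesian heuristic' framing concrete, here is a minimal toy sketch (my own illustration, not a model from this essay or the free-energy literature). A discrete belief over three competing explanations is updated on new evidence, and the Shannon entropy of the belief, a standard measure of uncertainty, falls. A miscalibrated prior would collapse that uncertainty just as quickly toward the wrong explanation, which is the systematic-bias failure mode described above:

```python
import numpy as np

def shannon_entropy(p: np.ndarray) -> float:
    """Shannon entropy (in bits) of a discrete belief distribution."""
    p = p[p > 0]  # 0 * log(0) is conventionally 0
    return float(-np.sum(p * np.log2(p)))

def bayesian_update(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """Posterior is proportional to prior times likelihood, renormalised."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Three competing explanations for an ambiguous observation (numbers arbitrary).
prior = np.array([0.4, 0.35, 0.25])      # belief shaped by past experience
likelihood = np.array([0.7, 0.2, 0.1])   # how well each explanation predicts the evidence

posterior = bayesian_update(prior, likelihood)

print(f"uncertainty before update: {shannon_entropy(prior):.3f} bits")      # ~1.559
print(f"uncertainty after update:  {shannon_entropy(posterior):.3f} bits")  # ~1.027
# Informative priors and evidence collapse uncertainty quickly; if the prior
# is miscalibrated, the same mechanism locks in a systematic bias instead.
```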

Yes, confronting reality beyond our emotional biases can be existentially uncomfortable, but anthropomorphising it only obscures deeper understanding. I'd argue that true existential comfort comes not from projecting meaning onto reality, but from understanding the system we exist within—objectively, beyond the biases of our own existence.