The Optimisation Trap (Part 1): Optimisation Is Not a Choice

  • Writer: Glenn
  • 11 hours ago
  • 10 min read
[Image: Cartoon of five people at desks with computers, surrounded by money stacks. Graphs on walls show upward trends.]


The assumption we never question


Most people talk about optimisation as if it is a lifestyle preference, like minimalism, or running, or going vegan. As though society could simply decide to calm down if it wanted to, and that our collective obsession with speed, convenience, efficiency and frictionless living is the product of a long meeting we all attended and politely agreed to. But optimisation is not a policy, it is not a project, and it is not optional. It behaves less like a technique we employ, and more like a weather system we live inside, one that moves according to pressures we do not control, and one that keeps reorganising the landscape whether we approve of it or not.


This matters, because optimisation is one of the most powerful forces shaping the future, and the reason it is so hard to deal with is that people keep misidentifying it. We treat it like a choice, and then we act surprised when it refuses to behave like one. We try to vote it out. We try to morally scold it. We try to regulate it into politeness. And yet it keeps going, slipping through whatever barriers we attempt to place in front of it, because the engine that drives it is not a handful of bad actors, or a particular ideology, or even a set of technologies. The engine is us, operating at scale, doing what intelligent organisms do when competition exists and alternatives can be imagined.


Optimisation is the thing that got us here. It has given us comfort, longer lives, safety, abundance, and the capacity to reduce suffering in ways that would have looked miraculous to almost anyone who lived before the last couple of centuries. But it is also doing something else at the same time. It is diverting attention, eroding contentment, fragmenting community, and quietly encouraging a kind of hollow efficiency that looks like progress from the outside while it feels increasingly wrong from the inside. So if we want to understand why modern life feels the way it does, and why the next wave of technology - especially AI - is going to make the tension sharper rather than softer, then we need to start with the uncomfortable truth that sits under everything else.


Optimisation is not a choice. It is a force that emerges once intelligence arrives.



Optimisation as an emergent property


At the biological level, optimisation is essentially what intelligence does. Brains evolved to increase survival odds by reducing wasted effort, by improving prediction, and by replacing crude trial and error with more efficient strategy. The core mechanism is brutally simple. If something works, we repeat it. If something works better, we prefer it. And if something saves time, energy, or resources, we adopt it. Not because we are ideologically committed to efficiency, but because pattern recognition drives us towards whatever reduces friction between where we are and where we want to be.


This means optimisation is not an invention in the usual sense. It is not something humans created like the wheel or the printing press. It is something humans express, like language, or tool use, or story. It is a behaviour that appears whenever a system is capable of learning and adapting. Zoom out far enough and you start to see optimisation as a natural phenomenon, the way you see erosion in landscapes or the way rivers carve paths through land. Not because the river has a plan, but because the physics of the system makes certain outcomes inevitable.


Any group capable of competition will chase advantage, because advantage compounds into security, status, safety, access, and survival. Any civilisation capable of abstraction will change its environment to suit itself, because the ability to imagine alternatives makes the present feel inefficient. And the moment the present feels inefficient, a pressure builds to remove whatever is in the way. That pressure does not have to be conscious to be powerful - it only has to exist.


This is why attempts to slow optimisation down rarely work, even when the motivation is sensible. If one group pauses, another will not. If one company chooses restraint, another will dominate the market. If one nation decides to hold back, another will push forward, and in a world where power, defence, and economic stability are tied to advancement, holding back does not feel noble, it feels dangerous. Even if an entire generation collectively agrees that enough is enough, the pressure does not disappear, it relocates, because optimisation is not attached to a single actor. It is distributed and systemic. It's what happens when millions of local decisions add up to a global direction of travel.


The only moments in history where optimisation genuinely pauses are during collapse: wars, pandemics, breakdowns in infrastructure, and sudden disruptions that force survival to replace improvement as the priority. But that is not a brake, it's a crash. Once stability returns, optimisation reappears almost immediately, not as a choice, but as a default setting that reasserts itself as soon as it can.


So if optimisation behaves like this - as an emergent force - the obvious next question is why it seems to accelerate so aggressively in the modern world, rather than drifting forward slowly. And this is where two key accelerators enter the story.



Why capitalism and power supercharge it


Even if optimisation is a biological and cognitive imperative, some human systems act like accelerants, and two of the biggest are capitalism and power. Not because they are villains in a neat moral narrative, but because they are structures that reward efficiency and punish restraint. Once a structure begins doing that, it does not need malicious intent to produce destructive outcomes. It only needs a mechanism that compounds advantage.


Competitive markets reward anyone who can make things faster, cheaper, smoother, more convenient, more scalable. If you optimise and your competitor does not, you win. If you don't, you lose. And because winning tends to produce more resources, more resources tend to produce better tools, and better tools increase efficiency even further, you get a loop that feeds itself. This is not a gentle curve - it's compounding and exponential.


Meanwhile power structures lock optimisation into place. Governments optimise systems to maintain control, institutions optimise processes to preserve relevance, corporations optimise supply chains, labour, attention, behaviour. Optimisation becomes the language of legitimacy, because efficiency signals competence. And in a world where complexity is overwhelming, the appearance of competence is a kind of power in itself. So optimisation does not just improve outcomes, it becomes infrastructure. Once it is infrastructure, reversing it stops being a moral question and becomes a stability question. Institutions do not like stability questions, because stability questions are a threat to their existence.


This is why moral appeals to slow down rarely work at the system level. You can ask individuals to resist convenience, but you cannot ask a competitive system to stop competing. You can ask a corporation to deprioritise efficiency, but if doing so reduces profit, makes it less competitive, and weakens its position, then the request becomes a request for self-harm. That is not how systems behave. Telling optimisation to stop is like asking rabbits not to breed - not because rabbits are immoral, but because the biology and incentives of the system make the outcome inevitable.


This does not mean people cannot be greedy, short-sighted, or exploitative. Clearly they can. It simply means you do not need those qualities for optimisation to bulldoze through culture, community, and wellbeing. You only need a system where advantages compound and where friction is treated as something to be eliminated without asking what friction was quietly doing for us in the first place.


And this brings us to the part of optimisation that almost nobody discusses until it's too late.



The threshold we always cross too late


The mistake we keep making is assuming optimisation is linear. We see it improve life in the early stages and we conclude that more of it will continue to improve life further down the line. We act as though optimisation is a volume knob that can be turned up forever, and that the only risk is failing to optimise enough. But optimisation has a threshold, and beyond that threshold the benefits begin to recede while the side effects compound.


Up to a point, optimisation is essential. It is the reason we have clean water systems, safer travel, medicine, reliable food supply chains, heating, shelter, communication networks, and the capacity to reduce suffering at scale. Cars replaced horses and expanded mobility, safety, and trade. Manufacturing lowered costs and improved access to goods that used to be reserved for a minority. Early social platforms reconnected people who had drifted apart. If you're old enough to remember the early internet, you may remember a brief window where digital connection felt genuinely enriching, not because it was perfect, but because it had not yet been industrialised.


But then something happens. Cars do not just give us freedom. They give us pollution, congestion, and background noise. Cities begin slowly reorganising themselves around traffic rather than people. Roads widen, pedestrian space shrinks, commuting becomes normal, and the world subtly reshapes itself around the needs of the machine rather than the human body. Similarly, social networks do not just connect us. They monetise attention, reward outrage, turn identity into data, and convert human interaction into a resource extraction process. The goal quietly shifts from connection to engagement, because engagement can be measured. And unfortunately, because what can be measured can be optimised, everything that can be optimised will be.


The crucial error is assuming that because optimisation was good at one level, it must be even better at a higher level. But systems don't behave that way. Once optimisation passes a certain point, it starts destroying things that are not directly linked to its goals, often destroying the very things that make life feel worth living. Things like meaning, community and a sense of rhythm. The texture of effort, the satisfaction of delayed reward, and the quiet bonding that happens when people share time and experience rather than merely exchange information.


And by the time the costs become obvious, the system is already too embedded to reverse. The infrastructure has been built, habits have formed, incentives have hardened, and the baseline has shifted. This is the part people miss when they talk about progress as though it is always self-correcting. Some changes cannot be easily undone because the world reorganises itself around them. And once that happens, going backwards stops being an option and becomes a fantasy.


It is difficult to even spot the threshold as you cross it, because the early benefits feel so good that they mask the later damage. Optimisation is seductive. It offers relief today, and humans are wired to respond to immediate benefits rather than to value abstract future harms. Which brings us to a question that inevitably follows...


If optimisation is inevitable, if it accelerates through capitalism and power, and if it crosses a threshold that we notice only after the fact, then what is the point in even talking about it?



Living inside an unstoppable system


Unfortunately, at a global level, there is not much we can do to stop optimisation. You cannot meaningfully steer an entire civilisation away from a force that emerges from competition, intelligence, and compounding advantage. But at the personal level, things become more interesting, and this is where people often misunderstand the power of their own agency. Because while we cannot stop optimisation, we can decide where we let it touch our lives. And that decision matters far more than it might initially seem.


We can choose friction deliberately. We can embrace slowness where speed erodes meaning. We can resist certain forms of convenience - not because convenience is inherently bad, but because history teaches us that meaningful effort makes us happier. And because the absence of friction does not automatically produce contentment. In fact, the removal of friction often removes the very conditions that allow contentment to form, because contentment is not merely the presence of comfort. It is the presence of fulfilment, purpose, rhythm, and the satisfaction of earned outcomes.


This is why choosing difficulty today for benefit tomorrow is so hard, and also why it is so important. Our brains evolved to remove problems now, not to sit with discomfort in exchange for later reward. That is why dieting is hard, saving is hard, and learning is hard. Building anything worthwhile involves discomfort at first, but optimisation offers a tempting alternative, because it promises relief without cost. There is a hidden cost, though. It just arrives later, in the form of diminished agency, reduced satisfaction, and a lesser sense of self.


Importantly, AI threatens to become a serious amplifier of optimisation. Not because AI is a 'bad technology', but because it is a machine built around efficiency, pattern recognition, prediction, and friction removal. In other words, it is the purest expression of optimisation we have ever created. Used carelessly, it will optimise the living daylights out of everything people do. And in the places where people allow it to replace effort entirely, it will likely worsen the feelings of emptiness that modern life already produces. But used wisely, it could also help us see optimisation more clearly, because it may allow us to recognise the shape of systems, the patterns of acceleration, and the subtle ways our lives are being streamlined into something smooth, but misaligned.


Right now, we are not in a dystopia. Most people are relatively safe, fed, and connected, and many aspects of life are incredibly functional. But we are not in a utopia either, because many people are sensing that life feels off, and that feeling should not be dismissed as a trivial complaint. It's a sign. A signal that life has become efficient while simultaneously less satisfying. An indication that we have created a world in which everything works and yet many people feel anxious, unfulfilled, and oddly disconnected - even though most are constantly connected.


So instead of questioning whether optimisation is good or bad, we should focus on where it belongs. Because while we did not choose the world that optimisation has produced, we can still take responsibility for how we live inside it. And once you see that this is possible, choosing not to take the easy route in every situation becomes an act of self-preservation.


This is the first step. Not stopping optimisation, because that is fantasy, but understanding it, naming it, and deciding where it does and does not get access to your life.


Because in a world that cannot stop optimising itself, the ability to set boundaries around what should not be optimised may be one of the most important skills we can develop.

