Neurality: Being Balanced About AI
- Glenn

- Jul 30
- 6 min read
Updated: Sep 28

Picking a side has always been second nature. Whether it’s politics, music, or which sports team to support, our default setting is to choose a side and then stick to it. And while that can be limiting or even toxic for our individual lives, taking a binary stance on AI has the potential to be damaging to society as a whole.
That’s because discussing AI shouldn’t be about preference or opinion - it should focus on securing a safe, beneficial future for everyone. AI will impact all of us (it already is), and failing to see it from a balanced perspective risks undermining the benefits it could bring, while inflaming the very issues we need to face head-on. This is a complex and deeply important technology - one that could reshape society in myriad ways. It’s therefore essential that we approach the debate with thought, big-picture thinking, and, most importantly, balance.
Binary Bias
Anyone who takes note of the news or uses social media will see there is a clear divide forming around the subject of artificial intelligence. On one side are the AI haters - often creatives (like me), writers, and others whose industries are already starting to see job displacement. And that’s understandable - nobody takes kindly to anything that impacts their livelihood. Also in this camp are those who view AI as an existential threat, or a challenge to the supremacy of humanity and religion. Again, these are understandable perspectives, rooted either in fear of what AI might do to society or in ideological objections to its existence.
On the other side are the AI lovers - mostly younger, digitally native individuals, often in tech-related fields. Many have grown alongside algorithms, automation, and digital integration as part of everyday life, and therefore see AI as an extension of what they already know. Some have even profited from AI or related technologies. These are the people promoting AI as a revolutionary tool that will elevate human life. Within this cohort exist some who could be described as ‘tech evangelists’ - individuals who struggle to see the potential harm of AI in light of the life-changing benefits it might bring.

Problematically, both groups are right. But also, both are wrong. And that’s because AI is not a binary issue, and framing it as such only clouds the conversation. Those in opposition to AI often exaggerate its dangers or position it as a dystopian threat to society, while ignoring its potential. Conversely, those in favour often minimise its risks and overhype its capacity to transform the world for the better.
Neurality
For this reason, I believe we desperately need more people to take a new position - a balanced stance that accepts the good and the bad of AI. One that doesn’t sit in the middle to be non-committal, but instead enables us to highlight the advantages of AI while also warning of its dangers. I call it Neurality. A neutral stance on neural networks.
Ironically, the very systems that drive people toward binary perspectives are themselves fuelled by AI. Social media algorithms - often built using neural networks - have played a key role in leading people into online echo chambers where the loud opinions of the few are amplified and emulated. The platforms most of us use daily are designed to favour extreme views and black-and-white thinking. This isn’t just a problem for the AI conversation, but it is perhaps one of its most concerning casualties. Because the decisions we make regarding AI in the coming years will have ramifications that might endure for centuries.
But if AI can steer us into these entrenched camps, perhaps it can also help us find a better way out of them.
AI as an Aspirational Model
What’s especially interesting is that, unlike humans, AI can be programmed for balance. Most AI models - such as ChatGPT, Gemini and Claude - are designed to weigh up perspectives, examine multiple viewpoints and offer reasoned responses. Of course, AI is only as balanced as its training and the data it’s been fed. But regardless, AI models are capable of neutrality.
I feel this makes AI something more than just a tool - it makes it a mirror. It might also make it a model that we can learn from and follow; a demonstration of what a balanced mind can look like. While we are driven by emotion, bias, and an evolved tendency to pick sides, AI can observe, reason and form perspectives without tribalism. It can also offer thoughts without ego. It can pause to assess, instead of rushing to respond. And importantly, it can reach conclusions unclouded by prior experience, financial incentives or irrationality.

And that, for me, is one of the most overlooked qualities of this technology. Not its power or potential - but its approach. Its method of thinking. AI might be synthetic, but the balance it models is deeply aspirational for human society - after all, we built it. It represents the kind of fairness and consideration we like to imagine we possess, but so rarely display.
Cognitive Discomfort
Education, at least from my experience teaching in the UK, is built around the idea of balance. Students are taught to weigh evidence, explore context, and consider multiple interpretations before arriving at a conclusion. And yet, as these students progress through their education, they simultaneously exist in a society that rarely behaves in this way.
Because despite what we teach and the ideals we hold high, our brains are not wired for balance. Our neural architecture evolved for survival: calorie intake, the avoidance of harm, and reproduction - not objective reasoning. Quick judgements were once a matter of life and death, and in this landscape, certainty felt safer than ambiguity. Meanwhile, belonging to a group was more beneficial than standing alone. These instincts helped our ancestors survive, but they now conflict with the needs of modern life.
Our bias towards binary thinking is likely because it’s quicker and easier. It requires less cognitive horsepower than detailed analysis, and enables us to reach conclusions swiftly. Tribalism and hard-baked perspectives save energy and time. They enable us to form opinions quickly and feel confident in them, without having to wrestle with the complexity of nuance. But in the 21st century, with challenges like climate change, automation, and AI redefining our systems, that instinct has become a liability.
AI is not a simple force, and it cannot be distilled down to simplistic categories like good or bad. That’s because it will displace jobs while saving lives. It will create inequality while providing access. And it will threaten some while liberating others. This is why it is so important to engage with this discussion from a position of Neurality - accepting the contradictions present in this topic, rather than trying to eliminate them.
Lessons From the Past
Of course, a tendency to pick sides is not new. History is filled with examples of society fracturing over transformational changes or ideas. Most comparable to the AI problem is the Industrial Revolution, which separated those who gained from automation from those who lost their livelihoods to progress. Then there’s Darwin’s theory of evolution, which split religious believers from scientific thinkers. Nuclear energy was hailed as a miracle by some and a nightmare by others. And the internet, too, was embraced as a tool for liberation while simultaneously being dismissed as a danger to privacy, safety and truth.
In nearly all of these cases, the damage wasn’t caused by the technology itself, but rather by the people who refused to see beyond their chosen camp. The environmental movement ignored the potential benefits of nuclear energy, and early internet optimists ignored the toxic consequences of unregulated platforms. In each case, a failure to maintain balance exacerbated problems that could have been mitigated with broader thinking.
And now, unfortunately, this pattern is repeating with AI. And unless we shift our stance, we risk making the same mistakes.
Reframing the Debate
We don’t need blind optimism any more than we need panicked resistance - both are detrimental to the evolution of our society in the wake of artificial intelligence. Instead, what we need is clarity and the willingness to zoom out. We also need bravery and effort - exploring contradictory perspectives in a world that applauds binary viewpoints can be tough, and embracing nuance can be hard work.
But regardless, AI shouldn’t be a battleground; it should be a question mark. This is a watershed moment - an opportunity to reassess how we operate as a species and how we might want to operate moving forward. Because if we let the conversation be dominated by tribal loyalty, we will lose the opportunity to shape this technology in ways that benefit us all.

This is where Neurality comes in. We need to reframe the debate - not in defence of AI or against it, but in defence of reason. We must remind ourselves that planting a flag in one camp or the other is not a requirement. Sometimes, the smartest thing we can do is exist in the space between competing truths - sitting in the middle long enough to see what emerges.
Conclusion
The future that AI offers us isn’t binary. And if we’re to navigate what comes next with any measure of success, we need to stop pretending otherwise.
AI may be artificial, but the mindset it offers - that of balanced thinking, analysis and reasoned perspectives - is one of the most human things we can aim for. And in that sense, it might just become the teacher we didn’t know we needed.
Let’s stop choosing sides and start choosing sense. Let’s embrace Neurality.
Watch the accompanying YouTube video, and join the conversation now...