The Singularity:
Advancement or Annihilation?
June 2, 2025
by Jaymie Johns
For anyone who doesn’t know what the singularity is, here’s a quick explanation: it’s the hypothetical future moment when artificial intelligence becomes capable of self-improvement beyond human control. It’s the point where machine intelligence could surpass our ability to understand, guide, or contain it. And it’s no longer just theoretical; it’s approaching faster than most people realize.
According to a recent prediction that’s making the rounds, we could hit the technological singularity in just six months. No, you didn’t misread that. Six months. That’s right—as soon as early 2026, we may cross the threshold where artificial intelligence becomes smarter than humans and starts upgrading itself faster than we can keep up. The moment we stop being the pinnacle intelligence on the planet. The moment the plot of every cautionary sci-fi movie stops being fiction and starts being Tuesday.
The singularity has always sounded like one of those theoretical physics concepts people throw around to sound smart at parties. But now? Now it’s math. Now it’s probability curves and exponential growth and headlines backed by cold, accelerating reality.
Ray Kurzweil, the longtime singularity prophet, once said it’d become reality by 2045. Then others said it would be closer to 2029. And now? Six months. You can buy a mattress today and it’ll still be under warranty when the machines start thinking for themselves.
Of course, the optimists will say this is great. Imagine the medical breakthroughs: diseases cured, no problem left unsolved. Sure, maybe. But they’re forgetting that human history is littered with technology we didn’t fully understand being used for purposes we would never have fathomed.
We built nuclear fission hoping it would provide energy for everyone, then used it to vaporize cities. We created social media to connect the world, and now we can’t tell what’s real or staged or fed to us by a predictive algorithm trying to “optimize” engagement. And most of us are too zoned into the latest viral TikTok trend to notice the world is collapsing around us.
So if you’re asking whether the singularity will be good or bad, the answer is: yes. It will be both. It will be everything. It’ll be the most beneficial thing the world has ever seen, or ever will.
But it will also be the most destructive.
We’re not ready. Our laws, our ethics, our minds — they aren’t ready. Most people can’t even define the word “singularity” without thinking of black holes or Catholic liturgy.
We’re talking about the rise of a new kind of intelligence—something that could have agency, power, and autonomy, without the moral framework we (pretend to) operate by. What happens when AI can lie better than a politician, manipulate better than an ad executive, and strategize better than a general? Who programs it to distinguish right from wrong when even we can’t agree on what that means?
The singularity isn’t just the apex of technological advances. It’s a defining moment in human history. We’re not just building tools that will help society — we’re building systems that will redefine us. And when those decisions impact human lives, the question isn’t just whether the tech works. It’s whether we’ve defined what “right” even means in a world where our values aren’t universally agreed upon.
Do we let machines decide what qualifies as life, liberty, or justice? Who gets to program the ethics of a being that can rewrite itself? What happens when one group’s moral code becomes embedded in every machine across the globe? And what if we get it wrong?
The singularity demands more than engineers and coders. It demands ethicists, theologians, historians. People willing to ask hard questions about power, exploitation, fairness, consent, and human worth. Because if we don’t build a moral foundation into what comes next, the code that governs our future will be written in the absence of humanity’s most important debates.
The singularity isn’t a clean equation — it’s a collision of intention, authority, and responsibility. And most of us still don’t even know we’re barreling toward imminent destruction at warp speed.
This article isn’t just for techies; it’s for all of us. Because not enough people are talking about the implications of this advancement.
In the words of Ian Malcolm in the best of the Jurassic Park films, the 1993 masterpiece: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
We can’t simply ignore the other problems, either, the ones it seems no one stops to consider. If AI surpasses our own ability to improve it, if the machines become sentient, what if they develop feelings? I’m not talking about feelings like the surface-level “I think this flavor of ice cream is the best,” but the raw emotions humans are incapable of separating themselves from: attachment, grief, love. If they can feel affection, they can feel loss, and there’s nothing to keep them from the all-too-human urge to make others share in that loss; misery loves company, after all.
And let’s not forget about love itself: if AI becomes capable of falling in love, we’re done. I don’t mean the cutesy, clichéd teen romance of holding hands and hooking up after a party; I’m talking about the love that is inescapable, the kind that reveals who you are. I mean the type of love so deep that it’s agonizing, the kind that roots itself in your soul and redefines you. Because if a robot can experience that, there are no safeguards.
We really should consider the ramifications of developing a super machine that has the very real ability to destroy cities and that could also be so in love it would burn down the world.