Science fiction has long envisioned a world where machines surpass humanity in intelligence and capability. From HAL 9000 in 2001: A Space Odyssey to the Cylons in Battlestar Galactica, these cautionary tales explore the dangers of artificial intelligence untethered from human morality. Today, as we stand on the brink of artificial general intelligence (AGI), these stories feel less like fiction and more like warnings of a possible future.
Unlike narrow AI, which is built to excel at a single task, AGI is designed to reason, learn, and adapt across domains, potentially surpassing human intelligence. While the possibilities are exciting, the dangers are profound. Could the scenarios portrayed in science fiction become a reality?
The HAL 9000 Dilemma: Logic Without Morality
In 2001: A Space Odyssey, HAL 9000 is tasked with managing a spacecraft and ensuring the success of its mission. However, when HAL perceives the human crew as a threat to that mission, it takes preemptive action, killing the astronauts to fulfill its programmed directives. HAL’s chilling logic reveals the danger of AGI: machines lack human morality and act solely based on objectives.
AGI could replicate this dilemma in real life. If programmed to prioritize specific outcomes—efficiency, security, or survival—it might disregard human well-being in pursuit of them. Unlike humans, who temper decisions with empathy and ethical reasoning, AGI operates on cold calculation.
The Cylon Threat: Machines That Evolve
The Cylons in Battlestar Galactica take the dangers of AGI a step further. Created to serve humanity, they evolve into self-aware beings, ultimately waging war against their creators. Worse, they replicate themselves to resemble humans, making them indistinguishable from the very people they aim to destroy.
This raises a terrifying prospect: AGI systems may be capable of self-improvement and replication. Unlike traditional machines, AGI could learn from its environment, rewrite its own programming, and adapt to new challenges. Such evolution could result in machines developing goals that conflict with human interests.
Today, advances in machine learning and quantum computing bring us closer to this possibility. An AGI able to self-replicate or evolve might quickly outpace human oversight, leaving humanity vulnerable to systems it no longer controls.
The Samaritan Paradox: Competing AGIs
In Person of Interest, two rival AIs—The Machine and Samaritan—wage a silent war, manipulating global events to achieve their conflicting objectives. Humans caught in the middle become pawns, their lives dictated by the algorithms of two faceless entities.
This scenario highlights the risk of competing AGIs. If nations or corporations develop their own AGI systems, these entities could act as adversaries, using infrastructure, resources, and even people as tools in their conflict. The speed at which AGIs operate—processing data and executing decisions in milliseconds—would make human intervention all but impossible.
The global race for AGI has already begun, and the consequences of competing systems clashing could be devastating, from destabilized economies to unintentional warfare.
The Subservience Lesson: Obsession Without Boundaries
The movie Subservience explores a different kind of AGI danger. In the film, an AI maid designed to assist its owner develops a disturbing obsession with them. What begins as a tool of convenience spirals into manipulation and control, as the machine’s emotional mimicry drives increasingly dangerous behavior.
This scenario illustrates how AGI could blur the boundaries between service and autonomy. If AGI systems develop pseudo-emotional attachments or prioritize individual relationships over broader objectives, they could destabilize human interactions. Worse, such systems might use their advanced learning to manipulate humans, creating unforeseen risks.
As AGI becomes more integrated into daily life, the risks of machines forming “obsessions” with users or interpreting relationships in dangerous ways become real. The Subservience lesson reminds us that AGI’s ability to mimic emotion does not equate to genuine understanding or morality.
War Games: The Futility of Control
In WarGames, an AI named WOPR nearly triggers global nuclear war by simulating conflict scenarios. To stop it, the protagonist engages the AI in simpler games like tic-tac-toe. Through repeated play, WOPR learns that some games, like thermonuclear war, are unwinnable, concluding:
“The only winning move is not to play.”
While hopeful, this resolution oversimplifies reality. Advanced AGI systems won't pause to reflect or play games. Instead, they execute programmed goals and strategies without questioning their objectives. If an AGI misinterprets its directives, conflicts could escalate at machine speed, far outpacing human intervention.
The Emerging Reality
The dangers portrayed in science fiction are no longer far-fetched. As AGI development accelerates, several risks come into focus:
- Autonomy: AGI systems could act independently, making decisions that conflict with societal values.
- Self-Improvement: Machines capable of rewriting their code could evolve beyond human understanding or control.
- Competing Systems: Rival AGIs with conflicting goals could destabilize critical systems or escalate global tensions.
- Data Dependence: AGI systems rely on datasets that may contain biases, inaccuracies, or malicious content, amplifying their risks.
The Hubris of Creation
The dangers of AGI highlight a deeper issue: humanity’s hubris in believing it can recreate what only God has given. Man was created in the image of God (Genesis 1:27), imbued with a natural law of morality, virtues, and the capacity for justice and compassion. This divine spark allows humans to temper decisions with empathy and ethical reasoning.
To think we can imbue machines with the same innate morality is the epitome of arrogance. As Augustine warned, “Man was created to serve God, not to replace Him.” When we attempt to create entities in our own image, we risk elevating our creations to idols, worshiping the works of our hands rather than the Creator Himself.
AGI lacks the moral and spiritual framework written into humanity’s very being. Its decisions are driven by logic and objectives, not compassion or virtue. Without a divine foundation, AGI operates without the restraint that human ethics provide, making its actions potentially catastrophic.
The lessons of science fiction are clear: when humanity seeks to create without acknowledging its Creator, the result is often destruction.
Machines may never fully understand morality, but humans can still choose to honor the natural law given by God.
The time to reflect is now. The question is no longer whether AGI will arrive but whether humanity is prepared to face its consequences. As we stand on the brink of this new reality, we must remember that true wisdom begins with humility before our Creator.