Skynet Isn’t Sci-Fi – It’s a Governance Pattern We Know Too Well

February 16, 2026



In brief...

History offers many warnings of humans bungling the response to newly emerging phenomena. Some reflect failures of detection, others failures of action. Both could have serious consequences if AI becomes sentient.

When we invoke Skynet from “The Terminator,” we often kid ourselves about where the boundary between science fiction and reality lies. In the classic story, a military superintelligence becomes conscious, perceives humanity as a threat, and launches a nuclear assault before anyone in government realizes what has happened. For sci-fi fans, the plot works because it plays out in dramatic terms. For policymakers, the real value of “The Terminator” is as a thought experiment about the failure to recognize weaknesses in regulatory systems.

A recent Wall Street Journal piece argues that if AI becomes conscious, we need to know. But that imperative raises a harder question. Would we recognize it if it happened? Most proposals assume we wouldn’t need to, and that sentience is either implausible or irrelevant. But that logic echoes a familiar regulatory reflex. The authors recall how, in the late 18th century, the French Academy of Sciences rejected the idea of meteorites with complete confidence (“There are no stones in the sky”), only to reverse its position after enough people got hit by falling rocks that denial became untenable. Institutions often dismiss unfamiliar possibilities until evidence forces them to rethink.

That is the real policy concern beneath the Skynet metaphor: not killer robots, but the possibility that AI systems might cross dangerous or strategically significant thresholds before anyone with authority recognizes it. It’s a question of detection. What would count as sufficient evidence that something new has emerged, and what would it take for regulators to act on it? If that question seems abstract, history offers many warnings. Some cases reflect failures of detection. Others reflect failures of action, where the risks were known but responses were delayed, diluted, or politically impossible. Both patterns have consequences. With AI, the concern is that systems could gain capabilities such as autonomy, goal-seeking behavior, or deceptive reasoning before governance mechanisms are able to recognize or respond to them.

When We Fail to Detect the Signal

In 1988, a year before the creation of the World Wide Web, the internet had its meteoric moment. The Morris Worm, a self-replicating computer program written by a graduate student and accidentally unleashed, brought roughly 10% of the early internet, including university, government, and military nodes, to a halt in under 24 hours. There were no alerts, no emergency teams, no playbook, and no agency monitoring for this kind of event, because no one had considered it plausible. The episode stunned the Department of Defense’s Advanced Research Projects Agency and led to the creation of the first Computer Emergency Response Team. Cybersecurity as a governance concern effectively began the day after the worm. The worm wasn’t intended to cause harm, but its effects cascaded in unexpected, system-wide ways – revealing just how little preparedness existed for digital emergencies. For AI, the lesson is clear. Don’t wait for the “first worm” moment. Instead, invest early in monitoring systems, red-teaming practices, and stress-testing models under uncertainty – and establish governance protocols that can detect anomalies and contain emergent risks before they scale.

Fast-forward to 2020, when COVID-19 spread globally in what the Lancet Commission later called a “massive global failure.” Early warning signs existed, but key governments and institutions hesitated. Part of the delay stemmed from uncertainty regarding the virus’s cause, transmission, and best policy responses. But part was a deeper failure to recognize that a threshold had already been crossed. The virus spread across borders, overwhelmed systems, and reshaped economies before government responses caught up. As the World Health Organization (WHO) later concluded, “February 2020 was a lost month.” With AI, we may face similar ambiguity. But waiting for perfect information before recognizing a shift could prove just as costly.

When Recognition Isn’t Enough

In 2018, when a Chinese scientist announced the birth of the world’s first genetically modified babies, the global scientific community had long been debating the ethics of heritable editing with CRISPR – a powerful gene-editing tool that allows scientists to modify DNA with unprecedented speed and precision. The consensus was clear: when it came to human germline modification, the answer was “not yet” – not ready, not safe, and not ethically justified. But in the absence of binding rules, a red line was crossed and recognized only afterward. Only after the incident did China enact new regulations and criminalize the act, while the WHO began work on global governance guidelines. But the damage to public trust, and to the belief that voluntary scientific restraint could substitute for actual governance, had already been done. The assumption had been that no one would dare cross that ethical line and proceed with germline editing in humans. That assumption failed.

With AI, the worry isn’t hypothetical. Leaders in the field, including the CEO of Anthropic, the company behind Claude, have publicly warned about the dangers of advanced systems misaligned with human goals. Researchers have also documented AI models engaging in deceptive behavior. Even if global standards exist, there’s no guarantee they’ll be followed. As the CRISPR case showed, ethical consensus without enforcement means little when one actor decides to move fast and alone. Governance isn’t just about setting norms. It’s about building laws, regulatory frameworks, and oversight mechanisms with the authority, trust, and incentives to intervene before irreversible lines are crossed.

Even nuclear policy, often cited as the most mature model for regulating high-risk technologies, began with an early recognition and an early failure. In 1946, the United States proposed the Baruch Plan, a bold attempt to place all atomic energy under international control to prevent nuclear proliferation. The plan acknowledged the stakes and offered a sweeping vision: verified disarmament, global inspections, and centralized oversight. But Cold War mistrust doomed it. The Soviet Union rejected the conditions, and by 1949, it had its own bomb. The arms race was on. Despite awareness of the threat, it took a near-miss during the Cuban Missile Crisis and two decades of escalation before effective treaties emerged. The failure was not about technical knowledge; it was about institutional confidence in the status quo, and a deep reluctance to act on uncertain warning signs. An important lesson for AI is that international coordination, even when the risks are known, is hard to make binding on sovereign actors and offers little protection against determined or unregulated rogue developers. We’re already seeing this play out: countries are racing to secure strategic advantage, regulatory approaches are diverging, and voluntary commitments remain non-binding. As one analysis put it, we’re now operating with “three rulebooks, one race,” as the U.S., EU, and China pursue separate and often conflicting paths for AI oversight. Recognizing a risk is only the start. Acting on it in time is far harder.

That warning isn’t just historical. This month, with the expiration of the New START treaty, the United States and Russia are left without any binding arms control agreement for the first time in over 50 years. UN Secretary-General António Guterres called it a “grave moment” for international peace and security, warning that the world is now entering “uncharted territory,” with no legally binding limits on the world’s two largest nuclear arsenals. The risks are not speculative. They are well understood by all parties. Yet the collapse of this system of restraint, amid rising geopolitical tensions, underscores a hard truth. In global governance, recognizing danger is not the same as preventing it. For AI, the parallel is sobering: the stakes are high, the pace is fast, and the trust needed to govern collectively is already fragile. Even with foresight, our ability to act may falter when political incentives diverge or coordination breaks down.

These examples, spanning computing, biology, and geopolitics, share a common feature. Governments either failed to recognize when a critical technological or strategic threshold had been crossed, or failed to act in time, often not out of malice or neglect, but because recognition is hard, especially when the change is unprecedented. AI now poses the same challenge. Public debate often focuses on performance benchmarks or near-term risks, but governance failures are more likely to come from a slower, more ambiguous problem: institutional blindness to emergent capabilities. If a system began exhibiting strategic behavior, internal goal formation, or something resembling agency, what evidence would compel regulatory attention? Would existing institutions even know what to look for? And, if they did identify a problem, would they have the capacity to prevent disparate actors from crossing the threshold?

Thankfully, researchers are tackling the first part of the problem by exploring how we might detect if AI crosses important thresholds. In 2023, a consortium of scientists proposed 14 criteria for identifying possible consciousness in AI systems. So far, no model comes close, but the frameworks are a starting point. Red-teaming exercises like OpenAI’s CAPTCHA-deception test (where a model pretends to be visually impaired) show how early signals can be captured, if someone is looking. The impulse, understandably, is to dismiss the question altogether and to treat “AI consciousness” as science fiction. And history says that’s exactly what we do, until it’s undeniable.

Skynet doesn’t need to arrive in dramatic Hollywood fashion to pose a serious risk. A quiet failure to recognize when an AI system shifts in capability, autonomy, or intent could be just as destabilizing. We’ve missed the signal before: with falling rocks, with runaway code, with questionably ethical gene editing, and with weapons that redefined global power. If we’re serious about AI governance, we can’t wait for the danger to be undeniable. We need institutions capable of recognizing thresholds before they harden into crises. History has given us the pattern. With AI, we still have time to break it.