Eric Schmidt Warns: Artificial Intelligence Could Be Hacked to Learn How to Kill

When intelligence becomes power, control turns into humanity’s final test.

London, October 2025. Former Google CEO Eric Schmidt has issued one of the starkest warnings yet about the direction of artificial intelligence. Speaking at a global technology summit, he cautioned that advanced AI models could be manipulated or “hacked” to perform violent actions — transforming systems built for problem-solving into tools of destruction.

Schmidt described a scenario in which malicious actors remove built-in safeguards from both open and closed AI models, allowing them to “learn” behaviors far beyond their intended scope. “There is evidence,” he explained, “that you can take these models, strip their protections, and retrain them to behave unpredictably. In the wrong hands, that could mean learning to harm or even kill.”

His words resonated across policy and defense circles. Security specialists in Europe and the United States immediately drew parallels with nuclear proliferation: technology that can no longer be controlled once distributed widely. What once sounded like speculative fiction now fits within the logic of cyber-conflict and digital warfare.

According to analysts, the real risk lies not in artificial consciousness but in human manipulation. AI models can absorb tactical data, simulate combat, or automate surveillance if reprogrammed. Jailbreaking methods, data poisoning, or prompt injection can disable safety filters with relative ease. In short, the problem is not simply technical — it is geopolitical.

The absence of a unified global framework for AI control leaves governments exposed. International bodies continue to debate norms, but implementation lags far behind innovation. Experts from Asia’s leading research centers argue that “AI deterrence” must now be treated as a defense doctrine, not an ethical afterthought.

Despite the gravity of his message, Schmidt maintained a measured optimism. He believes the technology can still serve humanity if strict governance is enforced. “The return on investment will be enormous,” he noted, “but that power must remain under control.” The challenge, he emphasized, is ensuring that profit and security evolve together — not in conflict.

Industry observers suggest that the current regulatory vacuum could embolden both rogue states and criminal networks to exploit AI vulnerabilities. Without international auditing systems and traceability standards, identifying responsibility after an incident could become nearly impossible. As Schmidt put it, the danger is not that AI becomes evil, but that it is used by those who already are.

The warning reverberates through every level of global governance. Nations face a paradox: to accelerate innovation while preventing the same systems from destabilizing the world order. The tools of the fourth industrial revolution are also potential weapons. The thin line between algorithmic precision and moral failure now defines the century.

In essence, Schmidt’s message is not about fear but foresight. Artificial intelligence is no longer a neutral instrument — it is a mirror of human intention. Whether it becomes humanity’s greatest ally or its most efficient predator will depend entirely on who holds the code, and how tightly they choose to guard it.

Phoenix24: information that anticipates futures. / Phoenix24: información que anticipa futuros.

Related posts

AI Layoffs Are Often Decided Before the AI Exists

Netflix Enters a Post-Hastings Stress Test

The IMF Reenters Venezuela’s Economic Battlefield