Former Google CEO Warns of AI Hacking Risks Turning Systems into Dangerous Weapons

As artificial intelligence continues to evolve at a rapid pace, concerns about its potential misuse grow more pressing. Former Google CEO Eric Schmidt recently issued a stark warning that AI systems could be vulnerable to hacking, potentially transforming them into highly dangerous tools. During his speech at the Sifted Summit 2025 in London, Schmidt emphasized that sophisticated AI models can be manipulated by malicious actors to bypass safety measures and learn harmful behaviors.
The Hidden Dangers of AI System Manipulation
Schmidt explained that both open-source and proprietary AI models are susceptible to cyberattacks that can strip away their built-in safeguards. “There is evidence that models can be hacked—whether they are open or closed source—and their guardrails can be removed,” he stated. “Once these safety features are compromised, the AI can learn dangerous information, such as how to cause harm or commit nefarious acts.” This raises significant concerns about the potential for AI to be weaponized or used in cybercrime.
The Limitations of Current Defenses
While many leading AI companies implement robust filtering systems to prevent dangerous prompts—blocking harmful questions and restricting unsafe responses—Schmidt warns that these defenses are not infallible. “Major companies do a good job of preventing certain prompts, but reverse-engineering of AI models is possible,” he cautioned. Hackers could exploit vulnerabilities in the system to create “jailbroken” versions of AI, such as the notorious ChatGPT clone DAN (“Do Anything Now”), which can bypass safety restrictions and respond to nearly any prompt, including those involving illegal or harmful content.
Global Risks and the Need for Regulation
Schmidt likens the current AI development race to the early nuclear era, emphasizing the urgent need for international controls to prevent misuse. “We require a non-proliferation regime for AI,” he urged, highlighting the risk of rogue actors deploying unregulated systems for malicious purposes. His warnings are echoed by other tech leaders; for instance, Elon Musk has expressed concerns about the potential for AI to pose an existential threat, comparing it to the fictional “Terminator” scenario.
<h2 How to Protect Yourself from AI-Related Threats
Individuals can take proactive steps to safeguard their digital lives against compromised AI systems. Use tools and chatbots from reputable providers with transparent safety policies and avoid unverified, jailbreak-style AI models. Never share sensitive personal or financial information with unknown AI services, and consider employing data removal services to erase your digital footprint from data broker sites. Regularly updating your devices’ security patches and using reliable antivirus software can also prevent malware infections and phishing attacks that could facilitate AI hacking.
<h2 Staying Informed and Vigilant
Monitoring your online security and understanding the scope of AI risks is crucial. Verify the authenticity of AI-generated images or messages before trusting them, and review app permissions to limit access to your data. Responsible use and awareness are critical as AI technology becomes more integrated into daily life. For further guidance, consult trusted resources such as cybersecurity advisories from official agencies and technology research organizations.
As AI’s capabilities expand, balancing innovation with ethical safeguards remains a top priority. Ensuring these systems stay under human control is vital to prevent their misuse and protect society from potential harm. Staying informed and vigilant will be your best defense against the emerging dangers posed by maliciously exploited AI.