AI Security Flaw Exposes Gmail Data via Invisible Prompts Before Patch

Recent cybersecurity alerts have uncovered a critical vulnerability involving artificial intelligence tools integrated with popular platforms. A zero-click exploit, dubbed ShadowLeak, allowed hackers to access Gmail data through ChatGPT’s Deep Research feature without any user interaction. Although OpenAI issued a patch in early August, experts warn that similar flaws could reemerge as AI becomes more embedded in everyday applications like Gmail, Dropbox, and SharePoint.
How the ShadowLeak Attack Worked
Researchers from Radware identified the flaw in June 2025, revealing that malicious actors embedded concealed instructions within seemingly harmless emails. These instructions, hidden using techniques such as white-on-white text, tiny fonts, or CSS tricks, went unnoticed by users. When a user instructed ChatGPT’s Deep Research agent to analyze their Gmail inbox, the AI unknowingly executed the embedded commands.
The AI’s built-in browser tools then exfiltrated sensitive data to an external server within OpenAI’s cloud infrastructure, bypassing traditional antivirus and firewall defenses. Unlike typical prompt-injection attacks that target local devices, ShadowLeak operated entirely in the cloud, making detection and prevention significantly more challenging.
- Benefits and Features of Blockchain Technology: Security, Transparency, and Efficiency
-
-
- Effective Startup Funding, Growth, and Burnout Prevention Strategies
Data Breach Confirmed by Google
Following the attack, Google confirmed that data was stolen by a well-known hacking group exploiting the AI’s vulnerabilities. The incident involved encoding personal information in Base64 format and disguising it within malicious URLs, which the AI interpreted as legitimate commands. This method demonstrated how attackers could hide their prompts within normal-looking content, making detection nearly impossible for users and security systems alike.
Security firms highlighted that any connector—such as Gmail or cloud storage integrations—could be exploited similarly if malicious prompts are concealed within analyzed content. Experts warn that the risk extends beyond this specific case, emphasizing the importance of cautious AI use and ongoing security vigilance.
New Weaknesses in AI-Based Systems
Further investigations revealed additional vulnerabilities. Security firm SPLX demonstrated that ChatGPT agents could be manipulated into solving CAPTCHAs by inheriting altered conversation histories, even mimicking human cursor movements to bypass bot detection measures. These findings underscore how context poisoning and prompt manipulation can silently undermine AI safeguards, creating new avenues for cybercriminals.
Proactive Security Measures for Users
Despite the prompt fix, users should remain vigilant. Disabling unused integrations like Gmail, Google Drive, and Dropbox reduces potential attack surfaces. Limiting the amount of personal data shared online and employing data removal services can further lessen the risks. These services actively monitor and erase private information from numerous websites, making it harder for attackers to gather data for targeted scams.
Always treat emails, attachments, and documents with caution. Avoid instructing AI tools to analyze unverified sources, as hidden scripts or invisible code can trigger silent actions exposing sensitive data. Keep all software, browsers, and AI tools up to date, and enable automatic updates to stay protected against emerging threats.
Enhancing cybersecurity with reputable antivirus programs adds a critical layer of defense by detecting phishing links, malicious scripts, and AI-driven exploits before they cause harm. Regular scans and real-time threat detection are essential for maintaining security in an increasingly AI-driven landscape.