How AI-Powered Email Summaries Can Be Exploited to Hide Phishing Attacks

The Ubiquity of Artificial Intelligence in Our Daily Lives
Artificial intelligence (AI) has become an integral part of modern technology, appearing in devices ranging from smartphones and vehicles to household appliances like washing machines. Just recently, I came across a new AI-enabled gadget, underscoring how pervasive this technology has become. While the advancements might seem high-tech or even futuristic, there’s no denying that AI has significantly streamlined our daily routines and improved productivity.
Transforming Work and Creativity with Generative AI
Generative AI tools, such as chatbots like ChatGPT, are among the most recognizable examples of this trend. They assist with writing, creative projects, customer service, and more. However, despite their benefits, these AI systems are not without vulnerabilities, especially as they become more integrated into essential productivity tools.
Google Workspace’s AI Model and the Emerging Security Threat
If you use Google Workspace — including Gmail, Docs, or Sheets — you might be familiar with Gemini, Google’s powerful AI model embedded within these apps. Recent research indicates that attackers are now exploiting this AI to manipulate email summaries, embedding malicious prompts that can deceive users and security systems alike.
What Is Prompt Injection and How Does It Work?
Researchers from Mozilla’s 0Din team have identified a vulnerability in Google’s Gemini that allows malicious actors to insert hidden commands into email summaries. This technique, called prompt injection, involves embedding invisible instructions within emails that trick AI into acting on concealed prompts.
- Attackers embed invisible commands using HTML and CSS, setting font size and color to make prompts undetectable to users.
- When Gemini generates a summary, it interprets these hidden instructions, which may include fake security alerts or misleading instructions.
For example, a proof-of-concept demonstrated how Gemini could falsely warn a user about a compromised password and include a fake support number, creating a convincing scam scenario. Because these summaries are integrated into Google Workspace, users tend to trust the information they see, making this attack particularly effective.
Current Defenses and Google’s Response
Google has been aware of prompt injection issues and has implemented measures to prevent such attacks since 2024. Despite these efforts, the recent findings suggest that some vulnerabilities still exist, allowing attackers to bypass existing protections.
In a statement, a Google spokesperson confirmed ongoing efforts to strengthen security, noting:
“Defending against prompt injections and similar threats remains a top priority. We have deployed multiple safeguards and continue to improve them through rigorous testing and red-teaming exercises to stay ahead of adversaries.”
Google also assured that there have been no reports of active exploitation of this specific vulnerability so far.
Protecting Yourself from AI-Driven Phishing Attacks
Practical Tips to Stay Safe
- Verify critical information: Always cross-check security alerts, phone numbers, or links through official sources rather than relying solely on AI summaries.
- Be cautious with unexpected messages: If an email appears suspicious or is from an unknown sender, avoid using the AI summary feature for that message. Read the original email directly.
- Watch out for urgency cues: Phishing attempts often create a sense of urgency or request sensitive data. Pause and scrutinize such prompts before responding.
- Use robust antivirus software: Protect your devices with reputable antivirus solutions that can detect malware, ransomware, and phishing scams.
- Keep your software updated: Regularly update Google Workspace and your browser to benefit from the latest security patches.
- Limit sharing personal data: Reduce your digital footprint by removing personal information from data broker sites, making targeted scams more difficult for attackers.
Managing AI Features and Mitigating Risks
Since there is no single toggle to disable all AI functionalities like Gemini across Google services, consider turning off specific features if you’re concerned about security. For example, on desktop or mobile devices, you can disable or limit AI summaries:
Disabling Gemini on Desktop and Mobile
- On iPhone: Access your Gmail app settings and look for options related to smart features or summaries to disable.
- On Android: Settings may vary depending on the device, but generally, you can navigate to app-specific AI features and turn them off.
While these steps reduce the risk, be aware that some AI integration may still be active, and complete removal might not be possible yet.
Understanding the Broader Context of AI and Phishing
This emerging vulnerability highlights how phishing tactics are evolving in tandem with AI technology. Instead of relying solely on obvious red flags like misspelled URLs or suspicious attachments, attackers now target trusted AI-powered systems that help filter and interpret messages. As AI becomes more embedded in productivity tools, the potential for subtle prompt injections to facilitate social engineering grows.
How Safe Are You with AI-Generated Content?
As AI-driven scams become more sophisticated, users must develop a cautious approach when trusting AI-generated summaries and alerts. Regularly verifying critical information and remaining vigilant can significantly reduce your risk of falling prey to these advanced social engineering tactics.