Data Leak Exposes Millions of Private Conversations in AI Companion Apps

In a startling cybersecurity breach, over 43 million private messages and hundreds of thousands of images and videos from popular AI companion apps have been exposed online. The leak involves two apps—Chattee Chat and GiMe Chat—developed by Imagime Interactive Limited, based in Hong Kong. The leak, discovered by cybersecurity firm Cybernews, highlights the significant vulnerabilities of trusting AI chatbots with deeply personal and sensitive information.
Unsecured Servers Reveal Personal and Intimate Data
On August 28, 2025, Cybernews researchers uncovered that Imagime Interactive had left an entire Kafka broker open to the public internet without any security measures. The unsecured server streamed real-time chat data—including links to personal photos, videos, and AI-generated images—affecting approximately 400,000 users across both iOS and Android platforms. The exposed content was described as “virtually not safe for work,” underscoring the severity of the breach and exposing a troubling gap between user trust and developer responsibility.
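For context, a Kafka broker is not meant to be reachable anonymously from the open internet. The sketch below shows the kind of broker-side hardening that was evidently missing—a hedged, minimal example of Kafka server properties that require SASL authentication over TLS and deny access to clients without explicit ACLs (the keystore paths and passwords shown are placeholders, not values from the incident):

```properties
# server.properties — minimal hardening sketch (illustrative values)

# Only accept connections over an authenticated, encrypted listener
listeners=SASL_SSL://0.0.0.0:9093
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
security.inter.broker.protocol=SASL_SSL

# TLS material (placeholder paths/passwords)
ssl.keystore.location=/etc/kafka/secrets/broker.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/etc/kafka/secrets/broker.truststore.jks
ssl.truststore.password=changeit

# Enforce ACLs: clients with no matching ACL get nothing
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

With `allow.everyone.if.no.acl.found=false`, even a client that can reach the port cannot read topic data without both valid credentials and an explicit ACL grant—two layers the exposed server apparently lacked.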
Details of the Data Breach and Its Implications
While full names and email addresses were not part of the breach, researchers noted that IP addresses and device identifiers were accessible, raising concerns about user tracking and identification. Users reportedly sent an average of 107 messages to their AI companions, leaving a digital footprint that could be exploited for identity theft, blackmail, or harassment. Purchase logs revealed some individuals invested as much as $18,000 in chatting with their AI girlfriends, suggesting the developer earned over a million dollars before the leak was discovered.
Inadequate Security Measures and Potential Risks
Despite privacy policies claiming user data was protected, the server lacked basic authentication and access controls. Anyone with the link could access private conversations, images, and videos—gross negligence in safeguarding user information. Cybernews promptly alerted Imagime Interactive, and the server was taken offline in mid-September. By then, however, it had already appeared on public IoT search engines, meaning malicious actors could have found and accessed the data earlier. The possibility of cybercriminals leveraging this data for sextortion scams, phishing, and reputational damage remains a serious concern.
Lessons and Recommendations for Protecting Your Privacy
This incident underscores the importance of cautious engagement with AI chat apps, especially those handling sensitive data. To safeguard your privacy, avoid sharing personal or confidential content through such platforms. Choose applications with transparent privacy policies and proven security practices. Consider using data removal services to erase personal information from the web—these services actively monitor and delete your data from numerous sites, reducing the risk of exploitation.
Additionally, installing robust antivirus software, enabling multi-factor authentication, and regularly checking if your email has been involved in breaches can significantly enhance your security. Using a reputable password manager and ensuring all accounts have unique, strong passwords further protect against unauthorized access.
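As a small illustration of the unique-password advice above, here is a minimal sketch in Python that generates a strong random password using the standard library's cryptographically secure `secrets` module (the function name and length are illustrative choices, not from any particular password manager):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password containing lowercase, uppercase,
    digit, and symbol characters, using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until all four character classes are present.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

A password manager does the same job at scale: every account gets a distinct random secret, so one breached service cannot unlock the others.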
The Growing Need for Industry Accountability
This breach highlights the urgent need for AI developers to adopt higher security standards and greater accountability. As AI companion apps become more popular, ensuring user privacy and data protection must be a top priority. Cybersecurity awareness and responsible data handling are essential to prevent future privacy disasters and maintain user trust in digital environments.