When security researcher Harry scanned 200 iOS AI apps for vulnerabilities in February, he expected to find a few bad actors. Instead, he found 103 with the same basic flaw - open databases that anyone could read. That 51% failure rate was just the beginning.
Subsequent research revealed the problem is far worse. In one study of 198 iOS AI apps, 196 were actively exposing user data through misconfigured cloud backends - a failure rate of nearly 99%. The scan cataloged over 406 million exposed records belonging to 18 million users.
The AI chat app gold rush has created a security disaster.
The Chat & Ask AI Breach: 300 Million Exposed Messages
The most damning example came in February when Harry discovered an exposed Firebase database belonging to Chat & Ask AI, a popular “wrapper” app with over 50 million downloads that lets users access ChatGPT, Gemini, and Claude through a single interface.
The database contained 300 million private messages from 25 million users. Complete chat histories. Custom bot names. Timestamps. The AI models users chose. Everything.
The contents were chilling. According to Hackread, researchers found “deeply personal and disturbing” requests, including discussions of illegal activity and queries about suicide assistance. The intimacy people share with AI chatbots - in the belief that the conversation is private - makes these exposures particularly harmful.
The vulnerability? Firebase security rules set to public. A single configuration line that allows anyone with the project URL to read, modify, or delete data without authentication.
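The difference between exposed and locked-down is a few lines of rules. The first snippet below is the kind of wide-open Firebase Realtime Database configuration described above; the second is a minimal per-user alternative. The `users`/`$uid` structure is illustrative, not taken from the app in question.

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

With rules like those, anyone who knows the project URL can read or overwrite everything. A locked-down sketch restricts each user to their own subtree:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid == $uid",
        ".write": "auth != null && auth.uid == $uid"
      }
    }
  }
}
```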
Codeway, the Turkish developer behind Chat & Ask AI, fixed the issue within hours of being notified on January 20. But the data had been exposed for an unknown period, and there’s no way to know who accessed it.
A Pattern, Not an Outlier
Chat & Ask AI wasn’t an isolated failure. Since January 2025, at least 20 documented AI app breaches have exposed tens of millions of records through nearly identical preventable causes.
The hits keep coming:
McHire (McDonald’s AI recruiting platform): 64 million applicant records exposed through default credentials (“123456”) and insecure direct object references. Names, emails, interview transcripts - all accessible.
Cal AI: 3.2 million users’ health data exposed through an open Firebase backend combined with weak 4-digit PIN authentication. Calorie counts, weight logs, and dietary patterns laid bare.
Tea App: Two separate breaches exposing 72,000 government ID images and 1.1 million private messages through Firebase Storage and Database misconfigurations.
Moltbook: 4.75 million records including API tokens, emails, and agent messages leaked because Supabase Row Level Security was never enabled.
Bondu AI Toy: Children’s chat transcripts exposed because any Gmail account could gain admin access.
Chattee/GiMe Chat: 43 million intimate messages and 400,000 NSFW images leaked through an unauthenticated Kafka broker.
The root causes repeat endlessly: misconfigured Firebase databases, missing Supabase security policies, hardcoded API keys in client code, absent or default authentication on cloud backends.
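The Supabase failure mode in particular comes down to a single missing statement: tables are served to any client holding the project's public anon key unless Row Level Security is switched on. A hedged sketch of the fix, with hypothetical table and column names:

```sql
-- Without this, any client with the project's public anon key can
-- read every row; RLS is not enforced until explicitly enabled.
alter table public.messages enable row level security;

-- Illustrative policy: a signed-in user may read only their own rows.
-- Table name and user_id column are hypothetical.
create policy "read own messages"
  on public.messages
  for select
  using (auth.uid() = user_id);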
72% of Android AI Apps Leak Secrets
The problem extends beyond individual breaches to a systemic failure across the AI app ecosystem.
Cybernews analyzed 1.8 million Android apps on Google Play and found that 72% of AI apps leak hardcoded secrets. On average, each AI app leaks 5.1 secrets, with 81% related to Google Cloud Project identifiers, endpoints, and API keys.
The scale is staggering. Leaky Firebase and Google Cloud Storage instances have collectively exposed over 200 million files totaling nearly 730TB of user data.
The researchers identified 285 Firebase instances with no authentication at all, leaving them openly accessible to anyone. Those databases alone leaked 1.1GB of user data in that single scan.
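The check behind numbers like these can be automated with a few lines of code, which is why attackers scan at scale. The sketch below shows the idea - an open Firebase Realtime Database answers a plain unauthenticated GET on its `/.json` endpoint - with an illustrative project ID and simplified status-code interpretation; it is not the researchers' actual tooling.

```python
"""Sketch: probe whether a Firebase Realtime Database allows
unauthenticated reads. Project ID and interpretations are illustrative."""
import urllib.error
import urllib.request


def probe_url(project_id: str) -> str:
    # An open Realtime Database answers a plain GET on /.json
    return f"https://{project_id}.firebaseio.com/.json"


def interpret(status: int) -> str:
    # 200 means the rules allowed a public read;
    # 401/403 means security rules rejected the request.
    if status == 200:
        return "OPEN: database is publicly readable"
    if status in (401, 403):
        return "locked: rules deny unauthenticated reads"
    return f"inconclusive (HTTP {status})"


def check(project_id: str) -> str:
    try:
        with urllib.request.urlopen(probe_url(project_id), timeout=10) as resp:
            return interpret(resp.status)
    except urllib.error.HTTPError as exc:
        return interpret(exc.code)
```

Only ever point a probe like this at infrastructure you own or are authorized to test.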
Worse, attackers are already exploiting these vulnerabilities. The research found hundreds of AI Android apps with hardcoded secrets had already been breached by automated exploits. This isn’t theoretical - it’s happening now.
Why This Keeps Happening
The failure has a simple explanation: speed over security.
Firebase’s default “test mode” grants public read and write access for 30 days during initial setup. Developers, racing to launch AI apps into a hyper-competitive market, either forget to implement proper security rules after this period or deliberately extend the permissive configuration to avoid breaking their apps.
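This is what test mode actually generates for a Cloud Firestore project - a single time-based condition that makes the whole database world-readable and world-writable until the date passes (the date below is illustrative; Firebase sets it 30 days from project creation):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if request.time < timestamp.date(2025, 3, 1);
    }
  }
}
```

Nothing forces a developer to replace this before the deadline; pushing the date forward keeps the app working and the data exposed.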
“The breach was a familiar backend misconfiguration, made far more dangerous by the sensitivity of the data involved,” security expert James Wickett told Hackread.
AI apps compound the problem. Users treat chatbots like confidants, sharing information they’d never put in a form - medical concerns, relationship troubles, financial problems, dark thoughts. When those conversations leak, the damage extends far beyond typical data breaches.
A separate study by Escape found that among 5,600 “vibe-coded” apps (apps built rapidly using AI assistance), there were over 2,000 vulnerabilities and 400+ exposed secrets. Veracode research found that LLMs chose insecure code patterns 45% of the time when generating application code.
The tools people use to build AI apps quickly are themselves contributing to security failures.
What This Means For You
If you’ve used any third-party AI chat app - especially “wrapper” apps that provide access to multiple AI models - assume your conversations have been or could be exposed. There’s no way to know for certain without security researchers discovering and reporting the vulnerability.
The uncomfortable truth is that privacy-conscious users have two real options:
- Use official AI platforms directly - OpenAI, Anthropic, and Google have far more resources and incentive to secure their infrastructure than small app developers chasing the AI gold rush. They’re not perfect, but they’re not leaving databases open to the internet.
- Run models locally - Tools like Ollama let you run AI models on your own hardware. Your conversations never leave your machine. There’s no Firebase to misconfigure, no cloud database to expose. For anyone discussing sensitive topics with AI, this is the only true privacy guarantee.
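Getting started locally is a two-command affair. The commands below are the standard Ollama CLI workflow; the model name is an example - pick one sized for your hardware.

```shell
# Download a model once, then chat with it entirely on local hardware.
# Model name is illustrative; see Ollama's library for options.
ollama pull llama3.2
ollama run llama3.2
```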
Harry, the researcher who exposed Chat & Ask AI, built a tool at firehound.covertlabs.io to let users check if their apps are at risk. It’s a start, but the underlying problem requires more than individual vigilance.
The Bottom Line
The AI chat app market has grown faster than security practices can keep up. Until the industry demonstrates it can handle basic database configuration, treat every third-party AI app as a potential data leak. The question isn’t whether the next breach will happen - it’s which 300 million messages will be exposed next.