Chat & Ask AI Leaked 300 Million Messages From 25 Million Users - And It's Not the Only App

A Firebase misconfiguration exposed complete chat histories from one of the most popular AI apps. A researcher found 196 of 198 AI apps he tested had the same problem.

A security researcher going by “Harry” recently discovered that Chat & Ask AI - one of the most popular AI chatbot apps with over 50 million users - had left its entire backend database open to anyone who knew where to look. The result: roughly 300 million messages from 25 million users were accessible without any authentication.

The exposed data included complete chat histories, timestamps, which AI model each user selected, what they named their chatbot, and how they configured it. When Harry analyzed a sample of about 60,000 users and one million messages, the contents were exactly as bad as you’d expect from a database no one was watching: conversations about self-harm, requests for instructions on illegal activities, and deeply personal queries that users clearly expected to stay private.

But here’s the part that makes this story bigger than one app. When Harry started testing other AI apps for the same vulnerability, he found it almost everywhere. Of 198 iOS AI apps he scanned, 196 were leaking user data.

What Went Wrong

The vulnerability wasn’t a sophisticated exploit. It was a Firebase misconfiguration - one of the most documented and preventable security failures in mobile development.

Firebase is Google’s backend-as-a-service platform. It handles databases, authentication, and storage for mobile apps. Its security rules can be left wide open - for instance, by shipping with the permissive “test mode” rules meant only for development - which means anyone with the project URL can read, modify, or delete data without logging in. It’s the digital equivalent of leaving every filing cabinet in your office unlocked with the front door open.
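As an illustration - not Codeway’s actual configuration - a Realtime Database left in this state has rules that look like this:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

Locking it down so each signed-in user can only reach their own data is a few lines in the same file (the `users/$uid` layout here is just a common convention, not anything specific to the apps in this story):

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

That’s the entire distance between “private by design” and “300 million messages in the open.”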

Chat & Ask AI, developed by a company called Codeway (operating under the entity Deep Flow Software Services-FZCO), used Firebase for its backend but never locked down the security rules. The app itself is a wrapper - it doesn’t run its own AI model. Instead, it lets users talk to models from OpenAI, Anthropic, and Google, including ChatGPT, Claude, and Gemini. The conversations users had with those models were stored in Codeway’s Firebase database, wide open.

It’s Not Just One App

Harry published his findings through Firehound, a registry run by security firm CovertLabs that documents insecure apps. The numbers are staggering: across nearly 200 tested AI apps, the registry found over 406 million exposed files and records.

The affected apps span categories including chatbots, image generators, photo animators, AI homework helpers, fitness coaches, and social networking tools. Chat & Ask AI was the worst offender by volume, but it had plenty of company. The vulnerabilities included misconfigured cloud storage granting anonymous access, hard-coded API secrets baked into app binaries that anyone could extract, and Firebase rules that allowed full database downloads without authentication.
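To see how thin the barrier is, here’s a minimal sketch of the kind of check a researcher can run. Firebase’s Realtime Database exposes a documented REST API: appending `/.json` to a database URL returns data whenever the rules allow unauthenticated reads, and a 401/403 error when they don’t. The hostname below is a made-up placeholder, not any real project.

```python
import json
import urllib.error
import urllib.request


def probe_url(db_host: str) -> str:
    """Build the REST URL for a top-level read. shallow=true asks
    Firebase to return only top-level keys, not the whole database."""
    return f"https://{db_host}/.json?shallow=true"


def is_world_readable(db_host: str, timeout: float = 5.0) -> bool:
    """Return True if the database answers an unauthenticated read.
    Properly configured rules reject this with Permission denied."""
    try:
        with urllib.request.urlopen(probe_url(db_host), timeout=timeout) as resp:
            json.load(resp)  # body parsed as JSON -> the read succeeded
            return True
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # blocked by rules, or unreachable
```

This isn’t how Firehound necessarily works at scale - it’s only meant to show that “full database download without authentication” requires nothing more exotic than a single HTTP GET.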

This isn’t an App Store problem specific to Apple. Firebase misconfigurations have been documented across Android apps for years. What’s new is the scale of sensitive data involved when the leaky apps are AI chatbots. People tell AI assistants things they wouldn’t type into a search engine.

Codeway’s Response

To Codeway’s credit, they moved quickly once notified. According to multiple reports, the company resolved the issue across all of its apps within hours of Harry’s responsible disclosure. Firehound removes apps from its public registry once developers confirm they’ve fixed the problem, and Codeway’s entries have been delisted.

But “fixed quickly after being caught” is not the same as “never happened.” Those 300 million messages were sitting exposed for an unknown period before the researcher found them. There’s no way to know who else may have accessed that data before the door was closed.

Why This Matters

The rush to ship AI-powered apps has created a security crisis that most users don’t know about. When you type a message into Chat & Ask AI or one of the hundreds of similar wrapper apps in the App Store and Play Store, your conversation passes through the app developer’s infrastructure before reaching the AI model. That infrastructure is only as secure as the people who built it.

The fundamental problem is that these AI wrapper apps collect extraordinarily sensitive data - the kind of unfiltered questions and confessions people share when they think they’re talking to a machine in private - while often being built by teams that treat security as an afterthought.

Consider what was in those 300 million messages. People asking about mental health crises. People exploring questions they’d never voice to another human. People assuming their conversations would stay between them and the AI. Instead, that data sat in a database anyone could read.

What You Can Do

Check which AI apps you use. If you’re using a third-party AI chat app rather than the official ChatGPT, Claude, or Gemini apps, your conversations are passing through that developer’s servers. The smaller the developer, the less likely they’ve invested in security infrastructure.

Prefer official apps or direct API access. The official apps from OpenAI, Anthropic, and Google handle their own backend security. Third-party wrappers add an extra layer of risk for features that are often cosmetic - a nicer interface, bundled model access, or a lower price point.

Treat AI chat like email, not like a diary. Assume anything you type could be read by someone other than the AI. Don’t share passwords, financial details, or deeply personal information through any AI chat interface unless you’ve verified how the data is stored.

Check Firehound. The CovertLabs registry lets users see which apps have been flagged for data exposure. If an app you use appears there, delete it and change any passwords you may have discussed through it.

The Bottom Line

One misconfigured database setting exposed 300 million AI conversations. The researcher who found it then discovered the same flaw in nearly every AI app he tested. The AI gold rush has produced hundreds of apps competing to be the prettiest wrapper around someone else’s model - and far too many of them are shipping with the digital equivalent of no locks on the doors.