Cybernews researchers scanned 1.8 million Android apps on the Google Play Store and found a security crisis hiding in plain sight. Of the 38,630 apps that specifically marketed AI features, 72% had at least one secret - API keys, passwords, or database credentials - hardcoded directly into their source code.
The result: over 200 million files totaling nearly 730 terabytes of user data sitting in publicly accessible cloud storage buckets and databases, waiting for anyone to find them.
What Got Exposed
The leaked data spans every category you’d expect from AI apps that ask users to share personal information: private messages and chat histories, location data, contact lists, photos and videos, health information from fitness apps, and authentication credentials.
On average, each leaky app exposed 5.1 separate secrets. The vast majority - 81% - were related to Google Cloud services: project identifiers, Firebase database endpoints, and API keys that could grant access to backend systems.
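Secrets of this kind are detectable with a simple pattern scan, which is essentially what researchers (and attackers) run against decompiled app code. A minimal sketch - the regex matches the publicly documented shape of Google API keys (the literal prefix `AIza` followed by 35 URL-safe characters); the sample "decompiled" string is fabricated:

```python
import re

# Google Cloud API keys follow a documented shape: the literal prefix
# "AIza" followed by 35 characters from [0-9A-Za-z_-].
GOOGLE_API_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_keys(source_text: str) -> list[str]:
    """Return every substring that looks like a hardcoded Google API key."""
    return GOOGLE_API_KEY.findall(source_text)

# Fabricated line resembling decompiled app source:
decompiled = 'public static final String KEY = "AIza' + "A" * 35 + '";'
print(find_google_keys(decompiled))
```

Scanners like the ones used in this research apply dozens of such patterns - for AWS keys, Firebase URLs, OAuth tokens - across every file in an app package.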
Researchers identified 285 Firebase databases with no authentication at all. No password, no login, no barriers. Anyone who knew the database URL could read, modify, or delete everything stored there.
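"Anyone who knew the database URL" is not an exaggeration. Firebase Realtime Database exposes a documented REST interface, so when security rules allow public reads, the entire database dumps as JSON from a single unauthenticated GET. A sketch of what that looks like - the project name is hypothetical, and the network call itself is shown only in a comment:

```python
# When a Firebase Realtime Database has no security rules, its contents
# are readable via one unauthenticated HTTP GET against the documented
# REST endpoint. The project id below is hypothetical.
def firebase_export_url(project_id: str, path: str = "") -> str:
    """Build the REST URL that returns `path` (default: the whole DB) as JSON."""
    return f"https://{project_id}.firebaseio.com/{path}.json"

url = firebase_export_url("example-chat-app")
print(url)
# An attacker would simply fetch it, e.g.:
#   import urllib.request
#   data = urllib.request.urlopen(url).read()
```

The same URL accepts writes and deletes when rules are fully open - which is why "read, modify, or delete everything" is the accurate description.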
Already Under Attack
Here’s the part that makes this worse: many of these databases had already been compromised.
In 42% of the exposed Firebase instances, researchers found tables labeled “proof of concept” - the calling cards left by automated exploit tools that scan the internet for exactly these vulnerabilities. The databases weren’t just open; they were already being looted.
The research notes that these compromised databases appeared to be “largely unmonitored,” meaning the app developers likely had no idea attackers had already accessed their users’ data.
Why This Keeps Happening
The root cause is depressingly simple: developers are hardcoding credentials because it’s faster than setting up proper secret management.
When building an app, you need to connect to databases, cloud storage, payment processors, and AI model APIs. Each requires credentials. The correct approach is to store those secrets on servers and retrieve them at runtime. The shortcut is to paste them directly into the app's code, where anyone who decompiles the shipped package can read them.
The AI gold rush has made this worse. Developers racing to ship AI-powered apps before competitors aren’t spending time on security fundamentals. The Cybernews researchers note that time pressure and competitive markets mean security mechanisms often aren’t implemented before apps reach the store.
This isn’t an Android problem alone. A separate study using the Firehound scanning tool tested 198 iOS AI apps and found 196 of them leaking user data through similar misconfigurations. About 70% of the 156,000 iOS apps examined in the broader research also contained hardcoded secrets.
Real-World Casualties
The abstract numbers become concrete when you look at specific breaches.
Chat & Ask AI, a popular chatbot app with over 50 million downloads, exposed 300 million messages from 25 million users through a misconfigured Firebase database. The exposed conversations included discussions of self-harm, requests for help with illegal activities, and deeply personal questions users thought were private.
Video AI Art Generator, an AI video creation app, exposed 8.27 million files through an unsecured Google Cloud Storage bucket - including 1.57 million private user images and over 385,000 personal videos. The data had been accessible since the app launched in June 2023.
Both apps were developed by Codeway, a Turkish company. They fixed the vulnerabilities within hours of being notified, but the data had been sitting open for months or years before anyone told them.
What This Means
If you’ve used any AI app on Android or iOS that wasn’t made by OpenAI, Google, or Anthropic directly, your data may have passed through infrastructure exactly like this.
The problem is structural. Mobile app development favors speed over security. App store review processes don’t check for hardcoded credentials. Users have no way to know whether an app handles their data responsibly before downloading it.
The AI wrapper economy makes this particularly dangerous. Most AI chatbot apps don’t run their own models - they’re middlemen connecting users to GPT-4, Claude, or Gemini through APIs. Every conversation passes through the wrapper app’s servers, where it can be stored, logged, and - if the developer cut corners - exposed to the internet.
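Reduced to its essentials, a wrapper backend looks something like the following hypothetical sketch: every message passes through the developer's own code before reaching the model, so any storage along the way - here an in-memory list standing in for a Firebase database or cloud bucket - is only as safe as its configuration. The model call itself is omitted:

```python
# Hypothetical wrapper backend. Every user message transits the
# developer's code, where it can be stored and logged.
conversation_log: list[dict] = []   # in real apps: Firebase, GCS, etc.

def relay_to_model(user_id: str, message: str) -> str:
    # 1. The wrapper sees the plaintext message and stores it.
    conversation_log.append({"user": user_id, "text": message})
    # 2. It would forward the message to GPT-4/Claude/Gemini here
    #    (network call omitted) and store the model's reply as well.
    reply = f"[model reply to: {message!r}]"
    conversation_log.append({"user": user_id, "text": reply})
    return reply

relay_to_model("u1", "hello")
```

If `conversation_log` lives in a Firebase database with no rules, the scenario from the breaches above follows directly.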
What You Can Do
Audit your AI apps. Check which AI applications you have installed. If they’re third-party wrappers rather than official apps from major providers, consider whether you need them.
Assume your data is exposed. Treat AI conversations like unencrypted email. Don’t share passwords, financial details, or deeply personal information through any chat interface you don’t fully trust.
Check breach databases. Services like Have I Been Pwned can tell you if your email appeared in known breaches. If you’ve used AI apps, check periodically.
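Have I Been Pwned's companion Pwned Passwords service is worth knowing about too: it uses a k-anonymity scheme, so only the first five hex characters of your password's SHA-1 hash are sent to the API, and matching happens locally. A sketch of the client-side half - the actual HTTP request is shown only as a comment:

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-char prefix sent to the
    API and the suffix matched locally against the returned list."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query("password")
# Only `prefix` ever leaves your machine:
#   GET https://api.pwnedpasswords.com/range/<prefix>
# The response lists hash suffixes; a local match on `suffix` means
# the password has appeared in a known breach.
print(prefix)
```

The design means you can check a password against the breach corpus without ever revealing it - the inverse of how the apps in this research treated their users' data.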
Prefer official apps or local models. The official ChatGPT, Claude, and Gemini apps handle their own security. Better yet, run local models through tools like Ollama, where your conversations never leave your device.
The Bottom Line
Nearly three-quarters of Android AI apps are shipping with secrets exposed in their code. Hundreds of databases have been left open to the internet with no authentication. Automated attackers have already found and compromised many of them. And users have no way to know before they download.
The 730 terabytes of exposed data isn’t an anomaly. It’s what happens when an industry prioritizes shipping fast over shipping safe, and when app stores don’t check what developers are actually doing with user trust.