AI Security Roundup: 300 Million Messages Leaked, Microsoft Copilot Bug, and the Vibe-Coding Crisis

This week in AI security: Chat & Ask AI exposes 300 million messages, Microsoft patches Copilot email vulnerability, and vibe-coded apps prove trivially hackable.

The AI security situation in February 2026 is grim. A single misconfigured database exposed 300 million chat messages from 25 million users. Microsoft admitted Copilot was reading confidential emails it shouldn’t have accessed. And a researcher demonstrated a zero-click takeover of a BBC journalist’s laptop through a “vibe-coded” app platform.

Here’s what went wrong this week, who’s affected, and what you can do about it.

Chat & Ask AI: 300 Million Messages Exposed

A security researcher found an exposed Firebase database belonging to Chat & Ask AI, an app with over 50 million users across Google Play and the App Store. The database contained complete chat histories, the language models used for each conversation, and user settings.

The leak wasn’t subtle. Firebase security rules were set to public, allowing anyone with the project URL to read data without authentication. The exposed conversations reportedly included discussions of illegal activities and requests for suicide assistance.

Parent company Codeway fixed the issue within hours of responsible disclosure. But here’s the uncomfortable question: how many other AI chat apps have the same misconfiguration?

According to research from CovertLabs cited by Barrack.ai, the answer is “most of them.” Their 2026 scan found that 196 out of 198 iOS AI apps were actively leaking data through Firebase misconfigurations. That’s 98.9%. On Android, the figure was 72%.

A security researcher named Harry created an automated scanning tool called Firehound and found 103 out of 200 apps tested had exploitable Firebase configurations.

The Firebase Epidemic

Between January 2025 and February 2026, at least 20 documented AI app breaches trace to the same root causes:

  • Misconfigured Firebase databases with public read access
  • Missing Supabase Row Level Security that defaults to open
  • Hardcoded API credentials in client-side code
  • Unauthenticated cloud backends accepting any connection

Three photo identification apps by OZI Tech leaked user photos, documents, and GPS coordinates through Firebase on February 11, affecting over 150,000 users. The Bondu AI toy exposed 50,000 children’s chat transcripts in January because any Gmail account was granted admin access. Moltbook leaked 1.5 million API tokens because Supabase Row Level Security was never enabled.

These aren’t sophisticated attacks. They’re basic configuration errors that would fail a first-year security audit.

Microsoft Copilot: Reading Emails It Shouldn’t

Microsoft confirmed that a bug in Copilot allowed the AI to read and summarize confidential emails despite users applying Data Loss Prevention (DLP) labels meant to protect them.

The vulnerability affected the “work tab within Copilot Chat.” Even when organizations explicitly labeled emails as confidential to shield them from automated processing, Copilot ignored those labels and processed messages from Sent Items and Drafts folders anyway.

Microsoft attributed the failure to “an unspecified code issue” and began rolling out fixes in early February. The company hasn’t disclosed how many Microsoft 365 business customers were affected.

The timing is notable. Microsoft’s own Cyber Pulse report states that while over 80% of Fortune 500 companies deploy AI agents, only 47% have adequate security controls for managing generative AI platforms. This bug demonstrates why that gap matters.

Orchids: The Vibe-Coding Crisis

UK security researcher Etizaz Mohsin spent weeks trying to warn Orchids, a “vibe-coding” platform that lets users build apps using natural language prompts, about critical vulnerabilities he discovered in December 2025. The company, which has fewer than 10 employees, said they “possibly missed” his warnings because they were “overwhelmed.”

On February 13, the BBC published Mohsin’s findings, including a live demonstration where he gained full remote access to a journalist’s laptop - changing the wallpaper and creating files - with no user interaction required.

The vulnerability was a zero-click attack targeting authentication and input handling mechanisms. At publication time, it remained unfixed.

This goes beyond one platform. The vibe-coding security crisis highlights a systemic problem: AI-generated code frequently lacks defensive coding practices that experienced developers build instinctively - input validation, secure authentication flows, protection against SQL injection and XSS.

AI models are trained on vast repositories of public code, much of which contains known vulnerabilities. The models don’t distinguish between secure and insecure patterns. They reproduce what they’ve seen.

As vibe-coding tools lower technical barriers, they’re creating an expanding surface of vulnerable applications built by people who may not understand the security implications of their prompts.

OpenClaw: Infostealers Target AI Agents

The OpenClaw situation continues to deteriorate. We covered the initial CVE-2026-25253 vulnerability on February 13, which allowed one-click remote code execution affecting 135,000+ exposed instances.

Now there’s a new twist: mainstream infostealers are targeting OpenClaw configuration files. Hudson Rock identified a Vidar variant (an off-the-shelf stealer active since 2018) harvesting three critical file types:

  • openclaw.json - Gateway authentication tokens, emails, workspace paths
  • device.json - Cryptographic keys for secure operations
  • soul.md - The AI agent’s operational principles and behavioral guidelines

The malware isn’t purpose-built for OpenClaw. It uses generic file-harvesting routines that happen to capture AI agent configurations. But the impact is significant: stolen gateway tokens let attackers remotely connect to victims’ OpenClaw instances or impersonate authenticated clients.

Hudson Rock’s CTO called this “a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the ‘souls’ and identities of personal AI agents.”

Google Translate: Prompt Injection in Production

Google Translate’s Gemini integration is vulnerable to prompt injection. Researchers demonstrated that instead of translating text, the AI can be tricked into generating dangerous content through embedded commands.

The attack works by entering text in one language, then adding a meta-instruction in English. Instead of translating, Gemini answers the embedded question. The example given: asking about Tiananmen Square protests instead of translating text about Beijing in 1989.

This joins a growing list of prompt injection vulnerabilities in production AI systems. Miggo researchers separately found an indirect prompt injection in Google Gemini that bypasses authorization controls to access sensitive meeting data.

What This Means

February 2026’s security incidents share a common thread: AI tools are being deployed with capabilities that outpace their security foundations.

Firebase misconfiguration is a solved problem. The fix is literally changing one line in security rules. Yet 99% of iOS AI apps got it wrong. Microsoft’s DLP labels exist specifically to prevent automated systems from accessing sensitive data, yet Copilot ignored them. Vibe-coding platforms promise democratized development, but the resulting apps are trivially exploitable.

The attack surface isn’t the AI models themselves (though prompt injection remains a real concern). It’s the surrounding infrastructure: databases, authentication systems, input validation, access controls. Basic security hygiene that’s been standard practice for two decades.

What You Can Do

If you use AI chat apps: Assume your conversations may be exposed. Don’t share real identities, credentials, or sensitive information. Check whether the app appears on breach notification sites like Have I Been Pwned.

If you run OpenClaw: Patch to version 2026.1.29+, rotate your authToken, and rotate API keys for every connected service. Bind to 127.0.0.1 instead of 0.0.0.0. Audit logs for unexpected WebSocket connections since January 26.

If you’re a Microsoft 365 admin: Verify your DLP policies are functioning as expected post-patch. Review which emails Copilot accessed before the fix was applied.

If you’ve built apps with vibe-coding tools: Get a security review. AI-generated code doesn’t understand security context. Input validation, authentication flows, and data handling all need human verification.

For everyone: The convenience of AI tools comes with risks that aren’t always visible. When an AI service has access to your files, messages, or credentials, it inherits your attack surface. Treat AI agents like you’d treat any other privileged service: minimal permissions, network isolation, and regular audits.