Every major AI company wants your conversations. The question is how much they take, what they do with it, and whether they tell you about it.
We reviewed the privacy policies, data practices, and opt-out mechanisms for eight popular AI assistants: ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, and Grok. The results paint a picture of an industry where privacy protections vary wildly, defaults almost always favor the company, and the apps built by the biggest tech companies tend to be the worst offenders.
The Rankings: From Least to Most Invasive
Incogni’s 2026 AI Privacy Ranking evaluated platforms across three categories: how they use data for model training, how transparent they are about it, and how much data their apps collect and share. The results largely track with a separate analysis by Captain Compliance.
Here’s how the major platforms stack up:
Least invasive:
- Le Chat (Mistral AI) - Collected and shared the least data of any platform tested, offers strong opt-out options, and its mobile apps are minimal in what they request.
- ChatGPT (OpenAI) - The most transparent about its practices. Clear privacy policy, straightforward opt-out for model training, and explicit documentation about what gets stored. That said, ChatGPT now shows ads on free and Go tiers, which means conversation topics inform ad targeting.
- Grok (xAI) - Decent opt-out functionality but loses points on transparency and the breadth of data it collects from X (formerly Twitter) interactions.
Middle of the pack:
- Claude (Anthropic) - Training on user conversations used to be strictly opt-in. That changed in October 2025, when Anthropic switched to a hybrid system in which free and paid personal users must manually opt out. Data is retained for five years when sharing is enabled. Enterprise and API customers are exempt.
- Copilot (Microsoft) - Enterprise versions come with no-train commitments, SOC 2 Type 2 certification, and HIPAA compliance. The consumer version is a different story. Concentric AI found that Copilot accessed nearly three million sensitive records per organization in the first half of 2025 - and in February 2026, Microsoft confirmed a bug that let Copilot bypass DLP policies and read confidential emails.
Most invasive:
- Perplexity - CEO Aravind Srinivas has been explicit about wanting more user data, including building a browser specifically to track users outside the app. Shares mobile identifiers, hashed emails, and cookie data with advertisers. Uses prompts for training on all plans. Has stored messages and user files without encryption.
- DeepSeek - Stores all data on servers in China. Collects keystroke patterns, device data, and IP addresses. The iOS app sent device information without encryption. Security researchers found code with built-in capability to send data to the Chinese government. Wiz Research documented a publicly accessible database containing over a million lines of chat histories. Multiple countries have banned or restricted it.
- Meta AI - Ranked dead last across multiple independent evaluations. Collects usernames, emails, phone numbers, precise location, and addresses. Shares data with third parties. No opt-out mechanism for model training. Data from Meta AI gets merged with your activity across Facebook, Instagram, WhatsApp, and every other Meta property.
What They All Have in Common
A Stanford HAI study examined six major AI developers (Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI) and found a consistent pattern: all of them use chat data by default to train their models, and some keep that data indefinitely.
The researchers also flagged a practice specific to multi-product companies like Google, Meta, Microsoft, and Amazon: user interactions get merged with data from their other products. Your search queries, purchase history, social media engagement, and AI conversations all get stitched together into a single profile.
Privacy documentation across the industry remains deliberately vague. Most policies use language that makes it difficult to understand exactly what happens to your data once you hit send. AI-related privacy incidents jumped 56% in 2024, and only 47% of people globally say they trust AI companies with their data.
None of these platforms offer end-to-end encryption for conversations.
The Browser Extension Problem
The platforms themselves aren’t the only threat. In July 2025, Urban VPN Proxy - a Chrome extension with over six million users and a Google “Featured” badge - shipped an update that silently intercepted every conversation users had with eight AI assistants: ChatGPT, Claude, Gemini, Copilot, Perplexity, DeepSeek, Grok, and Meta AI.
The extension sent the captured conversations to Urban VPN’s servers, where they were sold to advertisers. The harvesting was enabled by default through hardcoded flags with no way to disable it short of uninstalling. Seven other extensions from the same publisher had the same code, putting more than eight million users’ AI conversations up for sale.
The irony: several of these extensions marketed “AI protection” as a feature. Google eventually pulled the Chrome versions in December 2025, but the Edge versions remained available longer.
The Bright Spot: On-Device AI
Apple Intelligence stands out as the strongest privacy-first approach on the market. Processing happens on your device whenever possible. When a task needs more computing power, Apple's Private Cloud Compute sends only the relevant data to Apple silicon servers, processes it, returns the result, and deletes everything. No data is stored, none of it is accessible to Apple, and Apple publishes the server software images so independent researchers can verify those claims.
The trade-off is that Apple Intelligence is currently less capable than cloud-based competitors. But for users who care about privacy above all else, it’s the only major offering where on-device processing is the default rather than the exception.
How to Opt Out: Platform by Platform
Here’s how to disable training data sharing on each platform, current as of February 2026:
ChatGPT: Settings > Data Controls > turn off “Improve the model for everyone.” Takes effect immediately. Your conversations will still be stored for abuse monitoring but won’t feed into training.
Claude: Settings > Privacy > disable “Help improve Claude.” Only conversations accessed after October 8, 2025 are affected. Enterprise and API users are already excluded.
Gemini: Go to your Gemini Apps Activity page (Google renamed it “Keep Activity” in August 2025) and toggle it off. Also available: a “Temporary Chat” mode where conversations aren’t saved at all.
Copilot: For Microsoft 365 enterprise users, admins control data policies. For consumer Copilot, check Settings > Privacy and review what data sharing is enabled.
Grok: Settings > Privacy > opt out of training. Note that your X/Twitter posts may still be used separately under X’s own terms.
Perplexity: No clear mechanism for personal users to fully opt out of training. Enterprise customers are exempt.
DeepSeek: No opt-out mechanism. The only way to protect your data is to skip the consumer app and self-host the open-weight model locally (see the local-model sketch at the end of the next section).
Meta AI: No opt-out for AI training data. EU users can submit objection forms under GDPR, but the process is burdensome and outcomes are uncertain.
What You Can Do Right Now
1. Audit your settings today. Open each AI app you use and check whether training data sharing is enabled. If you haven't touched these settings, sharing is almost certainly on.
2. Use temporary or incognito modes. Claude, Gemini, and Perplexity all offer conversation modes where nothing is saved. Use them for anything sensitive.
3. Check your browser extensions. Any extension with permission to read the pages you visit can read your AI conversations. Audit what you have installed, remove anything you don't actively need, and be especially wary of VPN and ad-blocking extensions that request broad page access (a quick audit script follows this list).
4. Consider local alternatives. Running models locally through tools like Ollama means your conversations never leave your machine (a minimal example follows this list). The latest open-weight models are competitive for many tasks. We published a guide to self-hosting your own ChatGPT alternative earlier today.
5. Separate personal from professional. Don’t paste proprietary code, financial information, medical details, or personal identifiers into any cloud-based AI tool unless you’ve verified the data handling policy for your specific plan tier. Consumer and enterprise terms are often very different.
6. Watch for policy changes. AI companies update their privacy policies frequently, and the trend has been toward collecting more data, not less. Anthropic’s shift from opt-in to opt-out in late 2025 happened with minimal fanfare. Set a calendar reminder to re-check your settings quarterly.
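Two of those steps benefit from something concrete. For step 3, here's a minimal audit sketch in Python. It assumes Chrome on Linux with the default profile; the `EXT_DIR` path is the main assumption to adjust for macOS, Windows, or other Chromium-based browsers. It flags any installed extension whose manifest requests host patterns broad enough to read every page you visit, which is exactly the access the Urban VPN extensions used to harvest chats.

```python
import json
from pathlib import Path

# Assumption: default Chrome profile on Linux. On macOS the equivalent is
# ~/Library/Application Support/Google/Chrome/Default/Extensions.
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

# Host patterns that grant an extension access to every page you visit,
# including your AI chat sessions.
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

if not EXT_DIR.is_dir():
    raise SystemExit(f"No Chrome profile found at {EXT_DIR}; adjust EXT_DIR.")

# Extensions live at <id>/<version>/manifest.json inside the profile.
for manifest in sorted(EXT_DIR.glob("*/*/manifest.json")):
    data = json.loads(manifest.read_text(encoding="utf-8"))
    # Gather host access from MV2 "permissions", MV3 "host_permissions",
    # and any content-script match patterns.
    grants = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
    for script in data.get("content_scripts", []):
        grants |= set(script.get("matches", []))
    hits = grants & BROAD
    if hits:
        # "name" may be a localized placeholder like __MSG_appName__;
        # use the extension ID (the first path component) to match it
        # against what chrome://extensions shows.
        ext_id = manifest.parts[-3]
        print(f"{ext_id}  {data.get('name', '?')}: {sorted(hits)}")
```

Broad host access isn't proof of abuse; plenty of legitimate extensions need it. But anything this script flags can read your chats, so it deserves the scrutiny.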
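And for step 4, a minimal sketch of what "conversations never leave your machine" looks like in practice. It assumes Ollama is installed and serving on its default local port (run `ollama serve`) and that a model has already been pulled, e.g. `ollama pull llama3`; the model name here is just an example, substitute whatever you run.

```python
import json
import urllib.request

# Assumptions: Ollama is serving on its default port (11434) and the
# "llama3" model has been pulled; swap in any model you have installed.
payload = json.dumps({
    "model": "llama3",
    "prompt": "Summarize the GDPR right to object in two sentences.",
    "stream": False,  # return a single JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # request never leaves localhost
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

The same endpoint works for any open-weight model Ollama supports, including the DeepSeek weights mentioned earlier - the privacy properties come from where the model runs, not which model it is.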
Why This Matters
The AI industry is splitting into two camps. One sells privacy as the product - enterprise platforms with contractual guarantees, audit trails, and compliance certifications. The other sells you as the product - free tiers funded by your data, conversations mined for training, and increasingly, ads based on what you discuss with a chatbot.
The gap between these two worlds is widening. And for most people using the free versions of these tools, the default settings ensure their conversations, questions, and creative work flow directly into training pipelines they never agreed to. Colorado’s new Algorithmic Accountability Law, effective this month, is one of the first US laws to give consumers rights to notice, explanation, and appeal when AI systems make decisions about them. But federal privacy legislation still hasn’t caught up.
Until it does, the burden falls on you. Check your settings.