Every time you chat with an AI assistant, you’re handing over more than just your questions. You’re giving away your thought patterns, your concerns, your business ideas, and sometimes highly sensitive information you wouldn’t share with a stranger.
The four major AI assistants - ChatGPT, Gemini, Copilot, and Claude - all handle your data differently, and all of them have made significant changes in 2026. Here’s exactly what each one collects, how long they keep it, and what you can do about it.
The Quick Version
| Service | Trains on Your Data By Default? | Retention Period | Human Review? |
|---|---|---|---|
| ChatGPT | Yes (Free/Go plans) | 30 days | Yes |
| Gemini | Yes | 18 months (configurable) | Yes, up to 3 years |
| Copilot | Varies by plan | Varies | Enterprise: No |
| Claude | Yes (since Sept 2025) | 30 days (5 years if training enabled) | Limited |
ChatGPT: Now With Ads
OpenAI’s February 2026 privacy policy updates brought some notable changes, and they’re a mixed bag.
What they collect:
- Account information (name, email, payment details)
- Device identifiers and IP address
- Your prompts and all uploaded files
- Usage patterns and interaction data
What changed in 2026:
- Free and Go tier users now see personalized ads
- Contact syncing option to find other OpenAI users
- Clearer data control documentation
The ads use your ChatGPT activity to personalize what you see, but OpenAI says your conversations aren't shared with advertisers. Paid tiers (Plus, Pro, Enterprise, Business, and Education) remain ad-free.
The good news: OpenAI doesn’t use your browsing data for model training by default.
How to Opt Out of ChatGPT Training
Desktop:
- Click your profile icon
- Select Settings
- Go to Data Controls
- Toggle off “Improve the Model for Everyone”
Mobile:
- Tap Menu
- Select your account name
- Go to Data Controls
- Toggle off “Improve the Model for Everyone”
Alternative: Use Temporary Chat mode (top-right corner when starting a new chat). These conversations aren’t saved or used for training.
Gemini: The 72-Hour Catch
Google’s Gemini has some of the most complex data practices of the bunch, and a few catches that aren’t immediately obvious.
What they collect:
- Your prompts and conversations
- Your Google account activity (if Personal Intelligence is enabled)
- Device and location data
- Voice data if you use voice features
Key changes in 2026:
- “Gemini Apps Activity” renamed to “Keep Activity”
- Personal Intelligence feature launched January 2026 - can access your Gmail, Photos, YouTube history, and Search activity
- Temporary Chat feature added
The catch: Even with activity tracking disabled, Google retains your data for up to 72 hours for "stability and abuse detection." And here's the worst part: conversations reviewed by human annotators are retained for up to three years, even if you delete your activity.
How to Opt Out of Gemini Training
Desktop:
- Open Gemini
- Click the hamburger menu
- Select Activity
- Click “Turn Off”
- Choose “Turn Off and Delete Activity”
- Click Next, then Delete
Mobile:
- Tap your profile icon
- Select “Gemini Apps Activity”
- Tap “Turn Off”
- Choose “Turn Off and Delete Activity”
- Tap Next, then Delete
Personal Intelligence: This is off by default. If you enabled it, go to Gemini Settings to disable access to your Gmail and other Google services.
Copilot: The Privacy Breach That Shouldn’t Have Happened
Microsoft Copilot made headlines this month for all the wrong reasons. A bug allowed Copilot to access and summarize confidential emails that users had marked as restricted under data loss prevention (DLP) policies.
The bug affected Copilot's "work tab" chat feature, which since January 2026 could read and outline email contents - including those labeled "confidential." Microsoft has since patched the issue, but it exposed a fundamental risk: AI assistants with broad access to your data can bypass security controls in unexpected ways.
What they collect:
- Your prompts and AI responses
- Files you reference or upload
- Activity within Microsoft 365 apps
Enterprise vs Consumer: For enterprise users, Microsoft doesn’t use your data to train foundation models, and data stays within your Microsoft 365 tenant. Consumer Copilot has different rules.
How to Opt Out of Copilot Training
Desktop:
- Click your profile icon
- Select your account name
- Click Privacy
- Toggle off “Model Training on Text”
- Toggle off “Model Training on Voice”
U.S. House of Representatives: The House banned congressional staff from using Copilot earlier this year over data security concerns. If Congress doesn't trust it with their data, consider what that means for yours.
Claude: The Default That Flipped
Anthropic made a significant change in August 2025 that many users missed: Claude now trains on your conversations by default for Free, Pro, and Max plans.
Before this change, Claude was one of the more privacy-friendly options - your data was deleted within 30 days and wasn’t used for training. Now, if you allow training, your data can be retained for up to five years.
What changed:
- Training on user data is now opt-out, not opt-in
- Data retention extended from 30 days to 5 years for training-enabled accounts
- Only applies to consumer plans - API, Claude for Work, Education, and Gov aren’t affected
The silver lining: Anthropic says they de-link your data from your user ID before using it for training and apply filters to remove sensitive information.
How to Opt Out of Claude Training
- Go to claude.ai
- Click your profile icon
- Select Settings
- Click Privacy Settings
- Toggle off data sharing for model training
If you created your account before September 28, 2025, you should have seen a pop-up asking about your preference. If you clicked through without reading (no judgment), check your settings now.
What This Actually Means for You
The pattern is clear: AI companies are moving toward using consumer data for training, while enterprise customers pay for privacy.
The two-tier system:
- Consumers using free or individual paid plans get their conversations used for training, reviewed by humans, and retained for years
- Enterprise customers get contractual privacy guarantees, shorter retention, and no training on their data
What you’re trading: When you use these services without opting out, you’re potentially contributing:
- Your writing style and vocabulary
- Your business ideas and strategies
- Your personal concerns and health questions
- Your code and technical approaches
All of this becomes training data that helps these companies build better products - products they’ll sell back to you.
Practical Steps to Protect Yourself
1. Audit your settings now. Go through each AI service you use and verify your privacy settings. Defaults change.
2. Use temporary/incognito modes. ChatGPT, Gemini, and Claude all offer temporary chat modes where conversations aren't saved.
3. Don't share sensitive data. No AI service is truly private. Treat every conversation as potentially public.
4. Consider the enterprise option. If you're using AI for business, paid enterprise plans often come with actual privacy guarantees and data processing agreements.
5. Check settings regularly. All four companies have changed their policies in the past six months. They'll change them again.
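The "don't share sensitive data" step can be partly automated. Below is a minimal, illustrative sketch of a scrub pass you might run over text before pasting it into any AI chat. The regex patterns and placeholder labels are assumptions for demonstration, not an exhaustive PII filter - real data loss prevention is much harder than three regexes.

```python
import re

# Illustrative patterns only - these catch common, obvious formats
# (emails, US-style phone numbers, SSN-like strings), not all PII.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each pattern match with its placeholder label."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or 555-867-5309."
    print(scrub(sample))  # Reach me at [EMAIL] or [PHONE].
```

A pass like this is a seatbelt, not a guarantee: names, addresses, and business context sail straight through, which is why the safer habit remains treating every conversation as potentially public.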
The Bottom Line
None of these AI assistants are truly private at the consumer level. All of them collect substantial data about you, and three of the four now use that data for training by default.
The question isn’t whether to use them - they’re genuinely useful tools. The question is whether you’re making informed decisions about what you share and whether you’ve taken the basic steps to limit data collection.
Take 10 minutes this week to review your settings across all the AI tools you use. Your future self will thank you.