A week ago, we dug into how Microsoft, LinkedIn, and Google were quietly reshaping their AI privacy policies. This week brings a fresh batch of developments that make that roundup look almost quaint: ChatGPT’s data collection has ballooned by 70%, Anthropic wants your government ID to use Claude, and GitHub is about to start training on your Copilot interactions unless you opt out by April 24.
Here’s what changed, what it means, and what you can do about it.
ChatGPT’s Data Appetite Jumped 70% in One Year
A 2026 Surfshark analysis of AI chatbot privacy practices found that OpenAI’s ChatGPT now collects 17 out of 35 possible data categories — up from 10 last year. That’s a 70% increase in the types of data the app hoovers up from your device.
The new categories include coarse location, health and fitness data, search history, audio data, advertising data, and customer support interactions. Most of this (14 categories) is framed as “app functionality,” but Surfshark found the data also feeds analytics (7 categories), product personalization (4), OpenAI’s own marketing (3), and third-party advertising (2).
ChatGPT isn’t the worst offender, though. That distinction belongs to Meta AI, which collects a staggering 33 out of 35 possible data types — nearly 95% of everything it could possibly collect. Meta AI remains the only chatbot that collects financial information, and it also gathers sensitive data including racial and ethnic information, sexual orientation, religious beliefs, and biometric data.
Google Gemini sits between them at 23 data types, including contact information and precise location. On the lighter end, Claude and DeepSeek each collect 13 data types.
The industry trend line is clear: 70% of AI chatbots now collect user location data, up from 40% a year ago.
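Those headline percentages are easy to verify yourself. Here's a quick Python sanity check; the category counts are Surfshark's figures as cited above, and the script just does the division:

```python
# Sanity-checking the Surfshark figures cited above.
# Counts come from the article; the math is simple percentage arithmetic.

MAX_CATEGORIES = 35

counts = {
    "ChatGPT (2026)": 17,
    "ChatGPT (2025)": 10,
    "Meta AI": 33,
    "Google Gemini": 23,
    "Claude": 13,
    "DeepSeek": 13,
}

# ChatGPT's year-over-year growth: (17 - 10) / 10 = 70%
growth = (counts["ChatGPT (2026)"] - counts["ChatGPT (2025)"]) / counts["ChatGPT (2025)"]
print(f"ChatGPT increase: {growth:.0%}")  # -> 70%

# Share of the maximum each chatbot collects
for name, n in counts.items():
    print(f"{name}: {n}/{MAX_CATEGORIES} = {n / MAX_CATEGORIES:.0%}")
# Meta AI lands at 94%, i.e. "nearly 95%" of everything it could collect
```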
Anthropic Wants Your Passport to Use Claude
In a move that caught many users off guard, Anthropic quietly published identity verification requirements for Claude this week. Certain users will now need to hand over a government-issued photo ID — a passport, driver’s license, or national ID card — plus a live selfie to continue accessing the platform.
No other major AI chatbot requires this. Not ChatGPT. Not Gemini. Not Copilot.
Anthropic says verification kicks in when accessing “certain capabilities,” during “routine platform integrity checks,” or for safety and compliance purposes. The company partnered with Persona, a third-party identity verification service, to handle the process. Verification data goes to Persona’s servers, not Anthropic’s, and the company says it won’t be used for model training.
The irony is hard to miss. Many privacy-conscious users switched to Claude specifically because Anthropic positioned itself as the more trustworthy option. Now those users are being asked to hand over their most sensitive identification documents to a third-party service.
To complicate things further, Anthropic has also been flagging adult users as minors and suspending their accounts, suggesting the verification system may have accuracy issues out of the gate.
GitHub Starts Training on Your Code April 24
If you use GitHub Copilot on a Free, Pro, or Pro+ plan, your interaction data — code snippets, file names, repository structure, navigation patterns — will be used for AI model training starting April 24 unless you opt out.
GitHub announced the policy change on March 25, giving users 30 days to opt out before the default switches. This is a classic dark pattern: opt-out rather than opt-in, with a deadline that many developers won’t see until it’s too late.
There are some protections. If you previously opted out of data collection for “product improvements,” GitHub says your preference carries over. Copilot Business and Enterprise users are exempt. And if your personal account is a member of or collaborator with a paid organization, your interaction data is excluded from training even on personal Copilot plans.
But for the millions of developers on free or personal Pro plans, the clock is ticking.
How to opt out: Go to github.com/settings/copilot/features and disable “Allow GitHub to use my data for AI model training” under the Privacy heading. Do this before April 24.
The Anthropic-Pentagon Saga: Privacy as Political Football
The most consequential AI privacy story of 2026 has been playing out in courtrooms rather than app settings pages.
Anthropic signed a $200 million contract with the Pentagon in 2025 but insisted on guardrails: no mass surveillance of Americans, no fully autonomous weapons. When the DOD demanded unrestricted access to Claude, Anthropic refused.
The Trump administration’s response was swift and punitive. On February 27, federal agencies and military contractors were ordered to halt all business with Anthropic. The DOD designated Anthropic a “supply chain risk” in March, effectively branding an American company a national security threat for disagreeing with the government.
A federal judge blocked those measures in late March, writing that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
The EFF made a broader point: privacy protections shouldn’t depend on the ethical backbone of individual companies. Anthropic drew a line this time, but the next company might not.
Meanwhile, OpenAI signed its own Pentagon deal with no such restrictions.
Vercel Joins the Training Queue
Developer platforms are lining up to use your work for AI training. Vercel updated its terms of service in March, opting Hobby plan users into AI model training by default. If you didn't opt out by March 31, your data may already be in the pipeline.
Paid Pro plans are opted out by default, and Enterprise customers are fully excluded. But for the large population of developers on free tiers, the message is clear: you’re the product.
What You Can Do
The opt-out landscape shifts constantly. Here’s your April 2026 checklist:
ChatGPT: Go to Settings → Data Controls → toggle off “Improve the model for everyone.” Use Temporary Chat for sensitive conversations. Note that this doesn’t stop the 17 categories of device-level data collection.
Claude: There’s no opt-out for the new ID verification requirement; it triggers based on Anthropic’s internal criteria. If you’re uncomfortable with it, consider using the API directly, which doesn’t require ID verification (see the sketch after this checklist), or self-hosting an alternative model.
GitHub Copilot: Visit github.com/settings/copilot/features and disable AI model training before April 24, 2026.
Meta AI: You can’t opt out of most data collection while using the app. The only real protection is not using it. If you must, avoid sharing financial, health, or personally identifying information.
Vercel: Check your Team Settings → Data Preferences to confirm your opt-out status. Consider upgrading to a paid plan for default exclusion.
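For the Claude item above, here's a minimal sketch of the direct API route, assuming Anthropic's official Python SDK (pip install anthropic) and an API key in the ANTHROPIC_API_KEY environment variable. The model ID below is illustrative; check Anthropic's documentation for current names:

```python
# Talking to Claude via the API instead of the consumer app.
# Assumes the official `anthropic` SDK and ANTHROPIC_API_KEY in the environment.

import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative; substitute a current model ID
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Summarize the GDPR in three sentences."},
    ],
)

print(message.content[0].text)
```

API usage is still governed by Anthropic's commercial terms; it simply sidesteps the consumer app's verification flow.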
General hygiene:
- Use a VPN to limit location data collection
- Review app permissions regularly — revoke microphone, location, and health data access for AI apps that don’t need them
- Use browser-based versions over mobile apps when possible (apps collect significantly more data)
- Consider local alternatives: Ollama for chat, Whisper for transcription, Stable Diffusion for image generation (see the Ollama sketch below)
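To give a sense of how little ceremony the local route involves, here's a minimal sketch against Ollama's local REST API, assuming Ollama is installed, serving on its default port, and you've pulled a model (e.g. `ollama pull llama3`):

```python
# Local-only chat via Ollama's REST API. Assumes `ollama serve` is running on
# its default port and a model has been pulled (e.g. `ollama pull llama3`).
# The request never leaves localhost.

import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any model you've pulled locally
        "prompt": "Explain opt-out vs. opt-in data collection in two sentences.",
        "stream": False,    # return a single JSON object rather than a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Nothing in that exchange touches a cloud service, which is exactly the structural protection the rest of this checklist can only approximate.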
The Pattern
Every AI company follows the same playbook: launch with minimal data collection to earn trust, then gradually expand what they collect once the user base is locked in. ChatGPT’s 70% increase in data categories happened without fanfare. GitHub waited until Copilot was deeply embedded in developer workflows before switching the training defaults. Meta AI launched already collecting everything.
The companies that collect the least data today won’t necessarily stay that way tomorrow. The only durable protection is structural: using local models when possible, minimizing what you share with cloud services, and staying vigilant about policy changes that land quietly in settings pages and changelog posts.