Two days ago, the FTC started enforcing the biggest update to children’s online privacy rules in over a decade. Today, GitHub starts training on your code by default. And Google is in court for quietly turning on Gemini to scan Gmail.
Here’s what changed, who’s exposed, and exactly how to lock down every AI platform you use.
COPPA Gets Teeth — Finally
The Children’s Online Privacy Protection Act hasn’t had a real update since 2013. On April 22, the FTC began enforcing sweeping amendments that directly target AI companies for the first time.
The big changes:
Biometric data is now personal information. Voiceprints, facial templates, and fingerprints all fall under COPPA’s protection. Every voice assistant — Siri, Alexa, Google Assistant, Meta AI — that processes a child’s voice now generates data covered by the rule.
AI training requires separate consent. The FTC’s commentary explicitly states that disclosing a child’s personal information “to train or otherwise develop artificial intelligence technologies” is not integral to a website’s core service and requires separate, verifiable parental consent. No more bundling AI training into a general Terms of Service click-through.
No more indefinite data hoarding. Companies must now maintain written data retention policies with specific time periods. The days of keeping kids’ data forever “just in case” are supposed to be over.
Targeted advertising needs its own opt-in. Operators need separate parental consent specifically for serving targeted ads to children and for sharing children’s data with third parties.
The penalty for violations: up to $51,744 per incident, per day.
Whether the FTC actually enforces this against the big AI companies is another question. But the legal framework is there now, and it makes training AI on children’s data without explicit parental consent a clear violation rather than a gray area.
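To make the penalty cap concrete, here's a rough back-of-envelope sketch. The $51,744 figure is the cap cited above; the incident and day counts are purely hypothetical, chosen only to show how "per incident, per day" compounds:

```python
# Hypothetical exposure estimate under the amended COPPA penalty cap.
# $51,744 is the per-violation cap cited above; the incident and day
# counts below are made-up illustration values, not real case data.
PENALTY_CAP = 51_744

def max_exposure(incidents: int, days: int) -> int:
    """Upper-bound penalty: the cap applied per incident, per day."""
    return PENALTY_CAP * incidents * days

# e.g. 1,000 children's records retained 30 days past a written policy
print(f"${max_exposure(1_000, 30):,}")  # → $1,552,320,000
```

Even a modest violation count crosses the billion-dollar mark in a month, which is why the written retention-policy requirement matters: it fixes the date the clock starts.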
Google’s Silent Switch
While the FTC was tightening rules around consent, Google apparently went the other direction.
A class action lawsuit filed in November 2025 alleges that on October 10, 2025, Google flipped a switch that enabled Gemini AI across all Gmail, Chat, and Meet accounts — without telling anyone. The “Smart Features” toggle that had been opt-in was allegedly turned on by default, letting the AI scan private communications containing financial records, medical information, and personal conversations.
The plaintiff, Thomas Thele, alleges this violates the California Invasion of Privacy Act, the Stored Communications Act, and California’s constitutional right to privacy. The suit covers every U.S. Google account holder affected after the switch.
Google says no policy changes occurred and that Gmail data isn’t used to train Gemini. But there’s a distinction worth noting: Google acknowledges that Gemini can “scan and process emails for personalization features” like summarizing messages and drafting replies — even if that data doesn’t feed directly into model training. For users who never consented to AI reading their email in the first place, the training question is secondary.
The OECD flagged this as an AI privacy hazard in April 2026, noting the potential for “privacy harms and violations of rights if misused or if data is exposed.”
The case (Thele v. Google LLC, 5:25-cv-09704, N.D. Cal.) is in its early stages with no class certification yet.
How to check your settings: Go to myactivity.google.com/product/gemini, then check your Gmail settings for the “Smart features and personalization” toggle. Turn both off if you didn’t consent to AI processing your email.
GitHub’s Deadline Is Today
Starting April 24, all GitHub Copilot Free, Pro, and Pro+ users will have their interaction data — prompts, code snippets, suggestions, file context — used to train AI models by default. We covered this in our last privacy audit, but today is the actual deadline.
If you haven’t opted out yet: go to GitHub Settings → Copilot → Privacy, and set “Allow GitHub to use my data for AI model training” to Disabled.
The Opt-Out Audit: April 2026 Edition
We checked the default privacy settings across every major AI platform. Here’s where things stand, what’s changed, and exactly where to go to fix it.
ChatGPT — Still Training by Default
Default: Your conversations train OpenAI’s models, even on the $20/month Plus plan.
Nothing has changed here since our last audit. ChatGPT still collects 24 data categories and uses your conversations for model improvement unless you manually opt out. Using ChatGPT without logging in? Your data gets collected regardless.
How to opt out: Settings → Data Controls → “Improve the model for everyone” → Off.
Actually private option: Use Temporary Chat mode (toggle at the top of a new chat). These aren’t saved, aren’t used for training, and are purged within 30 days. Or pay for ChatGPT Team ($30/user/month), which prohibits training by contract.
Claude — Opt-Out by Default, But Read the Fine Print
Default: Conversations are not used for training. Data is purged within 30 days.
Anthropic remains the only major AI company that defaults to not training on your data. If you actively opt in to model improvement, though, your data can be retained for up to five years — a detail buried in the September 2025 policy update.
Where to check: Privacy Settings → “Help Improve Claude” should be off.
The ID situation: As we reported last week, Anthropic now asks some users for government ID verification through Persona. This is for access control, not training — but it means Anthropic (through its vendor) potentially holds a copy of your passport or driver’s license.
Gemini — The Most Complicated
Default: Google retains conversations and may use them to improve AI models. Human reviewers can see your conversations.
Google’s own guidance is telling: they advise users not to enter anything they wouldn’t want a human reviewer to see. Even with Gemini Apps Activity turned off, Google holds conversations for up to 72 hours.
How to opt out: Go to myactivity.google.com/product/gemini → Turn off Gemini Apps Activity. Also check your Gmail and Google Workspace settings for the “Smart features” toggle.
Microsoft Copilot — Two Toggles to Find
Default: Trains on your text and voice data.
How to opt out: Profile icon → Profile name → Privacy → Set both “Model training on text” and “Model training on voice” to Off.
GitHub Copilot — Deadline Day
Default (as of today): Your interaction data trains models.
How to opt out: GitHub Settings → Copilot → Privacy → “Allow GitHub to use my data for AI model training” → Disabled.
The Paid Plan Illusion
One pattern that keeps showing up: paying for an AI subscription does not protect your privacy. ChatGPT Plus ($20/month), Claude Pro ($20/month), and Copilot Pro all still collect and potentially train on your data unless you manually change settings.
The only plans that contractually prohibit training on your data are the business tiers:
- ChatGPT Team: $30/user/month
- Claude Team: $25/user/month
- GitHub Copilot Business: $19/user/month
If you’re using AI for anything sensitive — client work, medical questions, legal research, financial data — the consumer plans aren’t designed to protect you.
What You Can Do Right Now
- If you have kids using voice assistants, check whether your devices are COPPA-compliant. The FTC can now fine companies $51,744/day for violations, but enforcement depends on complaints.
- Check your Google account. The Gemini “Smart Features” toggle may be on without your knowledge. Visit myactivity.google.com/product/gemini and your Gmail settings today.
- Opt out on GitHub before midnight. Today is the deadline for the new Copilot training policy. Settings → Copilot → Privacy.
- Audit every AI tool you use. Walk through the opt-out steps above. It takes five minutes across all platforms.
- Consider your plan tier. If you’re sharing anything sensitive with AI chatbots, consumer plans — even paid ones — don’t protect you. Business tiers are the only ones with contractual guarantees.
- Use temporary/incognito modes. For one-off questions you don’t want stored, ChatGPT’s Temporary Chat and Claude’s default 30-day purge are your friends.
The gap between what AI companies say about privacy and what their defaults actually do keeps widening. Every platform audited here — except Claude — defaults to collecting and using your data. And even Claude’s privacy edge comes with caveats around ID verification and the opt-in retention window.
The COPPA update is a step forward. The Google lawsuit might set precedent for what “consent” actually means in AI. But for now, protecting your privacy in AI still requires you to manually opt out, platform by platform, toggle by toggle.