Microsoft charges $30 a month for Copilot Pro. It also says, in bold capital letters in its own terms of service, that Copilot is “for entertainment purposes only” and that you shouldn’t “rely on Copilot for important advice.” Meanwhile, LinkedIn is running hidden scripts that scan your browser for over 6,000 extensions, and Google allegedly flipped on Gemini’s access to 130 million Gmail accounts without asking.
This is where AI privacy stands in April 2026. The companies building these tools don’t trust their own products — but they absolutely trust themselves with your data.
Microsoft: “Entertainment Only” (But Pay Us $30)
The terms of service for Microsoft Copilot contain a section labeled “IMPORTANT DISCLOSURES & WARNINGS” in bold capitals. It reads: “Copilot is for entertainment purposes only.” The clause goes on to warn users not to rely on Copilot for important advice.
This language was updated in October 2025 but went largely unnoticed until early April 2026, when it surfaced on social media and went viral. The contradiction is hard to miss: Microsoft has spent billions integrating Copilot into Windows 11, Office, and its enterprise tools, marketing it as a productivity revolution. It charges consumers up to $30 per month.
A Microsoft spokesperson called the language “legacy phrasing” from Copilot’s origins as a Bing Chat companion, promising it “will be altered with our next update.”
The privacy angle matters here. The “entertainment only” clause applies to consumer Copilot products — the same ones that collect your prompts, track usage patterns, and feed data through Microsoft’s infrastructure. Enterprise customers get different terms and different protections. If you’re paying as an individual, Microsoft’s own legal documents say the tool isn’t meant for serious use, while the tool itself keeps collecting everything you type into it.
Meanwhile, Microsoft has quietly begun scaling back Copilot branding across Windows 11. The Recall feature — which captured screenshots of your screen every few seconds — was demoted from default-on to opt-in after a wave of privacy backlash. The taskbar Copilot button now defaults to off. Copilot branding is being stripped from Paint and Notepad. It’s a rare retreat, and it tells you something about how far things had gone.
LinkedIn’s BrowserGate: 6,000 Extensions Scanned Without Consent
An investigation by Fairlinked e.V., a European association of LinkedIn professionals, found that LinkedIn injects a 2.7-megabyte JavaScript bundle into its website that silently scans visitors’ browsers for over 6,000 Chrome extensions. The script also assembles a detailed fingerprint of your hardware — CPU cores, memory, screen resolution, language settings, time zone, battery status — encrypts the package, and sends it to LinkedIn’s servers.
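Neither half of this requires special browser permissions, which is why it can run silently. Below is a minimal TypeScript sketch of how such a scan typically works, based on the standard techniques for extension detection and device fingerprinting; the extension ID and resource path are hypothetical placeholders, and none of this is LinkedIn’s actual code.

```typescript
// Hypothetical extension probe. Many extensions expose "web accessible
// resources" at a fixed chrome-extension:// URL; if the fetch resolves,
// the extension is installed, and if the browser blocks it, it isn't.
async function probeExtension(id: string, resource: string): Promise<boolean> {
  try {
    await fetch(`chrome-extension://${id}/${resource}`);
    return true;
  } catch {
    return false;
  }
}

// Hardware fingerprint assembled from standard browser APIs. None of
// these calls triggers a permission prompt, so the user sees nothing.
async function collectFingerprint() {
  const battery = await (navigator as any).getBattery?.(); // Chromium only
  return {
    cpuCores: navigator.hardwareConcurrency,
    memoryGb: (navigator as any).deviceMemory,
    screen: `${screen.width}x${screen.height}`,
    language: navigator.language,
    timeZone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    batteryCharging: battery?.charging,
  };
}
```

Loop probeExtension over a list of a few thousand known extension IDs, attach the fingerprint, encrypt, and send: that is the whole operation in outline.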
The scale of this operation has been growing exponentially. LinkedIn tracked 38 extensions in 2017. By 2024, that number hit 461. By February 2026: 6,167 extensions. That’s more than a thirteenfold jump in two years (6,167 ÷ 461 ≈ 13.4).
What makes this particularly invasive is what the data reveals. Researchers found that by mapping which extensions you use, LinkedIn can infer your religious beliefs, political views, and whether you’re neurodivergent. The platform also scans over 500 job-seeking tools — which means if you’re quietly looking for a new position, your current employer could potentially learn about it through LinkedIn’s own systems.
LinkedIn’s response: the scanning “prevents fraud and scraping while maintaining platform stability.” The practice is not disclosed in LinkedIn’s privacy policy.
Google’s Silent Switch: Gemini Activated on 130 Million Accounts
In November 2025, Thomas Thele filed a class action lawsuit alleging that Google activated Gemini’s “Smart Features” across Gmail, Chat, and Meet accounts without user consent. The lawsuit, filed in the Northern District of California, claims Google flipped the switch on or around October 10, 2025, enabling its AI to analyze private communications for an estimated 130 million American Gmail users.
Thele says he never turned the setting on, was never notified of the change, and never consented. The lawsuit alleges violations of the California Invasion of Privacy Act, the California Computer Data Access and Fraud Act, the Stored Communications Act, and California’s constitutional right to privacy.
Google has also rolled out what it calls a “Personal Intelligence” dashboard — a centralized interface where Gemini can access data across Gmail, Drive, Maps, and Calendar. The convenience framing is intentional: by grouping all permissions under one roof, Google makes cross-service data access feel like a feature rather than a surveillance upgrade.
The case (Thele v. Google LLC, Case No. 5:25-cv-09704) is in early stages and could take years to resolve. But the underlying allegation — that a company remotely activated AI analysis of private communications without asking — sets a troubling precedent regardless of the legal outcome.
The Privacy Scorecard: Nobody Passes
An independent privacy audit by Terms.law, published in January 2026, graded the major AI chatbots on data protection. The results are grim:
OpenAI (ChatGPT): 48/100 — Grade D
- Data Collection Scope: 35/100
- Third-Party Sharing: 40/100
- Retention & Deletion: 35/100
- User Control & Consent: 42/100
ChatGPT trains on your conversations by default. The opt-out toggle is buried in settings. Even with chat history turned “off,” OpenAI retains conversations for 30 days. Deleted data may persist in model weights with no way to extract it. And a federal court has ordered OpenAI to preserve training data in the New York Times copyright case, which may block deletion requests entirely.
Anthropic (Claude): 65/100 — Grade C
- Data Collection Scope: 55/100
- Third-Party Sharing: 58/100
- Retention & Deletion: 50/100
- User Control & Consent: 55/100
Claude earned the highest score in the AI category, though a C grade still means significant privacy gaps. Anthropic’s training opt-out is more accessible than competitors’, but conversations flagged as helpful may still be retained for model improvement.
Meta AI: No independent opt-out
Meta confirmed it trains Llama models on public Facebook and Instagram posts. In Europe, regulators forced a pause and an opt-out mechanism. In the United States, there is no general opt-out. Starting December 2025, Meta began using conversation data to personalize ads.
None of these companies earned above a C.
Data Brokers, Driving Data, and the Expanding Pipeline
The AI privacy problem extends beyond chatbots. Texas Attorney General Ken Paxton sued Allstate and its subsidiary Arity for building what the complaint calls “the world’s largest driving behavior database” by secretly embedding tracking software in popular apps like Life360 and GasBuddy. The collected data — trillions of miles of location data from over 45 million Americans — was then sold to insurance companies to justify higher premiums.
This is the supply chain that feeds AI: data collected through one app, processed by a broker, sold to a third party, and used to make decisions about you. The AI chatbot on your phone is just the most visible node in a much larger network.
The EFF has called on AI chatbot companies to protect conversations from bulk surveillance, noting that people share everything from medical questions to political opinions in chatbot conversations — sensitive data that reveals far more than a typical web search.
The Bright Spots (Sort Of)
Not everything is trending worse. Apple updated its App Store guidelines to require explicit disclosure when apps share data with third-party AI systems. Guideline 5.1.2(i) now demands that developers name the specific AI provider — no more hiding behind vague “service provider” language. Apps that don’t comply face removal.
Microsoft’s Copilot retreat in Windows 11 shows that user backlash works. Recall is now opt-in. The Copilot button defaults to off. It took public outrage to get there, but the changes are real.
And the Texas lawsuit against Allstate marks the first time a state attorney general has enforced a comprehensive data privacy law against a data broker — a signal that regulatory patience may be running out.
What You Can Do
Right now:
- ChatGPT: Settings > Data Controls > “Improve the model for everyone” > Turn OFF. Use Temporary Chat for sensitive conversations.
- Google Gemini: Check the Personal Intelligence dashboard. Disable cross-app access you didn’t authorize. Set activity auto-delete to the shortest period.
- Microsoft Copilot: In Windows 11, check Settings > Privacy. Disable Recall if enabled. Turn off the Copilot taskbar button. Consider whether a $30/month “entertainment” tool deserves access to your documents.
- LinkedIn: Use a privacy-focused browser or browser profile for LinkedIn. Consider extensions like uBlock Origin that can block tracking scripts. Review your LinkedIn privacy settings under Settings > Data Privacy.
- Meta AI: If you’re in the EU, submit an opt-out request. In the US, your options are limited — consider using Meta’s platforms less, or not at all, for anything you wouldn’t want fed to an ad targeting model.
Longer term:
- Support organizations like the EFF that fight for AI privacy rights
- Back state-level privacy legislation — it’s working faster than federal action
- Use local AI alternatives where possible for sensitive tasks (Ollama, local Whisper, self-hosted RAG); see the sketch after this list
- Treat every AI chatbot conversation as potentially permanent and public
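On the local-AI option above: a machine running Ollama, for example, exposes a REST endpoint on localhost port 11434, so prompts and replies never leave the device. A minimal sketch, assuming a model has already been pulled with “ollama pull llama3” (the model name is just an example):

```typescript
// Query a locally running Ollama server; nothing is sent off-device.
async function askLocal(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false returns a single JSON object instead of a token stream
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // the model's full reply
}

askLocal("Draft a note to my doctor about recurring migraines.")
  .then(console.log);
```

The same logic extends to local Whisper for transcription and self-hosted RAG for document search: run the model where the sensitive data already lives.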
The pattern is clear: AI companies collect as much as they can by default, bury the opt-out in settings, and rely on most people never checking. The legal terms tell you everything about how seriously they take your data — and how seriously they don’t take their own products.