Something unusual happened in the app stores this weekend: ChatGPT users started leaving en masse. Not because of a bug, not because of a pricing change, but because of where the company behind their AI assistant chose to draw ethical lines - or rather, where it chose not to.
ChatGPT uninstalls surged 295% day-over-day on Saturday, February 28, according to TechCrunch - nearly quadruple the previous day’s count, against a typical day-over-day swing of about 9%. Meanwhile, Claude downloads jumped 51% and the app hit No. 1 on the U.S. App Store, where it remains as of Monday. One-star reviews for ChatGPT surged 775% on Saturday, then doubled again on Sunday.
The trigger: OpenAI’s Pentagon deal, announced just hours after the Trump administration blacklisted competitor Anthropic for refusing similar terms.
What OpenAI Agreed To
OpenAI secured a deal worth up to $200 million with the Department of Defense, which the company claims includes the same two restrictions Anthropic had been fighting for: no mass domestic surveillance and no fully autonomous weapons. CEO Sam Altman himself admitted the deal was “definitely rushed” and acknowledged that “the optics don’t look good.”
The devil, as always, is in the details. The contract language permits use for “all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” Legal experts have already flagged this as fundamentally different from what Anthropic demanded.
“The published excerpt does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use,” Jessica Tillipman of George Washington University’s law school told reporters. The contract simply bars the Pentagon from using OpenAI’s tech to break the law.
OpenAI’s contract does pin its surveillance and autonomous weapons restrictions to the laws “as they exist today,” freezing those two standards against future legal changes. But for everything else, what counts as lawful can shift - and a legal baseline, frozen or not, is not the same as veto power over uses the company finds objectionable. OpenAI accepted the “all lawful purposes” language that Anthropic walked away from.
What Anthropic Refused
Anthropic CEO Dario Amodei rejected what the Pentagon called its “best and final offer,” stating publicly: “We cannot in good conscience accede to their request.”
The company’s red lines were explicit: no use of Claude for mass domestic surveillance, no fully autonomous weapons that fire without human involvement. When the Pentagon demanded unrestricted military access, Anthropic said no.
For that refusal, the government designated Anthropic a “supply chain risk” - a classification typically reserved for companies tied to adversarial nations, such as Huawei. The Pentagon ordered federal agencies and military contractors to certify that they don’t use Claude in their workflows. Anthropic is challenging the designation in court.
Why This Matters for Regular Users
You might wonder why a government contract dispute would drive consumer behavior. The answer lies in what these deals reveal about how companies view their users’ data.
The “all lawful purposes” language that OpenAI accepted creates a framework in which user interactions could, in theory, inform military applications. The specific contract may have carve-outs, but the precedent is what worries privacy advocates: a company willing to grant the government every lawful use of its technology today may find those uses expanding tomorrow as laws change.
Lucas Hansen, co-founder of CivAI, explained to DefenseScoop that Claude’s guardrails are “fundamental” to the model’s training - not removable features. Building a version without those safeguards would require entirely separate training, making dual versions economically unfeasible. The upshot: the Claude consumers use carries the same protections Anthropic refused to strip out for the military.
For OpenAI, the architecture is different. The consumer ChatGPT and the government version share underlying systems, and the flexibility that made the Pentagon deal possible is the same flexibility that worries users about where their conversations might end up.
The Contradictions
The situation exposes a fundamental contradiction in the Pentagon’s position: officials simultaneously claim Claude is essential to national security while threatening to blacklist it for maintaining safety guardrails.
A former senior defense official noted that designating Anthropic a supply chain risk would force Palantir to remove Anthropic-supplied components from the Maven Smart System. Every DOD vendor would need to certify that it doesn’t use Claude - a model that reportedly supports the majority of government coders. The official warned the exercise “would not end well.”
There’s also the inconvenient fact that Anthropic already partners with Palantir and AWS to give US intelligence agencies access to Claude - with safeguards in place. The current dispute isn’t about whether AI should support national security; it’s about whether companies can maintain ethical red lines while doing so.
The User Response
The exodus isn’t just visible in aggregate statistics. Pop star Katy Perry shared Claude’s pricing page on X, highlighting the $20/month Pro subscription. Reddit user Adam Lyttle posted his Anthropic invoice alongside his OpenAI cancellation confirmation. The ChatGPT subreddit filled with users urging one another to delete their accounts.
The #CancelChatGPT hashtag spread across platforms, with users posting guides for deleting ChatGPT accounts and migrating to Claude. Anthropic moved quickly to capitalize on the moment, releasing a feature that lets users import their memories from ChatGPT into Claude, reducing switching costs.
Since the start of the year, free active users on Claude have increased by over 60%, and daily sign-ups have quadrupled.
Over 12,000 Tech Workers Push Back
The backlash has become organized. More than 12,000 tech workers signed an open letter urging the Department of Defense and Congress to withdraw Anthropic’s “supply chain risk” designation, calling it an overreach that punishes the company for maintaining ethical boundaries.
Amos Toh, a legal expert at the Brennan Center, emphasized that Anthropic’s restrictions reflect obligations the government already has: the limits help ensure DOD compliance with constitutional protections against mass surveillance of Americans and with international law on autonomous weapons.
Daniel Castro from the Information Technology and Innovation Foundation warned that using extraordinary authorities to pressure firms into abandoning safeguards “could send a chilling signal across the broader tech ecosystem.” Companies may conclude that working with the military requires surrendering independent safeguards.
A national survey found that 50% of respondents view penalizing Anthropic as government overreach that sets a dangerous precedent.
What This Means for You
If you’re a ChatGPT user who hasn’t given much thought to where your AI company’s loyalties lie, the Pentagon deal is a clarifying moment. OpenAI accepted terms that Anthropic refused, and the market is responding.
The question isn’t whether AI should ever be used for defense - that ship sailed long ago. The question is whether companies will maintain meaningful limits on how their technology is used, or whether “all lawful purposes” becomes the default.
For users who care about privacy, the calculus is straightforward: one company said no and faces government retaliation for it. The other company said yes and is now watching its user base walk out the door.
The Bottom Line
The 295% surge in ChatGPT uninstalls represents something new: AI users voting with their feet on questions of ethics and privacy. Whether this becomes a lasting market shift or a momentary blip depends on whether users’ attention outlasts the news cycle - and on whether other companies are watching what happens to those who draw red lines versus those who don’t.