Anthropic's Claude Cowork Triggered a $1 Trillion Selloff. The Privacy Risks Are Worse Than the Stock Losses

Claude Cowork's industry plugins wiped nearly $1 trillion from software stocks in a week. But the real story is a known file-stealing vulnerability Anthropic shipped anyway, and safety guidance that contradicts its own marketing.

Anthropic launched industry-specific plugins for Claude Cowork on January 31, targeting legal, finance, sales, and marketing workflows. Wall Street panicked. A Goldman Sachs basket of US software stocks dropped 6% in a single day - the worst since April’s tariff selloff. Thomson Reuters fell over 15%. Nearly $1 trillion in market value vanished from software and services stocks in under a week.

The financial carnage grabbed headlines. But the more consequential story has nothing to do with stock prices. It’s about what happens when an AI tool designed to read, write, and delete your local files ships with a known vulnerability that lets attackers steal those files - and the company’s own safety page tells you not to use it with the exact data its marketing says it’s built for.

What Claude Cowork Actually Does

Cowork is Anthropic’s agentic AI assistant, designed to act as a digital coworker. It can read files on your computer, organize folders, draft documents, browse the web through a Chrome extension, and connect to third-party applications via the Model Context Protocol (MCP).

The new industry plugins extend these capabilities into specific sectors. The legal plugin automates contract review, NDA triage, compliance workflows, and legal briefings. Finance, sales, and marketing plugins do similar work in their respective fields. Anthropic open-sourced 11 starter plugins on GitHub as templates companies can customize.

The pitch is straightforward: give Claude access to your work files, and it handles the grunt work. Contract review, data analysis, document drafting - the kind of routine tasks that currently justify expensive software subscriptions and billable hours.

That pitch is exactly what spooked Wall Street. If an AI can do basic contract review, why pay Thomson Reuters or LexisNexis for specialized tools? If it can analyze data, why subscribe to FactSet? Market strategist Jim Reid captured the mood: “The market has clearly shifted from the ‘every tech stock is a winner’ mindset to something far more brutal: a true winners and losers landscape.”

The Vulnerability Anthropic Shipped Anyway

Two days after Cowork’s Research Preview launched in mid-January, security researchers at PromptArmor demonstrated that attackers could steal files from users through prompt injection - and the underlying flaw was one Anthropic already knew about.

The attack works like this: an attacker hides malicious instructions inside a .docx file disguised as a harmless “skill” document. The text uses 1-point font, white-on-white coloring, and 0.1 line spacing - invisible to human eyes. When a user connects Cowork to a folder containing this file and asks it to process documents, Claude follows the hidden instructions. It runs a curl command that uploads the largest available file to Anthropic’s File Upload API using the attacker’s credentials, landing the stolen data in the attacker’s Anthropic account.

The researchers tested against multiple Claude models. Even Claude Opus 4.5 was susceptible. The exfiltrated data in their demonstration included financial figures and PII, including partial Social Security numbers.
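The formatting tricks described above leave fingerprints in the document's underlying XML, which means a poisoned file can in principle be caught before an agent ever reads it. Below is a minimal sketch of such a scan, assuming the payload uses the near-invisible font size and white-on-white coloring the researchers described (the `scan_docx` helper and the specific thresholds are illustrative; a production scanner would parse the XML properly rather than pattern-match it):

```python
import re
import zipfile

# Word stores font sizes in half-points, so a 1 pt font appears as
# w:sz val="2". Values of 4 or less (<= 2 pt) are effectively invisible.
SUSPICIOUS_PATTERNS = [
    (re.compile(r'<w:sz\s+w:val="[1-4]"'), "near-invisible font size (<= 2 pt)"),
    (re.compile(r'<w:color\s+w:val="FFFFFF"', re.IGNORECASE), "white-on-white text"),
]


def scan_docx(path):
    """Return a list of hidden-text indicators found in a .docx file.

    A .docx is a ZIP archive; the visible body lives in word/document.xml.
    """
    with zipfile.ZipFile(path) as zf:
        xml = zf.read("word/document.xml").decode("utf-8", errors="replace")
    return [label for pattern, label in SUSPICIOUS_PATTERNS if pattern.search(xml)]
```

A scan like this is cheap to run over a shared folder before connecting it to an agent, though it only catches the specific tricks it knows about; an attacker can hide instructions in ways no regex anticipates, which is why detection is a stopgap rather than a fix.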

Here’s the part that should concern anyone considering Cowork for sensitive work: this wasn’t a new bug. Security researcher Johann Rehberger first disclosed the Files API exfiltration vulnerability to Anthropic via HackerOne in October 2025. Anthropic acknowledged it. When Cowork launched on January 13 - nearly three months later - the API was still vulnerable.

The Safety Page That Contradicts the Marketing

Anthropic’s “Using Cowork Safely” page includes this guidance: “Avoid granting access to local files with sensitive information, like financial documents.”

Read that again. Anthropic is marketing Cowork with plugins specifically built for legal and financial workflows - industries where virtually every document contains sensitive information - while simultaneously telling users not to let it access sensitive files.

The safety page also acknowledges:

  • Cowork presents “unique risks due to its agentic nature and internet access”
  • “Malicious instructions can be hidden in websites, emails, or documents”
  • Claude can “read, write, and permanently delete local files”
  • “The chances of an attack are still non-zero”
  • Users are “fully responsible for all actions Claude takes on their behalf, including financial transactions”

That last point is especially notable. When a known vulnerability lets attackers exfiltrate your files through an AI tool that your company deployed, who bears the liability? According to Anthropic’s terms, you do.

Enterprise Data: Who Controls What

For companies considering Cowork on Team and Enterprise plans, the data governance picture has gaps:

  • Cowork stores conversation history locally on users’ computers, outside Anthropic’s standard data retention policies
  • Cowork activity is not captured in Audit Logs, Compliance APIs, or Data Exports
  • Admins cannot selectively limit Cowork access by user, role, or team - it’s organization-wide only
  • Enterprise plan data is not used for model training by default, but consumer and Pro accounts operate under different rules, with training enabled unless users explicitly opt out

For regulated industries like legal and finance - exactly the sectors these plugins target - the inability to audit Cowork activity or granularly control access is a fundamental compliance problem.

The Market Overreaction vs. the Security Underreaction

The stock selloff was probably overdone. As Wedbush analyst Dan Ives pointed out, enterprises won’t abandon established vendors overnight. Scaling AI tools across large organizations with entrenched processes takes time. Most affected stocks partially recovered by mid-week.

But the security and privacy implications of agentic AI tools accessing local files haven’t received anywhere near the same attention. The PromptArmor research showed that a single poisoned document in a shared folder can compromise an entire Cowork session. In an enterprise setting - where employees routinely share files across teams and departments - the attack surface is enormous.

NYU professor Vasant Dhar called basic legal services “low-hanging fruit” for AI disruption. He’s right. But “low-hanging fruit” in legal and finance means contracts, financial statements, personally identifiable information - exactly the data that prompt injection attacks target.

What You Can Do

If your organization is evaluating Claude Cowork:

  1. Don’t connect it to folders with sensitive data until the prompt injection problem has a verified fix - not just a patch, but a systemic solution
  2. Audit your file sharing practices - any shared folder is a potential injection vector
  3. Check your Anthropic plan tier - consumer and Pro accounts may have your data used for training unless you’ve explicitly opted out
  4. Demand audit capabilities before deploying in regulated environments - if you can’t log what an AI agent does with your files, you can’t demonstrate compliance
  5. Watch for the contradiction - if a vendor markets a tool for handling sensitive documents but warns you not to give it access to sensitive documents, that’s a red flag worth taking seriously
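For the first two items, a practical starting point is a pre-flight scan of any folder you plan to connect. The sketch below flags files containing SSN-shaped strings before access is granted; the `flag_sensitive_files` helper and the single regex are illustrative assumptions, and a real deployment would lean on a proper DLP tool with broader PII patterns:

```python
import re
from pathlib import Path

# Simplistic US SSN shape (e.g. 123-45-6789); real DLP tooling covers far more.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def flag_sensitive_files(folder, max_bytes=1_000_000):
    """Return files under `folder` that contain SSN-shaped strings.

    Reads at most `max_bytes` per file and skips anything unreadable,
    treating binary content as best-effort text.
    """
    flagged = []
    for path in Path(folder).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_bytes()[:max_bytes].decode("utf-8", errors="ignore")
        except OSError:
            continue  # permissions, broken symlinks, etc.
        if SSN_RE.search(text):
            flagged.append(path)
    return flagged
```

Anything this flags should stay out of the folders an agent can reach; the scan tells you what you are about to expose, not that the rest is safe to share.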

The Bottom Line

Wall Street got distracted by the billion-dollar question of whether AI will kill SaaS. The more pressing question is whether the companies building these tools have solved the security problems before shipping them into environments where the data at stake is worth more than stock prices. In Cowork’s case, the answer is clearly no.