Meta Is Recording Every Keystroke Its Employees Make to Train AI Agents

Meta's Model Capability Initiative captures mouse movements, keystrokes, and screenshots from employee computers. The goal: build AI agents that can replace the workers generating the training data.


A memo landed in the inboxes of Meta’s Superintelligence Labs team this week. The company is rolling out software on U.S. employees’ work computers that captures every keystroke, mouse click, and cursor movement, along with periodic screenshots of their screens. The purpose: training AI agents to do their jobs.

The tool is called the Model Capability Initiative, and according to Reuters’ reporting on the internal memo, it runs across a list of work-related apps and websites. Meta says its AI models still struggle with basic tasks like navigating dropdown menus and using keyboard shortcuts. So the company decided the fastest way to fix that is to watch real humans do it.

What’s Being Collected

The MCI system records four types of data from employee workstations:

  • Keystrokes typed across designated work applications
  • Mouse movements and clicks, including navigation patterns
  • Periodic screenshots capturing on-screen content for context
  • App and website activity across Meta’s internal tools
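To make the scope of that list concrete, a capture pipeline like the one described would presumably emit structured events along these lines. This is a hypothetical sketch only — Meta has not published MCI’s actual format, and the `InteractionEvent` class and its field names are invented for illustration:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical event record illustrating the data categories reported
# for MCI: keystrokes, mouse activity, screenshots, and app/site
# context. All names here are invented for illustration.
@dataclass
class InteractionEvent:
    timestamp: float   # seconds since epoch
    event_type: str    # "keystroke" | "mouse" | "screenshot"
    app: str           # application or site in focus
    payload: dict      # type-specific details

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Example: a single keystroke captured in a designated work app.
event = InteractionEvent(
    timestamp=time.time(),
    event_type="keystroke",
    app="internal-wiki",
    payload={"key": "Tab", "modifiers": ["shift"]},
)
record = event.to_json()
```

Even a toy schema like this makes the privacy stakes obvious: each record ties an exact action to an exact application at an exact moment.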

Meta spokesperson Andy Stone told reporters the data is “intended only for model training and not for employee evaluation,” adding that safeguards protect sensitive information from entering the training pipeline.
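Stone’s “safeguards” claim is vague. One common approach to keeping sensitive material out of a training pipeline is pattern-based redaction before storage — a minimal sketch follows, assuming a simple regex filter; Meta’s actual safeguards, whatever they are, have not been made public:

```python
import re

# Hypothetical pre-training scrubber: replaces common sensitive
# patterns (emails, 16-digit card numbers, SSN-like strings) with
# placeholder tokens before a captured text buffer is stored.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){15}\d\b"), "[CARD]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

clean = redact("ping alice@example.com re: card 4111 1111 1111 1111")
# clean == "ping [EMAIL] re: card [CARD]"
```

The catch, as any security engineer will point out, is that regex-style filters are best-effort: passwords typed into the wrong window, confidential project names, and free-form personal details match no fixed pattern.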

Meta has said nothing publicly about whether employees can opt out.

The Bigger Picture: Agent Transformation Accelerator

The keystroke logging is part of something larger. Meta CTO Andrew Bosworth sent a separate memo describing a program called Agent Transformation Accelerator — previously known as “AI for Work.” Bosworth’s vision: “agents primarily do the work and our role is to direct, review and help them improve.”

That’s an unusually blunt admission. Most companies talk about AI “augmenting” human workers. Meta is talking about AI replacing them, with humans relegated to supervisors of autonomous agents.

The timing makes this harder to swallow. Meta plans to cut roughly 10% of its workforce starting May 20, and CEO Mark Zuckerberg has committed up to $135 billion in capital expenditure for 2026, much of it directed at AI infrastructure. The employees generating this training data are, in a very real sense, training the systems meant to make them unnecessary.

Meta’s own spokesperson framed it almost casually: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them.”

What Legal Protections Do Workers Have?

Not many, at least in the U.S.

Federal law sets a remarkably low bar for workplace monitoring. Employers generally only need to provide basic notice that monitoring occurs — and in many states, not even that. There’s no federal requirement to let employees opt out of keystroke logging.

Yale law professor Ifeoma Ajunwa warned that this practice pushes “white-collar monitoring toward a level of continuous oversight more commonly associated with gig-economy roles.” Warehouse workers and delivery drivers have lived under algorithmic surveillance for years. Now that same scrutiny is coming for software engineers and product managers.

Europe is a different story. York University professor Valerio De Stefano noted that European privacy and labor rules “may restrict or bar this kind of tracking.” In Italy, using electronic monitoring to track employee productivity is explicitly illegal. In Germany, courts have only permitted keystroke logging under exceptional circumstances, like suspected criminal activity.

California has been moving toward stronger protections with proposed legislation like AB 1221, which would establish broad workplace privacy regulations. But nothing has passed yet.

Why This Sets a Dangerous Precedent

Meta’s move normalizes something that should alarm anyone who works at a computer.

First, there’s the consent problem. When your employer installs surveillance software on your work machine, the power imbalance makes genuine consent impossible. You can technically quit, but that’s not a real choice for most people.

Second, there’s the scope creep risk. Meta says the data won’t be used for performance evaluations. But once this infrastructure exists, the temptation to use it for productivity scoring, identifying “low performers,” or justifying layoffs is enormous. As De Stefano pointed out, “mere knowledge of monitoring can tilt workplace leverage toward employers” — workers change their behavior when they know they’re being watched, regardless of how the data is officially used.

Third, there’s the industry contagion effect. If Meta does this without meaningful pushback, every company building AI agents will ask the same question: why aren’t we recording our employees too? The model works. The data is valuable. The legal barriers are minimal. Within a year, this could be standard practice across tech.

What You Can Do

If you work at Meta or a similar company:

  • Document what monitoring tools are installed on your devices
  • Check your employment agreement for data collection clauses
  • If you’re in the EU, you likely have stronger rights — consult with a labor attorney about GDPR protections
  • Talk to colleagues. Collective awareness is the first step toward collective action

If you’re following this from outside:

  • Pay attention to your own employer’s AI policies
  • Support workplace privacy legislation in your state
  • Remember that data collected for one stated purpose has a way of being repurposed

If you’re building AI agents:

  • There are ways to generate training data that don’t involve surveilling your own workforce. Synthetic data, consented user studies, and public datasets all exist. The question isn’t whether you can log every keystroke — it’s whether you should.
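As one illustration of the synthetic-data alternative, interaction traces can be generated programmatically from scripted tasks rather than recorded from real workers. A toy sketch — the task scripts, trace format, and timing model here are all invented for illustration:

```python
import random

# Toy synthetic-trace generator: expands a scripted task into a
# plausible sequence of UI events. No human is recorded; variation
# comes from randomized, human-like inter-event delays.
TASKS = {
    "open_settings": ["click:menu", "click:settings"],
    "rename_file": ["click:file", "key:F2", "type:report_v2", "key:Enter"],
}

def synthesize_trace(task: str, rng: random.Random) -> list[dict]:
    trace = []
    t = 0.0
    for action in TASKS[task]:
        kind, _, target = action.partition(":")
        t += rng.uniform(0.1, 0.8)  # randomized delay between events
        trace.append({"t": round(t, 2), "kind": kind, "target": target})
    return trace

trace = synthesize_trace("rename_file", random.Random(0))
# Four events in order: click, key, type, key.
```

Synthetic traces lack the messy realism of genuine human behavior, which is presumably why Meta wants the real thing — but that trade-off is exactly the ethical question builders should be weighing.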

The Bottom Line

Meta wants to build AI agents that can do office work without humans. To get there, it’s recording how humans actually do that work — every click, every keystroke, every screen. The employees generating this data have no apparent way to opt out, and the legal system in the U.S. offers them minimal protection.

This isn’t a hypothetical privacy concern. It’s happening right now, to tens of thousands of workers, at one of the largest tech companies on Earth. The question isn’t whether other companies will follow. It’s how fast.