AI News: Meta Turns Employees Into Training Data, States Fight Back With Chatbot Laws

Meta installs keystroke tracking on employee PCs, Washington and Oregon pass chatbot safety laws, Nebraska suspends first attorney over AI hallucinations.

Top Stories

Meta Installs Keystroke Tracking on Employee Computers to Train AI

Meta is rolling out new surveillance software on U.S. employees’ work computers that captures mouse movements, clicks, keystrokes, and periodic screen snapshots. The tool, called Model Capability Initiative (MCI), is designed to generate training data for AI agents that can perform work tasks autonomously — things like navigating dropdown menus and using keyboard shortcuts that current models handle poorly.

Employees cannot opt out. Internal memos obtained by Reuters say the data won’t be used for performance evaluations, only AI training. That assurance landed with the same energy you’d expect from a company that just fired 8,000 people while doubling its AI infrastructure spending.

Employee reactions ranged from questions about opting out (you can’t) to calling the program “unethical.” European employees are reportedly excluded, likely because EU and national labor laws place hard limits on electronic workplace monitoring. In the U.S., no federal law restricts this kind of employee surveillance.

The timing is worth noting. Meta announced the keystroke tracking program the same week it told 8,000 workers they’d lose their jobs to fund AI. The company is simultaneously laying off the humans and harvesting their behavioral data to train the systems meant to replace them.

Sources: Fortune · TechCrunch · The Register · Gizmodo

Washington and Oregon Enact AI Companion Chatbot Safety Laws

Washington (HB 2225) and Oregon (SB 1546) have enacted new laws regulating AI companion chatbots, joining California in creating a West Coast regulatory framework. Both laws take effect January 1, 2027.

The laws require chatbot operators to clearly disclose that users are talking to an AI, with reminders every three hours for adults and every hour for minors. Operators must implement crisis intervention protocols — specifically, systems to detect expressions of suicidal ideation or self-harm and provide referrals to crisis resources. Both states also ban specific manipulative engagement techniques: encouraging minors to withhold information from parents, simulating emotional distress when users try to end conversations, and soliciting gifts or purchases framed as necessary to maintain the “relationship.”

Oregon and Washington go further than California’s law in several areas, including content restrictions and engagement design rules. Idaho’s own version has passed the state Senate. With 78 chatbot safety bills now active across 27 states, companion chatbots are quickly becoming the most heavily legislated area of state AI regulation.

Sources: Morgan Lewis · Mayer Brown · Future of Privacy Forum

Nebraska Attorney Becomes First Lawyer Suspended Over AI Hallucinations

The Nebraska Supreme Court has suspended Omaha attorney Greg Lake after his appellate brief in a divorce case contained 57 defective citations out of 63 total, including 20 completely fabricated case references and four citations to cases that don’t exist in any jurisdiction.

Justices interrupted oral arguments just 37 seconds in. Lake initially claimed he’d uploaded the wrong version of the brief while traveling with a broken computer on his wedding anniversary, before admitting to using AI to generate the brief.

This is believed to be the first bar discipline action to suspend an attorney’s practice entirely over AI-related filing errors in the U.S. — a significant escalation from the financial penalties that courts have imposed so far. U.S. courts have levied at least $145,000 in sanctions against attorneys for AI citation errors in Q1 2026 alone. Researcher Damien Charlotin now tracks more than 1,200 AI hallucination cases in legal proceedings globally, roughly 800 from U.S. courts.

The irony is notable: a 2026 survey found that 61% of federal judges use AI themselves. The gap between judicial AI use and the punishment attorneys face for the same tools highlights an accountability problem that neither courts nor bar associations have resolved.

Sources: WOWT · The Geek in Review · The Ethics Reporter

Quick Hits

  • Virginia bans sale of precise geolocation data. Governor Abigail Spanberger signed S.B. 388, barring the sale of geolocation data that can locate a person within a 1,750-foot radius, a precision threshold meant to keep buyers from pinpointing where people live, work, or worship. Virginia joins Maryland and Oregon, with California, Connecticut, Massachusetts, and Vermont considering similar bans. Takes effect July 1. The Record · Hunton

  • State AI legislation surges past 600 bills. State lawmakers have introduced more than 600 AI bills this year, and the number of enacted AI laws has jumped from 6 to 25 since mid-March. Utah alone signed 9 new AI bills, 8 of them in the past two weeks. The White House’s National Policy Framework, meanwhile, is pushing for federal preemption of state laws that “impose undue burdens,” setting up a collision between state-level action and federal override. Plural Policy · Cooley

  • New COPPA rule amendments now enforceable. Updated children’s online privacy protections took effect April 22, adding new requirements for notice, consent, data minimization, and transparency as children encounter AI in chatbots, connected toys, tutoring tools, and companion apps. Blank Rome

  • DOJ AI Litigation Task Force targets state AI laws. The Department of Justice established the task force in January with the “sole responsibility” of challenging state AI laws deemed to unconstitutionally regulate interstate commerce or conflict with federal regulations. The first lawsuit targeted Colorado’s algorithmic discrimination law on behalf of xAI. Eversheds Sutherland

Worth Watching

The surveillance-to-layoff pipeline at Meta is now a closed loop. First, Meta cuts 8,000 workers and freezes 6,000 open roles. Then it installs software on remaining employees’ computers to capture their every click and keystroke, explicitly to train AI that can do work tasks autonomously. The company is simultaneously reducing its human workforce and mining the remaining workers for the behavioral data it needs to reduce them further. No opt-out, no European deployment (EU labor law wouldn’t allow it), and a promise that the data won’t be used for performance reviews — a promise that’s only meaningful until it isn’t. This is the first major tech company to openly describe this specific cycle: cut humans, record the survivors, train the replacement. Others are doing pieces of it. Meta is just saying the quiet part loud.