Workers in Kenya Are Watching Your Smart Glasses Footage - Including Sex and Bathroom Visits

A Swedish investigation reveals Meta's AI glasses send intimate user footage to human reviewers in Nairobi, triggering lawsuits and regulatory investigations across two continents.

*Image: surveillance camera mounted on a wall, symbolizing privacy concerns*

When you bought Meta’s Ray-Ban smart glasses, the marketing promised they were “designed for privacy, controlled by you.” What the marketing didn’t mention: workers in Nairobi, Kenya would be watching footage of you in the bathroom, undressing, and having sex.

A joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten has exposed what may be the largest privacy scandal in wearable technology history. The footage captured by over 7 million pairs of Meta smart glasses doesn’t just flow to AI systems. It flows to human reviewers, who see whatever customers capture while wearing them.

What the Investigation Found

The Swedish journalists spoke to more than 30 data annotators at Sama, a San Francisco-based company with operations in Nairobi. These workers are responsible for training Meta’s AI systems by labeling, describing, and categorizing objects in images and videos from the glasses.

What they described seeing was deeply disturbing:

  • Bathroom visits
  • People undressing
  • Sexual encounters
  • Pornography viewed while wearing the glasses
  • Bank cards visible in recordings
  • Other intimate moments users never imagined would be seen by strangers

The anonymization system Meta claims protects user privacy doesn’t always work. Workers reported that faces sometimes remain visible in the material they review. When you combine identifiable faces with intimate footage, you have a privacy catastrophe.

How Did This Happen?

Meta’s Ray-Ban glasses feature “Live AI” - a conversational assistant that processes audio and video to respond to user queries. When you say “Hey Meta” and ask the glasses to identify something or help with a task, the system processes what the glasses’ cameras see and microphones hear.

But that processing isn’t entirely automated. The footage gets fed into what Meta calls a “data pipeline” that includes human review. And users can’t opt out.
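Meta hasn’t published the pipeline’s internals, but what the annotators describe is a standard training-data loop: captured media is queued, run through an anonymization step, then routed to human reviewers whose labels feed back into model training. A purely illustrative sketch of that flow (all names hypothetical - this is not Meta’s actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """A captured media clip; faces_blurred models the anonymization step."""
    clip_id: str
    faces_blurred: bool = False
    labels: list = field(default_factory=list)

def anonymize(clip: Clip) -> Clip:
    # Hypothetical anonymization pass. Per the Swedish investigation,
    # this step is unreliable in practice: faces sometimes remain
    # visible in the material reviewers see.
    clip.faces_blurred = True
    return clip

def human_review(clip: Clip) -> Clip:
    # Annotators label, describe, and categorize what the footage shows.
    # A human sees the full frame here - whatever the user recorded.
    clip.labels.append("object:example")
    return clip

def pipeline(clips: list[Clip]) -> list[Clip]:
    """Queue -> anonymize -> human review. When Live AI is active,
    footage enters this path and users cannot opt out."""
    return [human_review(anonymize(c)) for c in clips]
```

The point of the sketch is the architecture, not the details: there is no branch that skips `human_review`, which is exactly the property users were never clearly told about.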

Meta’s response to the Swedish journalists was telling. After two months of silence, a spokesperson simply referred them to the company’s terms of service and privacy policy, stating that “when Live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy.”

Those terms apparently include the possibility that workers in Kenya will see you naked.

The Scale of Exposure

Meta and EssilorLuxottica sold over 7 million pairs of smart glasses in 2025 alone, more than tripling sales from the previous year. The companies have announced plans to increase production capacity to 10 million units annually by the end of 2026, with discussions about reaching 20 million.

Each of those glasses is a potential surveillance device. Not surveillance by governments or hackers, but by the company that sold them to you as a privacy-respecting product.

On March 4, 2026, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California filed a class action lawsuit in the U.S. District Court for the Northern District of California. Represented by the Clarkson Law Firm, they allege Meta violated privacy laws and engaged in false advertising.

The core claim: Meta advertised these glasses with promises like “designed for privacy, controlled by you” and “built for your privacy,” while covertly exposing users’ most intimate moments and personal data to human contractors without adequate disclosure.

The lawsuit seeks compensation on behalf of what could be millions of affected users.

Regulatory Investigations

The scandal has triggered regulatory responses on both sides of the Atlantic:

Sweden: The Swedish Authority for Privacy Protection (IMY) summoned Meta to a meeting regarding data processing for the AI glasses that takes place outside EU jurisdiction. Swedish privacy expert Petra Wierup made clear that GDPR protections must extend to subcontractors in third countries and cannot be weakened.

United Kingdom: The Information Commissioner’s Office (ICO) confirmed it is contacting Meta following the investigation. The UK regulator has signaled concerns about how the company handles sensitive user data.

European Union: Privacy experts have expressed concern that the glasses could violate GDPR provisions on consent and purpose limitation. When you agree to use AI features, you’re not meaningfully consenting to have strangers watch your most private moments.

The Hidden Human Cost

There’s another layer to this scandal: the workers themselves.

Sama has a troubled history. In 2023 and 2024, Kenyan courts heard a landmark case brought by former content moderators against Meta and Sama over working conditions. In 2019, one worker was fired for organizing a strike over poor working conditions and pay.

A psychological assessment of 144 content moderators who worked for Meta through Sama found severe post-traumatic stress disorder - the same condition that affects combat veterans and first responders. Workers described exposure to violent and disturbing content as “torture.”

Now these same workers are being exposed to intimate footage of Meta’s customers, raising questions about both user privacy and worker welfare.

What This Means for AI Wearables

This scandal exposes a fundamental tension in AI wearable technology. To make AI assistants useful, they need to see and hear what you experience. But training those AI systems requires human review of the data they collect.

The result: always-on AI devices become always-on surveillance devices, not for governments or hackers, but for the companies that build them.

Meta isn’t unique here. Any AI wearable that processes audio or video for training purposes faces similar issues. The difference is that Meta sold 7 million of these devices while marketing them as privacy-respecting, then sent the footage to contractors in another country with minimal safeguards.

What You Should Do

If you own Meta Ray-Ban glasses:

Understand what you’re consenting to. When you use Live AI features, your footage may be reviewed by humans. The terms of service that nobody reads apparently allow this.

Be conscious of when you’re wearing them. The glasses record when you activate AI features. If you’re doing anything private, take them off.

Consider the lawsuit. If you’ve used these glasses in private settings, you may be affected by the class action.

Watch for regulatory developments. GDPR violations could force Meta to change how it handles European users’ data, which might improve practices globally.

The Bottom Line

Meta built a surveillance device, sold 7 million of them as privacy-respecting glasses, and routed the footage to workers in Kenya who saw customers naked, having sex, and using the bathroom. The company’s response was to point to its terms of service.

This isn’t a bug. It’s how the AI sausage gets made. And it’s happening with devices marketed as “designed for privacy.”