Apple’s March 4 product event, just three days away, is expected to preview at least five new products - and among them are the company’s first AI wearables with cameras. Smart glasses, camera-equipped AirPods, and an AI pendant are all reportedly in development, each designed to feed Apple’s Visual Intelligence system.
The timing is precarious. Meta’s Ray-Ban smart glasses have spent the past year demonstrating exactly what can go wrong when you put cameras on people’s faces and connect them to AI.
What Apple Is Building
According to Bloomberg, Apple is simultaneously developing three camera-equipped wearables:
Smart glasses targeting a 2027 launch, with production potentially starting in late 2026. Like Meta’s Ray-Bans, they’ll include cameras, microphones, and AI processing - but Apple is betting on tighter integration with Siri and Visual Intelligence to differentiate.
Camera-equipped AirPods that could arrive as early as late 2026. Bloomberg suggests these will feature lower-resolution cameras meant to help AI understand context rather than capture photos or video.
An AI pendant or pin worn around the neck, serving as another camera input for Siri. Think Humane AI Pin, but with Apple’s ecosystem backing it.
All three will connect to iPhone and interface with what Apple describes as a smarter version of Siri, one that can see and understand the world around you.
The Meta Cautionary Tale
Apple is entering a market that Meta has been poisoning for the past year. The Ray-Ban Meta glasses have become synonymous with consent violations and harassment.
Slate reports that Meta’s AI glasses are “wreaking havoc across school campuses,” with students and teachers reporting rising concerns about privacy and harassment. Starting this month, the SAT bans smart glasses to prevent cheating via their integrated cameras and AI assistants.
The problems extend beyond schools. 404 Media documented how content creators use Meta glasses to film strangers for social media without their knowledge or consent. One woman discovered that a conversation she barely remembered - one in which she refused to give a stranger her number - had been captured, posted online, and mocked by thousands.
Two Harvard students demonstrated that Meta glasses footage could be connected to external facial recognition systems to identify strangers in public. And in London, Meta faces a lawsuit over public consent as the glasses roll out.
Perhaps most damning: the LED indicator meant to signal recording can be disabled. According to 404 Media, hobbyists openly offer modifications that disable the white LED while keeping the camera fully functional.
People Are Fighting Back
The backlash has spawned defensive technology. German researcher Yves Jeanrenaud created Nearby Glasses, an Android app that scans Bluetooth signals to detect when smart glasses are nearby.
The app watches for manufacturer IDs from Meta, Luxottica (which makes the Ray-Ban frames), and Snap. When it spots one, it sends a push notification warning you that someone nearby may be recording.
It’s an imperfect solution - false positives from VR headsets and fitness trackers are common - but the app’s existence signals how desperate people are for any defense against ambient surveillance.
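To get a feel for the technique, here’s a rough Swift sketch of the same idea using Apple’s CoreBluetooth framework. Nearby Glasses itself is an Android app, and the company identifiers below are placeholders rather than verified Bluetooth SIG assignments, so treat this as an illustration of the approach, not the app’s actual code.

```swift
import CoreBluetooth

// Placeholder company identifiers - the real values come from the
// Bluetooth SIG registry; these are NOT the actual Meta/Luxottica/Snap IDs.
let watchedCompanyIDs: Set<UInt16> = [0x01AB, 0x02CD, 0x03EF]

final class GlassesScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Listen for any advertising peripheral nearby.
        central.scanForPeripherals(withServices: nil, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // BLE manufacturer data starts with a 2-byte little-endian company identifier.
        guard let data = advertisementData[CBAdvertisementDataManufacturerDataKey] as? Data,
              data.count >= 2 else { return }
        let bytes = [UInt8](data.prefix(2))
        let companyID = UInt16(bytes[0]) | (UInt16(bytes[1]) << 8)
        if watchedCompanyIDs.contains(companyID) {
            // A real app would fire a local notification here instead of logging.
            print("Possible smart glasses nearby (company 0x\(String(companyID, radix: 16)), RSSI \(RSSI))")
        }
    }
}
```

The false-positive problem follows directly from this design: any device advertising a watched manufacturer ID looks, to the scanner, like a pair of glasses.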
Privacy advocates are already positioning this as the template Apple should follow. AppleInsider argues that detection apps represent “a great model for Apple Glass developers to follow,” and that Apple’s wearables should include visible, unmistakable recording indicators that can’t be disabled or obscured.
Apple’s Privacy Pitch
Apple will almost certainly lean hard on privacy as a differentiator. The company’s track record with Apple Intelligence emphasizes on-device processing: the Neural Engine handles most AI tasks locally, and when cloud processing is required, data routes through Private Cloud Compute - Apple’s infrastructure where data is processed in encrypted enclaves and never stored.
For Visual Intelligence specifically, Apple claims the system is “aware of your personal information without collecting your personal information.” Object identification and contextual understanding happen on-device.
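Apple hasn’t said how Visual Intelligence is built, but the on-device pattern it describes resembles what developers can already do with the Vision framework: classify an image locally so nothing leaves the phone. A minimal sketch - not Apple’s actual pipeline - assuming you already have a CGImage in hand:

```swift
import CoreGraphics
import Vision

// Illustrative on-device image classification with Apple's Vision framework.
// This is not Visual Intelligence itself - just the same local-processing pattern.
func classifyLocally(_ image: CGImage) throws -> [(label: String, confidence: Float)] {
    let request = VNClassifyImageRequest()                 // built-in classifier, runs on device
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    let observations = (request.results as? [VNClassificationObservation]) ?? []
    return observations
        .filter { $0.confidence > 0.3 }                    // drop low-confidence labels
        .map { ($0.identifier, $0.confidence) }
}
```

None of this requires a network call, which is the substance of Apple’s privacy pitch for the wearer.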
This approach could protect the data you capture about yourself. But it does nothing for the people you’re recording.
The Fundamental Problem Remains
Apple can process everything on-device. It can encrypt data in transit. It can promise never to store images on its servers. None of that solves the core issue: putting cameras on people’s faces normalizes constant surveillance of everyone around them.
When Apple’s smart glasses ship, users won’t just be scanning QR codes and translating menus. They’ll be filming coworkers, recording strangers on the subway, and capturing moments that the subjects never consented to.
The questions Apple needs to answer aren’t about encryption or on-device processing. They’re about consent:
- Will recording indicators be impossible to disable?
- Will the devices make distinctive sounds when capturing images or video?
- Will Apple build detection APIs so others can know when they’re being recorded?
- Will schools, restaurants, and venues be able to enforce no-recording zones?
Meta has answered all of these questions with “no.” Apple’s response - three days from now - will determine whether AI wearables can exist without becoming surveillance tools.
The Bottom Line
Apple entering the AI wearables market was inevitable. Whether it enters as a company that solves the consent problem or one that simply ignores it better than Meta - that’s the choice Tim Cook faces on March 4.