Immigration and Customs Enforcement officers have used a facial recognition app called Mobile Fortify more than 100,000 times since June 2025 - scanning people’s faces on the street, at workplaces, during traffic stops, and at protests. Internal records obtained by WIRED show that DHS knew the app couldn’t actually verify anyone’s identity. The department deployed it anyway, without completing the legally required privacy reviews.
This isn’t hypothetical surveillance. It’s happening now, on American streets, to American citizens, using AI tools the government’s own scientists say don’t work reliably outside controlled conditions.
The App That Can’t Do What They Claim
Mobile Fortify, built by NEC, lets ICE and CBP agents take a photo of someone with a government-issued phone and run it against a database of 1.2 billion face photos. DHS publicly framed the tool as capable of identifying people and verifying their immigration status. But internal documents tell a different story: the app doesn’t actually “verify” identities. It matches faces against a database and returns possible matches - a critical distinction when the result might determine whether someone gets detained.
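The gap between "matching" and "verifying" is easy to see in code. The sketch below is purely illustrative - the names, embeddings, and 0.6 threshold are invented, and Mobile Fortify's internals are not public - but it shows why a 1:N face search returns ranked candidates rather than a confirmed identity: more than one enrolled face can clear the similarity threshold for the same probe photo.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two face embeddings (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.6):
    # 1:N identification: rank every enrolled face by similarity to the
    # probe and return all candidates above the threshold. These are
    # possible matches - not a verified identity.
    scores = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: -s[1])

# Toy 3-dimensional embeddings; real systems use ~512-dim vectors
# produced by a neural network. All names here are hypothetical.
gallery = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.8, 0.3, 0.1],
}
probe = [0.85, 0.2, 0.15]  # embedding of a street photo

print(identify(probe, gallery))  # both enrollees clear the threshold
```

In this toy example both enrolled people score above 0.99 against the same probe - a high similarity score is not the same thing as a verified identity, which is exactly the distinction the internal documents draw.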
Testing by federal scientists at the National Institute of Standards and Technology has consistently shown that face-recognition accuracy drops sharply when images are taken outside controlled settings - including for top-performing NEC models. Street-level photos taken by agents in varying lighting, at odd angles, with uncooperative subjects are exactly the conditions where these systems fail most.
Neither ICE nor CBP completed the legally required Privacy Impact Assessments before putting Mobile Fortify in agents’ hands. These assessments aren’t optional paperwork - they’re legal requirements designed to evaluate how surveillance tools affect civil liberties before deployment, not after 100,000 uses.
How the Guardrails Were Removed
Records reviewed by WIRED reveal that DHS’s hasty approval of Mobile Fortify last May was made possible by dismantling the department’s centralized privacy review process and quietly removing department-wide limits on facial recognition use. These changes were overseen by a former Heritage Foundation lawyer and Project 2025 contributor who now serves in a senior DHS privacy role - the very person whose job is supposed to protect civil liberties.
This is the bureaucratic version of removing the smoke detectors before lighting a fire. The oversight mechanisms that existed specifically to prevent this kind of unchecked surveillance deployment were gutted by someone with an ideological interest in expanding government surveillance powers.
The $9 Million Clearview AI Deal
Mobile Fortify is just one piece. ICE awarded Clearview AI a contract worth up to $9.225 million in September 2025, with $3.75 million paid upfront. Clearview’s system searches a database of billions of images scraped from social media platforms and public websites - photos that people posted of themselves, their families, and their friends, never intending them to become part of a government surveillance apparatus.
The official justification? Identifying victims and offenders in child sexual exploitation cases and assaults against law enforcement. The actual use, according to 404 Media, leans heavily toward identifying people at protests and immigration enforcement operations. In multiple cases where ICE claimed people had “assaulted” officers, charges were later dropped or reduced, or investigations revealed that officers themselves had used excessive force.
Palantir Ties It All Together
These individual tools feed into something larger. ICE awarded Palantir a $30 million contract to build ImmigrationOS, an AI-powered platform that fuses immigration records, travel data, Social Security files, commercial data broker feeds, and social media activity into continuously updated profiles. It’s the connective tissue that turns individual surveillance tools into a comprehensive tracking system.
Combined with the geolocation tracking tool Webloc - which lets agents draw a “geofence” around any area and track every phone within it - ICE now has the ability to identify who someone is (facial recognition), where they’ve been (Palantir), where they are right now (geolocation), and what they’ve said online (social media monitoring). All of it operating largely without the privacy reviews, judicial oversight, or operational limits that are supposed to prevent exactly this kind of mass surveillance.
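Conceptually, a geofence query is a blunt instrument, and a minimal sketch makes the point - the device IDs and coordinates below are invented, and Webloc's actual implementation is not public. The query returns every phone whose location ping falls inside the drawn area, regardless of whether its owner is suspected of anything:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def phones_in_geofence(pings, center, radius_km):
    # Return the ID of every device whose ping lands inside a circular
    # geofence - every phone in the area, whoever it belongs to.
    lat0, lon0 = center
    return [dev for dev, lat, lon in pings
            if haversine_km(lat0, lon0, lat, lon) <= radius_km]

pings = [
    ("device_1", 44.9778, -93.2650),  # downtown Minneapolis
    ("device_2", 44.9780, -93.2655),  # a block away
    ("device_3", 41.8781, -87.6298),  # Chicago - far outside the fence
]
print(phones_in_geofence(pings, (44.9778, -93.2650), radius_km=1.0))
```

The query has no notion of probable cause: membership in the result set is determined by geometry alone.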
It’s Not Just About Immigration
A DHS memo obtained by the Washington Post instructed agents dispatched to Minneapolis to “capture all images, license plates, identifications and general information” on “agitators, protesters, etc.” This isn’t immigration enforcement. This is political surveillance using tools built for border security.
Internal footage obtained by 404 Media showed ICE officers using facial recognition apps to check the citizenship status of teenagers who weren’t carrying IDs. U.S. citizens have reported having their faces scanned in real time by ICE or CBP agents during routine encounters.
The American Immigration Council calls it what it is: mission creep. AI tools justified by immigration enforcement are being turned inward, used to track, identify, and monitor Americans exercising their constitutional rights.
Sen. Ed Markey has introduced legislation to ban DHS and ICE from using facial recognition and biometric surveillance inside the United States. Whether it passes is another matter.
What This Means
We’ve crossed into territory where a federal agency can point a phone at your face on the street and, within seconds, pull up your name, immigration status, social media accounts, travel history, and known associates - using tools it deployed without legal review, powered by databases it built without your consent, and operated by agents who’ve been told the technology works better than it actually does.
The face-scanning app doesn’t reliably verify identities. The Clearview AI contract is justified by claims that don’t hold up under scrutiny. The privacy guardrails were deliberately dismantled by someone whose previous job was writing policy recommendations for expanding government power. And the data flows into a Palantir system designed to make all of this information instantly actionable.
This is what AI surveillance looks like when oversight is treated as an obstacle rather than a requirement. Not some dystopian future scenario - a documented present reality, deployed at scale, funded by taxpayer dollars, and pointed at the people it’s supposed to serve.
What You Can Do
- Know your rights: You are not required to consent to facial scans by ICE or CBP during encounters within the U.S. interior. The ACLU maintains updated guidance on rights during immigration encounters.
- Contact your representatives: Sen. Markey’s proposed legislation needs co-sponsors. Ask your senators and representatives where they stand on domestic biometric surveillance.
- Limit your facial data exposure: Review privacy settings on social media accounts. Clearview AI’s database is built from publicly available photos.
- Support oversight organizations: Groups like the American Immigration Council, the ACLU, and the Electronic Frontier Foundation are actively tracking and challenging these programs.
- Use privacy tools: Consider browser extensions or utilities that strip metadata from images before you upload them, and adjust your social media privacy settings so your photos can’t be publicly scraped.
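To illustrate what metadata stripping actually does, the sketch below walks a JPEG’s segment list and drops the APP1 segment, where EXIF data - including GPS coordinates - is stored. It is a simplified demonstration (it ignores padding bytes and some rare marker types), not a replacement for maintained tools such as exiftool.

```python
def strip_jpeg_metadata(data: bytes) -> bytes:
    # Copy the JPEG segment by segment, skipping APP1 (marker 0xFFE1),
    # the segment that carries EXIF metadata such as GPS coordinates.
    assert data[:2] == b"\xff\xd8", "not a JPEG stream"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data) - 1:
        marker = data[i + 1]
        if marker == 0xD9:                 # EOI: end of image
            out += data[i:i + 2]
            break
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i:i + 2 + seg_len]
        if marker != 0xE1:                 # keep everything except APP1/EXIF
            out += segment
        i += 2 + seg_len
        if marker == 0xDA:                 # SOS: the rest is image data
            out += data[i:]
            break
    return bytes(out)

# Build a fake JPEG: SOI, an APP1/EXIF segment, an APP0/JFIF segment, EOI.
fake = (b"\xff\xd8"
        + b"\xff\xe1\x00\x06Exif"          # APP1, length 6 (incl. length bytes)
        + b"\xff\xe0\x00\x07JFIF\x00"      # APP0, length 7
        + b"\xff\xd9")
print(b"Exif" in strip_jpeg_metadata(fake))  # False: EXIF segment removed
```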