Angela Lipps is a 50-year-old grandmother from Elizabethton, Tennessee. On July 14, 2025, U.S. Marshals showed up at her home while she was babysitting. They arrested her on four counts of unauthorized use of personal identifying information and four counts of theft — crimes committed in Fargo, North Dakota, a state she had never visited, more than 1,200 miles away.
She spent 163 days in jail. The technology that put her there was Clearview AI.
How It Happened
Between April and May 2025, someone walked into banks in Fargo using a fake U.S. Army ID to make fraudulent withdrawals totaling tens of thousands of dollars. West Fargo Police (the department that actually operated the facial recognition system; Fargo PD didn't have the technology) ran images from the fake ID through Clearview AI's database.
The system flagged Lipps as a “potential suspect.” Detectives then reviewed her social media profiles and Tennessee driver’s license photo, decided she matched based on “facial features, body type, and hair characteristics,” and filed charges.
No one checked whether Lipps had actually been in North Dakota. No one pulled bank records that would have shown her conducting transactions in Tennessee during the exact dates of the crimes. No one compared her against the actual bank surveillance footage — the images from the fake ID were the only basis for identification.
108 Days Before Anyone Noticed
The procedural failures compound the technological ones. After her arrest in Tennessee, Lipps was held for 108 days as a “fugitive from justice” before being transferred to North Dakota. During that time, a public defender was assigned in Tennessee but apparently took no meaningful action on her case.
When she finally arrived at Cass County Jail in North Dakota on October 30, Fargo Police claim they weren’t notified she was in custody until December 5 — despite email evidence suggesting otherwise.
It took a North Dakota attorney minutes to pull the bank records proving Lipps was in Tennessee when the crimes occurred. She was released on Christmas Eve 2025, charges dismissed.
By then she had lost her rental home, her car, her possessions, and her dog.
The Pattern
Lipps isn’t an isolated case. In Reno, Nevada, Jason Killinger spent 11 hours in jail — four of them handcuffed — after a casino’s facial recognition system flagged him as a trespasser named Michael Ellis. Killinger had a valid Nevada driver’s license, a UPS pay stub, and vehicle registration, all proving who he was. On January 22, 2026, the arresting officer admitted under oath that the arrest “never should have happened.” The lawsuit alleges the officer “knowingly inserted false statements” into police reports claiming Killinger’s IDs were fraudulent.
In New York, Trevis Williams was arrested and jailed for two days despite not matching the physical description given by the victim of a sex crime. Cell phone location data showed Williams was miles away at the time of the offense, driving from Connecticut to Brooklyn while the crime occurred in Manhattan.
In the UK, a man was arrested for burglary based on facial recognition despite being 100 miles from the crime scene.
Every one of these cases follows the same pattern: the algorithm produces a match, officers treat it as identification rather than a lead, and basic investigative steps that would have cleared the suspect are skipped entirely.
Why It Keeps Happening
The problem isn’t just that facial recognition makes errors — any technology does. The problem is the institutional response to those errors.
In most jurisdictions, police departments are not required to disclose to defendants that facial recognition was used against them. There is no federal standard for how a facial recognition "match" should be treated in an investigation. Officers receive algorithmic output that carries the false authority of a machine-generated conclusion, and they treat it as probable cause.
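The base rates show why a one-to-many search result cannot carry that weight on its own. The sketch below is a minimal back-of-the-envelope calculation, not an audit of Clearview AI: the per-comparison false-match rate is an assumed illustrative figure, and the gallery size reflects Clearview's own public claim of a database in the tens of billions of images.

```python
# A minimal sketch of the base-rate problem in one-to-many face search.
# Both numbers are illustrative assumptions, not audited figures for
# Clearview AI or any other deployed system.

false_match_rate = 1e-5        # assumed chance a random non-match clears the threshold
gallery_size = 30_000_000_000  # assumed gallery size (Clearview has claimed 30B+ images)

expected_false_matches = false_match_rate * gallery_size
print(f"Expected false candidates per search: {expected_false_matches:,.0f}")
# Expected false candidates per search: 300,000
#
# Even at 99.999% per-comparison accuracy, a single search against a
# 30-billion-image gallery is expected to surface hundreds of thousands
# of lookalikes. The top-ranked "match" is a lead to corroborate with
# alibis, records, and surveillance footage, never an identification.
```

However the real numbers shake out, the arithmetic only gets worse as the database grows, which is exactly why every investigative step skipped in the Lipps case mattered.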
Fargo Police Chief Dave Zibolski issued a public apology on March 27, 2026 — eight months after Lipps was arrested, three months after charges were dropped. The apology didn’t address whether the department has changed its policies. Lipps’ attorneys, Eric Rice and Dane DeKrey, are in the pre-litigation investigation phase.
Meanwhile, Clearview AI remains in active use by law enforcement agencies across the country. The company has faced lawsuits and regulatory action in multiple countries: ordered to stop collecting residents' data in Canada and Australia, and fined by privacy regulators in France, Italy, and Greece (a UK fine was later overturned on jurisdictional grounds). Yet it continues to operate in the United States with minimal federal oversight.
What’s Being Done (And Why It’s Not Enough)
A handful of cities have banned government use of facial recognition — San Francisco, Boston, Minneapolis, and others. But these are local ordinances, easily circumvented by state or federal agencies. No federal legislation regulating law enforcement use of facial recognition has passed.
The AI Safety Report 2026, authored by over 100 experts and led by Turing Award winner Yoshua Bengio, flagged the reliability problems with AI systems in deployment. But the report’s focus is on frontier models and autonomous agents — not the grinding, everyday harm caused by deployed AI systems that are already making consequential decisions about people’s freedom.
Angela Lipps lost half a year of her life because a computer said she looked like someone else, and no human in the chain — not the detective, not the public defender, not the U.S. Marshals — did the basic work of checking whether the computer was right. The technology failed, but the real failure was human: a system designed to treat algorithmic output as truth.
The next time someone tells you AI safety is about hypothetical superintelligence, remember that a grandmother in Tennessee is still looking for a new place to live.