
AI News: Grandmother Wrongfully Jailed Due to Facial Recognition Error
A Tennessee grandmother endured more than five months of incarceration after an Artificial Intelligence (AI) facial recognition tool incorrectly linked her to crimes committed in North Dakota, a state she says she has never visited. The case highlights growing concerns about the accuracy and potential misuse of AI in law enforcement.
The Case of Angela Lipps
Angela Lipps, 50, was arrested in Tennessee on July 14th on a warrant that Fargo, North Dakota, had issued weeks earlier. The warrant stemmed from a series of bank fraud incidents in and around Fargo. Investigators used "our partner agency's facial recognition technology" alongside other investigative methods to identify a suspect, according to Fargo Police Department Chief Dave Zibolski.
However, Chief Zibolski acknowledged that the reliance on information from a neighboring agency’s AI system was “part of the issue” in Lipps’ case. The West Fargo Police Department confirmed they employ Clearview AI, a controversial startup with a vast database of images scraped from the internet, including social media. Clearview AI identified Lipps as a potential match to a suspect using a fake ID in a separate West Fargo fraud case, and this information was shared with Fargo police.
Errors and Apologies
Fargo police have admitted to "a few errors" in the handling of the case and have pledged to revise their procedures, but have stopped short of a direct apology to Ms. Lipps. The department has since prohibited the use of West Fargo's AI system, stating it was unaware of the system's implementation and would not have authorized its use.
The delay in Lipps' release also drew scrutiny: it took over three months for Tennessee law enforcement to notify North Dakota authorities of her extradition waiver. Lipps described her extradition flight, her first time on an airplane, as a terrifying experience.
Exoneration and Aftermath
Fortunately, Lipps’ lawyer obtained bank records proving she was in Tennessee during the time of the alleged crimes. On December 23rd, the charges were dismissed, and she was released on Christmas Eve. Despite her exoneration, Lipps suffered significant trauma, loss of liberty, and reputational damage.
Her legal team is exploring potential civil rights claims, arguing that a proper investigation would have revealed the inaccuracies sooner. They criticize the department’s reliance on AI as a “shortcut” for basic investigative work.
Broader Implications of AI in Policing
Lipps’ case is not isolated. It mirrors a recent incident in Baltimore County, where a high school student was handcuffed and searched after an AI security system misidentified a bag of Doritos as a firearm. These incidents raise serious questions about the rapid integration of AI into policing without sufficient oversight or evidence of efficacy.
Ian Adams, a criminology and criminal justice professor at the University of South Carolina, emphasizes that many AI-related errors in policing stem from human error, not just technological flaws. He warns against complacency and stresses the importance of human oversight in interpreting algorithmic results. As the Brookings Institution points out, the use of facial recognition technology raises significant legal and ethical concerns.
Moving Forward
The Fargo Police Department is implementing changes, including working with state and federal authorities and requiring monthly reviews of facial recognition identifications. The State’s Attorney’s Office is also planning training on facial recognition technology. However, the case of Angela Lipps serves as a stark reminder of the potential consequences of unchecked AI implementation in law enforcement and the critical need for careful consideration, rigorous testing, and robust oversight.