AI Facial Recognition Software Resulting In False Arrests

Technocrat-influenced police departments routinely hide the fact that they are using AI software to collar suspects, but too many innocent people are being wrongly charged. Part of the problem is that law enforcement management doesn’t understand AI in the first place, which fosters a culture of sloppy police work and an over-reliance on AI to do officers’ jobs for them. Pre-crime and facial recognition are the bane of law enforcement. ⁃ Patrick Wood, Editor.

“Orwell is here, and he’s living large, man!”

Police nationwide are misusing facial recognition software, relying on it to arrest suspects without additional evidence, according to a new investigation by the Washington Post.

Most departments aren’t required to disclose or document their use of the software. Among 23 departments with available records, 15, across 12 states, arrested suspects based solely on AI matches, often violating internal policies that require corroboration.

One report called an unverified AI match a “100% match,” while another claimed the technology “unquestionably” identified a suspect. At least eight people in the U.S. have been wrongfully arrested because of AI matches; two of those cases were previously unreported.

All cases were dismissed, but basic police work—such as checking alibis or comparing physical evidence—could have prevented these arrests. The true scale of AI-fueled false arrests remains unknown, as most departments lack disclosure requirements and rarely reveal AI use.

The Post identified 75 departments using facial recognition, with records from 40 showing arrests tied to AI matches. Of these, 23 provided sufficient detail, revealing that nearly two-thirds made arrests without corroborating evidence. Departments often refused to discuss their practices or claimed officers relied on visual judgment to confirm matches.

In Florence, Kentucky, police used uncorroborated AI matches in at least four cases, with mixed outcomes. Local prosecutor Louis Kelly defended officers’ judgment in identifying suspects, including those flagged by AI.

In total, the Washington Post reviewed facial recognition use by 75 police departments and obtained detailed records from 23. It found that 15 departments, including those in Austin, Detroit, and Miami, made arrests based solely on AI matches without independent evidence.

Some lacked records or transparency, while others relied on questionable practices like showing AI-identified photos to witnesses. Interviews clarified some cases, but reliance on uncorroborated AI remains widespread.

You can read the full investigation here.

Read full story here…

About the Editor

Patrick Wood
Patrick Wood is a leading and critical expert on Sustainable Development, Green Economy, Agenda 21, 2030 Agenda and historic Technocracy. He is the author of Technocracy Rising: The Trojan Horse of Global Transformation (2015) and co-author of Trilaterals Over Washington, Volumes I and II (1978-1980) with the late Antony C. Sutton.