New Orleans employed AI surveillance without public knowledge or comprehensive oversight

New Orleans used unapproved AI for mass surveillance, sparking privacy concerns.

For two years, New Orleans relied on more than 200 AI-powered surveillance cameras operated by Project NOLA, which scanned faces and alerted law enforcement in real time without public knowledge. The network matched faces against a database of 30,000 people, allowing police to respond to alerts often within minutes, though concerns have been raised about potential Fourth Amendment violations. The technology, mostly made by Dahua despite U.S. bans, led to 34 arrests but bypassed legal requirements for police use, raising questions about oversight and civil liberties. In April, amid growing legal concerns, the police department paused automated alerts to re-evaluate compliance.

For the past two years, the city of New Orleans has been at the heart of a heated debate over AI surveillance. The city relied on more than 200 AI-powered surveillance cameras to scan and identify faces in real time. This operation, largely unknown to the public, was run by Project NOLA, a private nonprofit organization. Unlike conventional facial recognition, which is typically limited to emergencies and evidence-based investigations, this program enabled continuous surveillance and immediate alerts to law enforcement whenever a match was found. The cameras were installed in high-crime areas, notably the French Quarter, and raised significant concerns over privacy rights and due process.

The arrangement hinged on a partnership with Project NOLA, a private nonprofit managed by Bryan Lagarde, a former police officer. The cameras, often operated by private businesses, fed video footage directly to a control room at the University of New Orleans, where algorithms scanned the footage for potential matches against a database of 30,000 faces. Despite the network's decentralized operations, Project NOLA claimed ownership of the footage, which it stored for 30 days before deletion. That retention window also allowed retrospective tracking of individuals' movements, raising significant Fourth Amendment concerns.

The operational framework disregarded a 2022 city ordinance that restricted facial recognition use to investigations of violent crimes, with stringent protocols requiring officers to log and review each use. The law was designed to ensure transparency and accountability through a state-run fusion center where expert examiners validated image matches. Project NOLA's direct alerts bypassed these controls, leaving the police department with little documentation and deepening apprehension among civil liberties advocates, who dubbed the program a 'nightmare scenario' for its lack of transparency.

A notable aspect of the case is the involvement of the Chinese electronics company Dahua, which manufactured much of the camera hardware despite U.S. government bans. The system contributed to 34 arrests documented since early 2023, including for nonviolent offenses such as theft. Police reports often omitted any mention of real-time tracking or facial recognition, further fueling critics' arguments about the system's lack of oversight and public knowledge.

The resulting scrutiny prompted Police Superintendent Anne Kirkpatrick to halt automated alerts in April while authorities assessed the program's legality under the city's ordinance. Project NOLA's system still generates alerts, but the information is now relayed to law enforcement through more traditional means such as phone calls or emails rather than automatic notifications. These measures reflect the ongoing tension, as technology advances further into policing, between public safety and civil liberties.

Sources: TechSpot, The Washington Post, KSLA News