By: Phoebe Varunok

[Image: two faces and tech print overlapping.]

The essence of George Orwell’s 1984 is felt throughout the streets of the southwestern Chinese city of Guiyang.  Guiyang authorities have blanketed the city with a vast network of high-tech cameras that can identify anyone who steps out in public.[1]  Almost instantly, these cameras can match the face of a Guiyang citizen to their car, address, phone number, and relatives.[2]  This technology is slowly seeping into modern policing throughout the United States.  Chicago and Detroit already operate vast camera monitoring systems used by the police.[3]  Face surveillance pilot programs have also begun in Orlando, Washington, D.C., and New York City.[4]  While this technology may arguably assist police, it will ultimately lead to a plethora of Fourth Amendment issues and deepen the disparate impact on minorities already subject to inequitable modern policing practices.

Advocacy for the use of facial surveillance and facial recognition technology (FRT) is fervent among law enforcement.[5]  The prevailing pro-surveillance argument is that FRT will drastically reduce crime by allowing police to stop crimes in progress or prevent them from occurring at all.[6]  One danger, among many, of allowing these technologies into policing practices is that they have yet to be subjected to any regulation or oversight.[7]  Additionally, most of the private companies creating the technologies used by police exhibit alarmingly high error rates, particularly when identifying minorities.[8]

The National Institute of Standards and Technology (NIST), an agency within the Department of Commerce, recently published a study testing 189 face-recognition algorithms and how accurately those algorithms identified different demographic groups.[9]  Supposedly conducted to vindicate FRT, the study instead magnified the racial bias present in the technology.  Among its striking findings, many of the algorithms were up to 100 times more likely to misidentify people who are Black, Asian, or Native American than Caucasian individuals.  In one-to-one matching, Native Americans suffered the highest rates of false positives.  Alarmingly, in one-to-many searches of a mugshot database kept by law enforcement, Black women were more likely than any other group tested to be falsely matched.
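
To make concrete what a “false positive rate” in one-to-one matching means, the following Python sketch shows how such a rate could be computed per demographic group at a fixed match threshold.  It is purely illustrative: the threshold, similarity scores, and group labels are invented for this post and do not come from NIST’s data or code.

```python
# Illustrative sketch only -- the threshold, scores, and group labels
# below are invented and do not reflect NIST's data or methodology.

THRESHOLD = 0.80  # hypothetical operating threshold for calling a "match"

def false_positive_rate(impostor_scores, threshold=THRESHOLD):
    """Fraction of different-person comparisons wrongly called a match.

    A one-to-one (verification) algorithm scores the similarity of two
    face images; a false positive occurs when images of two *different*
    people score at or above the match threshold.
    """
    false_matches = sum(1 for s in impostor_scores if s >= threshold)
    return false_matches / len(impostor_scores)

# Hypothetical similarity scores for different-person pairs, grouped by
# demographic, showing how a disparity between groups would be measured:
impostor_scores_by_group = {
    "Group A": [0.45, 0.52, 0.61, 0.79, 0.81],  # 1 of 5 false matches
    "Group B": [0.40, 0.48, 0.55, 0.62, 0.70],  # 0 of 5 false matches
}

for group, scores in impostor_scores_by_group.items():
    print(group, false_positive_rate(scores))
```

NIST’s disparity findings boil down to comparisons of exactly this kind of per-group rate: at the same threshold, some groups generate false matches far more often than others.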

The results of this study are troubling, but the implications of the U.S. using this technology in airports, at the border, and in regular policing are far more disconcerting.  Law enforcement already has great discretion in deciding whom to stop and search under the ever-expanding “reasonable suspicion” and “probable cause” standards.[10]  Additionally, the low threshold of reasonable suspicion is all that border enforcement needs to conduct non-routine invasive searches.[11]

Coupling faulty, racially biased facial recognition software with the probable cause and reasonable suspicion standards will inevitably lead to countless false arrests and Fourth Amendment violations.  The following scenario could plausibly occur if FRT is used in everyday policing: a police officer scanning CCTV footage while investigating a robbery at a convenience store pauses the video, zooms in on a suspect’s face, and sees that the suspect is of Asian descent.  Taking the suspect’s image from the video, the officer runs it through the precinct’s facial recognition program to see whether this person’s mugshot is in the database.  The FRT program finds a match in the database, along with a name, picture, address, and rap sheet.

The officer, fully trusting the accuracy of the technology, takes the results to a magistrate, believing he has met the probable cause standard required for a warrant to search the “suspect’s” home and arrest him.  After the warrant is granted, a search is conducted, an arrest is made, and the “suspect” is in police custody.  Meanwhile, another robbery occurs at a different convenience store, and the same man from the original CCTV video appears on the second CCTV video recovered from the second crime scene.  The police have made a false arrest by relying on a false positive rendered by the facial recognition software.  The man who was arrested sues the police department for violating his Fourth Amendment rights, while the actual suspect in the string of robberies has yet to be identified.
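
To see how such a false positive surfaces as a confident “identification,” consider the following Python sketch of a one-to-many database search.  Everything in it is an assumption made for illustration: the cosine-similarity comparison, the threshold value, and the toy “embeddings” are hypothetical, since actual vendor systems are proprietary and vary.

```python
import math

# Hedged sketch of a one-to-many (identification) search against a
# hypothetical mugshot gallery; real police systems differ in detail.

MATCH_THRESHOLD = 0.95  # hypothetical; in practice set by vendor or agency

def cosine_similarity(a, b):
    """Similarity between two face 'embeddings' (feature vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search_mugshot_database(probe, gallery):
    """Return the best-scoring identity at or above the threshold, else None.

    The danger: a low-quality CCTV still of someone who is NOT in the
    gallery can still score above the threshold against the wrong
    person, and that false positive is reported as a "match".
    """
    best_name = max(gallery, key=lambda name: cosine_similarity(probe, gallery[name]))
    best_score = cosine_similarity(probe, gallery[best_name])
    return (best_name, best_score) if best_score >= MATCH_THRESHOLD else None

# Toy three-dimensional embeddings, for illustration only:
gallery = {"Person 1": [0.9, 0.1, 0.3], "Person 2": [0.2, 0.8, 0.5]}
probe = [0.85, 0.15, 0.35]  # a different person than anyone in the gallery

print(search_mugshot_database(probe, gallery))  # reports a false "match"
```

The sketch returns only a name and a score; it is the officer who treats that score as probable cause, which is exactly how a statistical error hardens into a warrant.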

The aforementioned scenario is just one of many that may occur if any of the nearly 200 FRT algorithms tested by NIST is used by law enforcement in everyday policing.  Before police departments implement this kind of software in their regular investigative work, the underlying algorithms must be subject to oversight, regulation, and regular auditing to ensure that racial bias, already pervasive in policing, does not increase.  Without such safeguards, the courts may be overwhelmed with Fourth Amendment claims, and cases of police targeting minorities may skyrocket.


[1] See Tara Francis Chan, One Chinese City Is Using Facial-Recognition Tech That Can Help Police Detect and Arrest Criminals in as Little as 2 Minutes, Business Insider (Mar. 19, 2018, 9:56 PM), https://www.businessinsider.com/china-guiyang-using-facial-recognition-to-arrest-criminals-2018-3.

[2] See id.

[3] See id.

[4] See Clare Garvie & Laura M. Moy, America Under Watch: Face Surveillance in the United States, Georgetown Law Center on Privacy & Technology (May 16, 2019), https://www.americaunderwatch.com/.

[5] See Andrew Guthrie Ferguson, Illuminating Black Data Policing, 15 Ohio St. J. Crim. L. 503, 503 (2017-2018); see also James O’Neill, How Facial Recognition Makes You Safer, New York Times (June 9, 2019), https://www.nytimes.com/2019/06/09/opinion/facial-recognition-police-new-york-city.html (then-NYPD Police Commissioner describing the efficacy of FRT in NYPD investigations, which led to 998 arrests in 2018 alone).

[6] See O’Neill, supra note 5.

[7] See Asa Fitch, Facial-Recognition Software Suffers From Racial Bias, U.S. Study Finds, Wall Street Journal (Dec. 19, 2019, 9:01 PM), https://www.wsj.com/articles/facial-recognition-software-suffers-from-racial-bias-u-s-study-finds-11576807304.

[8] See id.

[9] See generally Patrick Grother, Mei Ngan & Kayee Hanaoka, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, National Institute of Standards and Technology (Dec. 2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.

[10] See Illinois v. Gates, 462 U.S. 213, 230-31 (1983) (expanding the definition of probable cause to a totality-of-the-circumstances inquiry); Terry v. Ohio, 392 U.S. 1 (1968) (creating the reasonable suspicion standard, which allows police to stop a person when specific and articulable facts, taken together with rational inferences from those facts, reasonably warrant the belief that criminal activity is afoot).

[11] See United States v. Montoya de Hernandez, 473 U.S. 531 (1985).
