The Rise of Facial Recognition Technology and Its Impact on Communities of Color
Governments and private companies have a long history of collecting data from civilians, often justifying the resulting loss of privacy in the name of national security, economic stability, or other societal benefits. These trade-offs, however, do not affect all individuals equally: surveillance and data collection have disproportionately affected communities of color under both past and present political regimes.
From the historical surveillance of civil rights leaders by the Federal Bureau of Investigation (FBI) to the current misuse of facial recognition technologies, surveillance patterns often reflect existing societal biases and reinforce vicious cycles of discrimination. Facial recognition and other surveillance technologies also enable more precise discrimination, especially as law enforcement agencies continue to make ill-informed predictive decisions about arrest and detainment that disproportionately impact marginalized populations.
The oversurveillance of communities of color dates back decades to the civil rights movement and beyond. During the 1950s and 1960s, the FBI tracked Martin Luther King, Jr., Malcolm X, and other civil rights activists through its Racial Matters and COINTELPRO programs, without clear guardrails to prevent the agency from collecting intimate details about home life and relationships that were unrelated to law enforcement.
More recently, the Black Lives Matter (BLM) movement, initially sparked in 2013 after George Zimmerman was acquitted in the killing of 17-year-old Trayvon Martin, has highlighted racial biases in policing that disproportionately lead to unwarranted deaths, improper arrests, and the excessive use of force against Black individuals. Over the years, the government's response to public protests over egregious policing patterns has raised concerns about the appropriate use of surveillance, especially when it is focused primarily on communities of color.
In 2015, the Baltimore Police Department reportedly used aerial surveillance, location tracking, and facial recognition to identify individuals who publicly protested the death of Freddie Gray. Similarly, after George Floyd was murdered in 2020, the U.S. Department of Homeland Security (DHS) deployed drones and helicopters to surveil the subsequent protests in at least 15 cities.
The Private Sector’s Role in Enhancing Surveillance Capabilities
Facial recognition has become a commonplace tool for law enforcement officers at both the federal and municipal levels. In 2021, the Government Accountability Office (GAO) surveyed 42 federal agencies that employ law enforcement officers and found that 20 of them, nearly half, used facial recognition technology.
On the procurement side, Clearview AI is one of the more prominent commercial providers of facial recognition technology (FRT) to law enforcement agencies. Since 2017, it has scraped billions of publicly available images from websites like YouTube and Facebook, and it enables customers to upload photos of individuals and automatically match them against other images and sources in its database. As of 2021, the private startup had partnered with over 3,100 federal and local law enforcement agencies to identify people outside the scope of government databases.
Another example is Vigilant Solutions, which has captured billions of license plate images, along with associated location data, from cars parked outside homes, stores, and office buildings, and which had sold access to its databases to approximately 3,000 local law enforcement agencies as of 2016. Vigilant also markets facial recognition products such as FaceSearch to federal, state, and local law enforcement agencies.
A third company, ODIN Intelligence, partners with police departments and local government agencies to maintain a database of individuals experiencing homelessness, using facial recognition to identify them and search for sensitive personal information such as age, arrest history, temporary housing history, and known associates.
In response to privacy and ethical concerns, and after the protests over George Floyd’s murder in 2020, some technology companies, including Amazon, Microsoft, and IBM, pledged to either temporarily or permanently stop selling facial recognition technologies to law enforcement agencies. However, voluntary and highly selective corporate moratoriums are insufficient to protect privacy, since they do not stop government agencies from procuring facial recognition software from other private companies.
The Limitations of Facial Recognition Technology and Its Disproportionate Impact on Communities of Color
Mass surveillance affects all Americans through a wide suite of technologies, but facial recognition, now one of the most consequential and commonly used of these tools, poses special risks of disparate impact for historically marginalized communities.
In December 2020, the New York Times reported that Nijeer Parks, Robert Williams, and Michael Oliver, all Black men, were wrongfully arrested due to erroneous matches by facial recognition programs. Studies demonstrate that these inaccuracies are systemic: in February 2018, MIT researcher Joy Buolamwini and then-Microsoft researcher Timnit Gebru published an analysis of three commercial gender-classification algorithms developed by Microsoft, Face++, and IBM, finding that images of darker-skinned women were misclassified at rates of 20.8% to 34.7%, compared with error rates of 0.0% to 0.8% for lighter-skinned men.
Buolamwini and Gebru also found skew in widely used facial analysis datasets: 86.2%, 79.6%, and 53.6% of the images in the Adience, IJB-A, and PPB datasets, respectively, contained lighter-skinned individuals. In December 2019, the National Institute of Standards and Technology (NIST) published a study of 189 commercial facial recognition algorithms, finding that algorithms developed in the United States were significantly more likely to return false positives or false negatives for Black, Asian, and Native American individuals than for white individuals.
When disparate accuracy rates in facial recognition technology intersect with biased policing practices, Black and other people of color face a greater risk of being misidentified for crimes they have no connection to.
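The compounding effect of unequal error rates can be illustrated with back-of-the-envelope arithmetic. The sketch below is purely hypothetical: the gallery size and per-comparison false-match rates are invented numbers, chosen only to mirror the order-of-magnitude disparities reported in the studies above, not figures from any real system.

```python
# Illustrative model: expected false matches when a probe photo is searched
# against a large gallery. All numbers are hypothetical, chosen only to echo
# the disparities reported by the Gender Shades and NIST studies.

GALLERY_SIZE = 1_000_000  # hypothetical size of a law enforcement database

# Hypothetical per-comparison false-match rates for two demographic groups,
# with the second an order of magnitude higher than the first.
FALSE_MATCH_RATE = {
    "lighter-skinned men": 0.00001,   # 0.001% per comparison
    "darker-skinned women": 0.0001,   # 0.01% per comparison (10x higher)
}

def expected_false_matches(rate: float, gallery_size: int) -> float:
    """Expected number of spurious candidate matches for one search."""
    return rate * gallery_size

for group, rate in FALSE_MATCH_RATE.items():
    candidates = expected_false_matches(rate, GALLERY_SIZE)
    print(f"{group}: ~{candidates:.0f} false candidates per search")
```

Even under these toy assumptions, the group with the higher error rate generates proportionally more spurious candidates on every search, and each spurious candidate is a person who may then face investigative scrutiny for a crime they have no connection to.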
Balancing Privacy and Public Safety: The Need for Comprehensive Legislation
The U.S. government has long acknowledged that surveillance cannot be unlimited: as a matter of fundamental rights, there must be safeguards against privacy abuses by the government and private entities. To that end, federal, state, and local governments have enshrined privacy values into law, in certain contexts, through layers of constitutional principles, limited statutes, and court cases.
However, new technology significantly shifts the traditional balance between surveillance and civil liberties, and the existing patchwork of laws may not be enough to prevent the risks stemming from facial recognition and other technologies. As such, it is necessary to take stock of existing privacy safeguards and identify areas of improvement.
At the federal level, the Electronic Communications Privacy Act (ECPA) and various executive orders provide some protections against government surveillance. However, the ECPA contains provisions that allow law enforcement to access emails and customer records without a warrant in certain contexts.
At least seven states, including Virginia and Maine, and roughly 20 municipalities, such as Boston and San Francisco, have established some limitations on government use of facial recognition in certain contexts. For instance, Maine enacted a law in 2021 that generally prohibits government use of facial recognition except in specific cases (e.g., “serious” crimes, identification of missing or deceased individuals, and fraud prevention).
Yet state and local regulations lack uniformity across the country, and the majority of municipalities have no specific legal restrictions on government use of facial recognition. Additionally, in the absence of a comprehensive nationwide data privacy law, many companies face few legal limitations on how they collect, process, and transfer personal information, allowing Clearview and other companies to gather data on millions of people without giving those individuals clear means to access or delete their images, and with few safeguards for security, algorithmic bias, and transparency.
The Path Forward: Strengthening Privacy Protections and Ensuring Algorithmic Accountability
To reduce the potential for emerging technologies to replicate historical biases in law enforcement, we must address both the public and private sector’s role in enhancing surveillance capabilities.
At the federal level, the executive branch can take steps to evaluate its own use of artificial intelligence and the equitable distribution of public services, including heightened scrutiny of facial recognition programs and of relationships with geolocation data brokers. Legislators have also introduced several bills that propose new guardrails for executive agencies that conduct surveillance, such as prohibiting federal law enforcement officers from deploying facial recognition in body cameras or patrol vehicle cameras.
However, federal law alone is insufficient, as state and local governments have jurisdiction over policing in their areas. As such, more state and local governments and police departments should consider measures to specify the contexts in which it is appropriate to use facial recognition and the necessary processes to do so (e.g., with a probable cause warrant).
Crucially, Congress needs to pass a comprehensive federal privacy law that regulates the data practices of private companies. Such legislation could introduce requirements for businesses to allow individuals to access and delete personal information, limit data collection and retention, and mandate audits for algorithmic bias and disparate impact. By governing the private sector’s data practices, federal privacy law would have indirect yet significant impacts on government surveillance capabilities.
Ultimately, balancing privacy and public safety requires a multipronged approach. Strengthening the transparency, regulation, auditing, and explanation of how facial recognition is used in individual contexts is essential to ensuring accountability and mitigating disproportionate impacts on marginalized communities. As technology continues to evolve, we must remain vigilant in protecting civil liberties while also addressing legitimate security concerns.
Conclusion
The rise of facial recognition technology has significantly shifted the traditional balance between surveillance and civil liberties. While governments and law enforcement agencies have justified the use of these technologies in the name of public safety, the disproportionate impact on communities of color cannot be ignored.
Addressing this complex issue requires a comprehensive approach that strengthens privacy protections, ensures algorithmic accountability, and enhances transparency in the development and deployment of facial recognition systems. By pursuing a combination of federal, state, and local legislation, as well as robust impact assessments and auditing processes, we can work towards a future where the benefits of technology are equitably distributed and the civil liberties of all citizens are protected.
As an IT professional, I urge policymakers, technology companies, and law enforcement agencies to prioritize these critical issues and collaborate with civil society to find solutions that balance security and privacy. Only through such a holistic and inclusive approach can we ensure that the rise of facial recognition technology does not come at the expense of the fundamental rights and freedoms of marginalized communities.