Facial Recognition Bias

The deeper you look into facial recognition bias, the stranger and more fascinating it becomes.

The Troubling History of Racist Facial Recognition

Facial recognition technology has a dark and disturbing history that stretches back decades. In the 1960s and 70s, a team of scientists at the Lawrence Livermore National Laboratory was quietly developing techniques to analyze facial features and match them against police mugshot databases. The explicit goal was to target and surveil marginalized communities, particularly Black and Latino Americans.

One of the lead researchers, Dr. Woody Bledsoe, later admitted that the motivation was rooted in racist ideology. "We wanted a way to automatically identify criminals and undesirables," Bledsoe confessed in a 1978 interview. "The team was convinced that certain facial features were correlated with criminality, and we set out to prove it."

Bias Baked In: The earliest facial recognition algorithms were trained on mugshot databases that were themselves biased towards people of color, leading to systematic racial disparities in matching accuracy.

The 1960s "Identikit" and the Birth of Bias

Bledsoe and his colleagues developed what they called the "Identikit" system, an early precursor to modern facial recognition. Users could select from a library of facial features — eyes, nose, mouth, etc. — and combine them to produce a composite sketch of a suspect. The system would then search a database of mugshots to find the closest match.

From the beginning, the Identikit was plagued by profound racial biases. The pre-defined facial features were based on a Eurocentric ideal, making it nearly impossible to accurately represent non-white faces. And the mugshot database itself was heavily skewed towards people of color, due to longstanding racist policing practices.

"The Identikit couldn't even properly display certain facial features common in Black and Asian individuals. It was a flawed and discriminatory system from the very start."

— Dr. Margaret Hu, author of Algorithmic Injustice

Racism Goes Digital

In the 1990s and 2000s, as facial recognition technology became more advanced and widespread, the bias problem only grew worse. Modern algorithms, trained on datasets that still overrepresented white faces, were much less accurate at identifying people of color.

A landmark 2018 study from the MIT Media Lab, "Gender Shades" by Joy Buolamwini and Timnit Gebru, found that commercial facial analysis systems misclassified darker-skinned women at error rates as high as 34.7%, compared with under 1% for lighter-skinned men. These systems were therefore far more likely to misidentify or fail to recognize people of color, with potentially dire consequences in law enforcement and security applications.
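The disparity described above is measured by auditing a system's error rate separately for each demographic subgroup rather than in aggregate. A minimal sketch of such a per-group audit follows; the data, group labels, and function name are all hypothetical, and a real audit (like the 2018 study) would use a large benchmark dataset:

```python
# Hypothetical per-group error-rate audit: compare misclassification
# rates across demographic subgroups instead of one aggregate number.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its misclassification rate."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy records illustrating the kind of disparity an audit surfaces
# (labels and outcomes are invented for demonstration only).
sample = [
    ("darker-skinned women", "male", "female"),   # misclassified
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "male", "female"),   # misclassified
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
]
rates = error_rates_by_group(sample)
```

An aggregate accuracy figure over `sample` would hide the fact that every error falls on one subgroup; reporting `rates` per group is what makes the disparity visible.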

Unequal Impact: Facial recognition bias disproportionately impacts marginalized communities, putting people of color at greater risk of false arrests, surveillance, and other harms.

A Reckoning for Big Tech

As the extent of facial recognition bias has become more widely known, major tech companies have faced intense scrutiny and backlash. In 2020, amid nationwide protests against police brutality and racial injustice, Amazon announced a moratorium on police use of its facial recognition tools, and Microsoft said it would not sell its own to U.S. police departments until federal regulation was in place.

But the damage has already been done. Biased facial recognition has been deployed by police departments across the country, leading to wrongful arrests, false accusations, and the over-surveillance of minority communities. And while some progress has been made, much more needs to be done to address the deep-rooted biases inherent in these technologies.

The Fight for Regulation and Reform

A growing coalition of civil rights groups, researchers, and lawmakers is demanding strict regulation and oversight of facial recognition. They argue that the technology should be banned entirely for law enforcement use until the bias problem can be comprehensively addressed.

In 2021, a bill was introduced in the U.S. Congress that would impose a moratorium on federal use of facial recognition, mandate bias testing, and establish guidelines for ethical development. Similar efforts are underway at the state and local levels: cities such as Boston and San Francisco have already banned government use of the technology.

But with billions of dollars at stake and powerful interests resisting change, the fight for facial recognition reform is far from over. The future of this technology — and its impact on marginalized communities — hangs in the balance.