Racial bias in AI should be the immediate concern


Bias in AI already hurts people of colour in ways big and small. Yet far more attention is paid to AI experts' warnings about hypothetical future risks.

by Gideon Christian. Originally published on Policy Options
December 11, 2024

Jazmin Evans, a Black PhD student in the United States, experienced a four-year delay in receiving a kidney transplant after being put on a wait list much later than she should have been.

This was caused by a racially biased algorithm used in medical-risk calculations. Some artificial intelligence (AI) tools in the medical field apply “race norming” to risk assessments, adjusting calculations based on race, often to the detriment of patients of colour. Many health-care professionals are unaware of the racial biases ingrained in these systems.

Racial bias in medical AI is one of many AI problems affecting people of colour. But detecting racial bias in AI tools is not rocket science.

A critical examination of many sectors deploying this technology is likely to uncover such biases. The growing adoption of AI underscores the urgent need to examine its impact on people of colour.

Across sectors, bias abounds

In law enforcement, predictive policing involves the use of AI to predict high-risk areas. This approach uses data from historically over-policed minority communities, which results in a biased representation of these areas as prone to crime.

In the criminal justice system, AI tools used to predict the chance of reoffending (recidivism) have been shown to over-predict risk for Black defendants compared with their white counterparts. Similarly, AI facial recognition technology used in law enforcement has shown high error rates in accurately identifying people of colour.

In the education sector, AI facial recognition used for exam proctoring during the COVID-19 pandemic failed to accurately identify darker-skinned students, which led to increased stress and unfair testing conditions for them.

In employment and human resources, AI tools such as résumé-screening software and interview bots used to streamline recruitment have been shown to amplify racial biases. For instance, résumé-screening software trained on data from predominantly white applicants may inadvertently exclude minority candidates through racial proxies such as names, affiliations or postal codes.

In the financial sector, AI models for assessing creditworthiness can discriminate against racial minorities because they have been trained on historical data shaped by discriminatory practices such as redlining.

A 2021 investigation by the U.S. non-profit newsroom The Markup found that Black loan applicants were 80 per cent more likely than comparable white applicants to be denied a mortgage.

AI tools built on such data will subtly perpetuate discriminatory outcomes. Even without explicit racial data in their training, these tools can still learn racial bias through clues such as names and postal codes. This can perpetuate economic inequality by affecting loan-approval rates and terms.

Calvin Lawrence, an AI software engineer at IBM, describes in his book how he tried to use an AI-powered soap dispenser in a public washroom. It failed to work because the device had been trained on predominantly white skin tones and could not recognize his darker skin.

This example underscores how deeply ingrained racial bias in AI can affect even the most mundane aspects of daily life for people of colour.

A troubling disparity

For years, researchers Timnit Gebru, Joy Buolamwini, Rumman Chowdhury and Seeta Peña Gangadharan have warned about racial bias in AI. Yet the concerns of these women of colour have struggled to gain significant attention.

In contrast, when several wealthy tech executives — including former Google executive and recent Nobel Prize winner Geoffrey Hinton — published open letters in the summer of 2023 warning of AI’s potential global threats, their concerns received significant media attention.

In an interview with CNN, Hinton downplayed Gebru’s concerns about AI bias, which she raised before being fired from Google. Hinton suggested that Gebru’s concerns were less existential than those raised in his group’s letter.

This highlights a troubling disparity: the immediate risks AI poses to people of colour often receive less attention than the theoretical threats emphasized by some of AI’s top researchers.

While it is important to consider AI’s future risks, we must not ignore the immediate and real threat it poses to people of colour. As Lawrence noted in his book: “(W)hen (AI) tech goes wrong, it often goes terribly wrong for people of colour.”

A “tech civil-rights movement”

Canada’s House of Commons is considering Bill C-27, a draft law that aims in part to regulate AI in Canada. It is concerning that the debate is progressing without any robust discussion on how to address racial bias in AI.

This oversight is unacceptable given the potential impact of these technologies on racial equity and fairness.

The time is ripe for a “technological civil-rights movement” dedicated to advocating for the ethical development and deployment of artificial intelligence, ensuring that it promotes racial justice rather than undermining it.

It is imperative that we critically examine the technologies we adopt and their effect on society.

Canada has made significant efforts to address past racial injustices such as slavery and segregation. The lessons of that past must guide our path forward.

We must ensure that the digital future we build today is inclusive, equitable and reflective of our highest aspirations as Canadians.

This article first appeared on Policy Options and is republished here under a Creative Commons license.