Research | Fall 2023 Issue

Is AI racist?

How technology perpetuates racism and whether anything can be done about it

By Jean Merl

AS A GRADUATE STUDENT AT THE UNIVERSITY OF ILLINOIS library school more than a decade ago, Safiya Noble was stunned to see that so many of her peers regarded the internet as the “new public library.” Noble had worked in marketing and advertising as search engines were surging in popularity. She saw things differently.

“I had just left working in advertising, where we’re completely trying to game the system to get our clients on the front page of any given search engine,” Noble said in a recent interview. “I’m relating to [search engines] like they’re media distribution channels for television and radio and print. I’m not relating to them like they are big knowledge epicenters, like some of my professors are.”

Then came the 2011 publication of Siva Vaidhyanathan’s watershed book, The Googlization of Everything (And Why We Should Worry). It confirmed Noble’s suspicions. And it launched her on a path to uncover the hidden biases that taint internet searches and undermine the integrity of the quickly emerging field of artificial intelligence.

Now at UCLA, Safiya Umoja Noble is director of the university’s Center on Race and Digital Justice, a co-founder of the Center for Critical Internet Inquiry’s Minderoo Initiative on Tech & Power (with Sarah T. Roberts) and interim director of UCLA DataX, which seeks to broaden the university’s expertise in data science. She is an interdisciplinary professor in the departments of gender studies, African American studies and information studies.

She is at the forefront of a growing movement to expose and mitigate internet biases. Her research centers on digital media and its impact on society. Her TV and radio appearances include NPR (with a recent interview on how artificial intelligence could perpetuate racism, sexism, and other biases), ABC News, NBC News, and CNN. She has been featured in The New York Times, The Guardian, Vogue, Rolling Stone, Fortune and Ms., among others. Much of her work can be found on her website, safiyaunoble.com.

NOBLE GREW UP IN FRESNO, THE DAUGHTER OF A WHITE mother and Black father, and earned a bachelor’s degree at California State University, Fresno. A family illness forced her to defer her dream of an academic career; she dropped out of graduate school and went to work. After being laid off ahead of the Great Recession of 2008, she married, moved to the Midwest, and resumed her academic studies.

When Noble began her research about a decade ago, she met a lot of resistance in the form of the prevailing notion that algorithms — the formulae behind search engines and many social media platforms — are based on mathematics and are therefore objective and unbiased. But Noble demonstrated time and again that the builders of algorithms are human beings who bake their own biases and intentions into the job.

“The computing industry came to be dominated and controlled by White men,” Noble said in an interview with Vogue in 2021, shortly after she won a MacArthur “genius” grant. “They reconsolidated and reinscribed their power” via such technologies as Google search.

Her first jarring brush with such bias had come during her graduate school days, when she searched on “Black girls” to find activities that would interest her stepdaughter and friends. Up popped links to pornography involving Black women. Other links asked why Black women were so “angry,” “loud,” “mean,” and “lazy,” or ascribed other negative traits to them. Noble’s Ph.D. dissertation grew out of that experience and led to the 2018 publication of her widely acclaimed book, Algorithms of Oppression: How Search Engines Reinforce Racism.

Since then, Noble and others have gained traction in their campaign to make the internet less biased and more inclusive. Now she is raising some of these concerns about artificial intelligence and its bots, which can influence every aspect of modern life, including healthcare, finance and loan decisions, the justice system, news, and entertainment.

Initial hostility

AROUND 2012, WHEN NOBLE BEGAN ARGUING HER POINT at conferences, “People were absolutely hostile … to the notion that algorithms could be racially biased,” she said. “People believed that algorithms were purely math, and that math could not discriminate. If you were finding porn, it was your fault. If you were finding sexism,” it certainly wasn’t a coding problem.

A decade later, however, things have changed, and those in computer science and computer engineering are striving to improve their work, thanks mainly to those who exposed the biases, Noble said. She added that many of them were women and/or people of color who risked their jobs with internet companies to expose the wrongs.

Noble has long called for regulation of the technology industry, and she welcomes a growing consensus that some regulation could be effective, an idea that once was almost taboo.

After a decade of pressing, mainly by scholars and journalists, “We now have the ability to talk about regulation in the United States,” Noble said. “And now the tech leaders themselves are calling for regulation because they know their systems can be dangerous, and they themselves don’t know how to fix it.” They are looking to others, namely Congress, she said, to solve the problem.

While she welcomes efforts by some tech companies to respond to public pressure by self-regulating, she said such a solution feels “a little bit like the fox guarding the henhouse.”

Government regulation, not only in the United States but also internationally, she said, is “very important.”

Ideas for regulation

NOBLE WANTS TO OUTLAW DATA BROKERING, THE PRACTICE of collecting information about users, aggregating and enriching it, and selling it to clients or customers. She also calls for regulation of predictive analytics, which uses historical and other data to make predictions about future outcomes. Companies use predictive analytics to identify risks and opportunities, say, in making loan or investment decisions.

She and other experts also want Congress, when formulating regulations, to listen to a wide range of experts, not predominantly the technology companies.

But even if Congress taps a range of sources and approaches these questions intelligently, it will not be an easy task. Stripping AI and algorithms of racism and sexism through regulation, for instance, would likely run into First Amendment concerns. Expressing racism, for example, is unsavory but often protected by the Constitution. When that speech becomes action, however, legal protections evaporate. It is legal to be a racist but illegal to deny someone a loan or benefit based on race or gender, or to refuse them an apartment or a job. Discrimination is illegal, even though racism, per se, is not.

Many of those concerned about discrimination are looking to strong monitoring and enforcement of existing laws. The U.S. Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission, the Justice Department’s Civil Rights Division and the Federal Trade Commission in April issued a joint statement on “Enforcement Efforts Against Discrimination and Bias in Automated Systems.”

The agencies “reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation,” the statement said in part. “We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”

Toward critical engagement

IN A 2017 ARTICLE ON HER WEBSITE, NOBLE CALLED THE potential harms of artificial intelligence “the next human rights issue of the 21st Century.”

“The greater challenge before us will not be access to the internet, but freedom from machine-based decision making and control over our lives,” she wrote.

“The role of algorithms in shaping discourse about people and communities, or in everyday decisions like access to credit, mortgages or school lottery systems is only the beginning.”

She called for “more thoughtful and rigorous approaches” to the use of artificial intelligence. “We need to engage more critically in how these technologies will only further discrimination and oppression around the world.”

During the current academic year, Noble, who is on sabbatical from teaching, is concentrating on her role as inaugural interim director of DataX, which is UCLA’s initiative to expand student opportunities to work with data and help researchers incorporate data analysis into their work.

Noble said part of her role is breaking barriers to collaborative use of the internet in three key areas: fundamental data science, applied and innovative parts of data science, and data justice issues — “getting faculty to work and talk and collaborate across those areas.”

“That’s really been my priority, to support that,” she added.

On many days, Noble said, she wakes up wishing that she was wrong about the harms and abuses she has found in her research.

“You don’t want these terrible things to be happening,” she said. Yet she is “grateful that there is more visibility to all of the women who are trying to be an early warning detection system” by exposing the harms they have found, often in their own employment.

And she is grateful for a growing awareness of digital harms and a willingness to do something about them.

“I do feel we are reaching a tipping point,” Noble said, an awareness that “something is awry and we can do something about it.”

Jean Merl

Jean Merl worked as an editor and reporter for 37 years at the Los Angeles Times, specializing in local and state government and politics and K-12 education. Since retiring from the Times, she has contributed to the publications of several UCLA entities, including the School of Nursing and the Graduate School of Education and Information Studies. Merl is also a proud Bruin (MA, 1972).
