5 Algorithmic Bias
“Although the impulse is to believe in the objectivity of the machine, we need to remember that algorithms were built by people” (Chmielinski, qtd. in Head et al. 38).
Overview & Examples
Because we often assume that algorithms are neutral and objective, they can appear more authoritative than human expertise, even when that authority is unearned. Thus, the pervasiveness of algorithms, and their incredible potential to influence our society, politics, institutions, and behavior, has been a source of growing concern.
Algorithmic bias is one of those key concerns. This occurs when algorithms reflect the implicit values of the humans involved in their creation or use, systematically “replicating or even amplifying human biases, particularly those affecting protected groups” (Lee et al.). In search engines, for example, algorithmic bias can create search results that reflect racist, sexist, or other social biases, despite the presumed neutrality of the data. Here are just a few examples of algorithmic bias (Lee et al.):
- An algorithm used by judges to predict whether defendants should be detained or released on bail was found to be biased against African Americans.
- Amazon discontinued a recruiting algorithm after discovering gender bias: it penalized any resume containing the word “women’s,” because it had been trained on resumes historically submitted to Amazon, which came predominantly from white men.
- Princeton University researchers analyzed algorithms trained on text and found that they picked up existing racial and gender biases: European-American names were treated as more pleasant than African-American names, and the words “woman” and “girl” were more likely to be associated with the arts than with science and math (see the sketch after this list).
- Numerous articles have examined the role that YouTube’s recommendation algorithm might play in radicalizing viewers.
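To make the Princeton finding concrete, here is a minimal sketch of how word-association bias can be measured in word embeddings. The vectors below are tiny, made-up illustrations (real embeddings have hundreds of dimensions and are learned from large text corpora); the cosine-similarity probe is the general idea behind studies of this kind, not the researchers’ exact method.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-dimensional "embeddings", illustrative numbers only.
vectors = {
    "woman":   np.array([0.9, 0.1, 0.3]),
    "man":     np.array([0.1, 0.9, 0.3]),
    "arts":    np.array([0.8, 0.2, 0.1]),
    "science": np.array([0.2, 0.8, 0.1]),
}

# If "woman" lands closer to "arts" than to "science" (and the reverse for
# "man"), the embedding has absorbed that association from its training text.
for word in ("woman", "man"):
    for target in ("arts", "science"):
        print(f"{word} ~ {target}: {cosine(vectors[word], vectors[target]):.2f}")
```

In an actual study, researchers compare similarity scores across large, carefully chosen sets of names and attribute words rather than single pairs, but the underlying measurement is the same.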
Challenging the Algorithms of Oppression
Dr. Safiya U. Noble, Associate Professor at UCLA (Departments of Information Studies and African American Studies), is the author of Algorithms of Oppression: How Search Engines Reinforce Racism. She is also Co-Director of the UCLA Center for Critical Internet Inquiry and co-founder of the Information Ethics & Equity Institute. In the video below [3:43], Dr. Noble discusses her findings about algorithmic bias in Google search results, particularly for women of color.
Note: This video is auto-captioned. Accurate captions are available at the Amara version. Use the text transcript if you prefer to read.
Fighting Bias in Algorithms
Joy Buolamwini, MIT researcher, Rhodes Scholar, Fulbright Fellow, poet of code, and founder of the Algorithmic Justice League, found that the algorithms powering facial recognition systems often failed to detect darker-skinned faces because they were trained on data sets that were largely white and male. Now she is committed to fighting bias in machine learning, a phenomenon she calls the “coded gaze.” In the following video [8:44], she explains her work with facial recognition and asks important questions about how algorithms influence critical decisions: Who gets hired or fired? Do you get that loan? Do you get insurance? Are you admitted into the college you wanted to get into? Do you and I pay the same price for the same product purchased on the same platform?
Note: Turn on closed captions with the subtitles button or use the interactive text transcript if you prefer to read.
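The kind of disparity Buolamwini documented is typically uncovered by an audit: run the system on a benchmark labeled by demographic group and compare its accuracy group by group. Here is a minimal sketch of that bookkeeping; the records are hypothetical stand-ins for real audit data.

```python
from collections import defaultdict

# (group, true_label, predicted_label): hypothetical audit records.
results = [
    ("lighter-skinned male",  "male",   "male"),
    ("lighter-skinned male",  "male",   "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),   # an error
    ("darker-skinned female", "female", "male"),   # an error
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in results:
    total[group] += 1
    correct[group] += (truth == prediction)

# Reporting accuracy per group, not just overall, is what exposes the gap.
for group in sorted(total):
    print(f"{group}: {correct[group]}/{total[group]} correct "
          f"({correct[group] / total[group]:.0%})")
```

A system can report high overall accuracy while failing badly on an underrepresented group, which is why per-group evaluation matters.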
Weapons of Math Destruction
Cathy O’Neil has written several books on data science, including Weapons of Math Destruction, and formerly directed the Lede Program in Data Practices at Columbia University’s Graduate School of Journalism. In the following video [13:11], she explains that algorithms are not inherently fair or objective and may in fact “automate the status quo” and “codify” sexism and bigotry. She concludes that these secret “black box” algorithms, created by private companies, can hide ugly truths, often with destructive results.
Note: Turn on closed captions with the subtitles button or use the interactive text transcript if you prefer to read.
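O’Neil’s phrase “automate the status quo” has a concrete mechanical meaning: a model fit to past decisions learns to reproduce the pattern of those decisions, whatever that pattern was. A minimal sketch, with deliberately tiny, hypothetical records:

```python
# Historical hiring records: (qualified, group, hired).
# Hypothetical data in which an equally qualified "B" applicant was rejected.
history = [
    (True,  "A", True),
    (True,  "A", True),
    (True,  "B", False),
    (False, "A", False),
]

# "Training" here is just measuring how often each group was hired before.
groups = {group for _, group, _ in history}
hire_rate = {
    group: sum(hired for _, g, hired in history if g == group)
           / sum(1 for _, g, _ in history if g == group)
    for group in groups
}

# A new, fully qualified applicant from group "B" still scores 0.00,
# because the model has learned the old pattern, not merit.
for group in sorted(hire_rate):
    print(f"group {group}: predicted hire score {hire_rate[group]:.2f}")
```

Real hiring models are far more complex than this, but the failure mode is the same: optimizing for fidelity to history is not the same as optimizing for fairness.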
Sources
“Algorithms of Oppression, Faculty Focus: Safiya Umoja Noble.” YouTube, uploaded by USC Annenberg, 28 Feb. 2018.
“The Era of Blind Faith in Big Data Must End: Cathy O’Neil.” TED. Licensed under CC BY-NC-ND 4.0.
Head, Alison J., Barbara Fister, and Margy MacMillan. “Information Literacy in the Age of Algorithms.” Project Information Literacy, 15 Jan. 2020. Licensed under CC BY-NC-SA 4.0
“How I’m Fighting Bias in Algorithms: Joy Buolamwini.” TED. Licensed under CC BY-NC-ND 4.0.
Lee, Nicole Turner, Paul Resnick, and Genie Barton. “Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms.” Brookings, 22 May 2019.
Text adapted from “Digital Citizenship” by Aloha Sargent and James Glapa-Grossklag for @ONE, licensed under CC BY 4.0.