8 Filter Bubbles
The Problem With Personalization
“I’m more concerned about, like, the larger scale trend of predicting what we want, but then also predicting what we want in ways that push a lot of people towards the same cultural and political endpoint” (Anonymous student qtd. in Head et al. 20).
One significant and widely discussed feature of the Age of Algorithms has been the “personalization” of the information we see online. Algorithms filter, sort, and personalize content in an attempt to show us what is most relevant to our interests, based on behavioral data like our search histories, clicks, views, likes, and location. This type of algorithmic personalization can distort, manipulate, and amplify our own worldviews by creating so-called “filter bubbles”: states of intellectual isolation in which we are rarely exposed to opposing viewpoints and our existing beliefs are continually confirmed.
“Using algorithms to deliver content that we are most likely to enjoy, these platforms reinforce our worldviews and allow us to stay encased in our safe, comfortable echo chambers. … The fundamental problem is that ‘filter bubbles’ worsen polarization by allowing us to live in our own online echo chambers and leaving us with only opinions that validate, rather than challenge, our own ideas” (Wardle and Derakhshan 49).
Roger McNamee, an early Facebook investor-turned-critic, considers filter bubbles to be “the most important tool used by Facebook and Google to hold user attention” because they lead to an “unending stream of posts that confirm each user’s existing beliefs.” The result is polarization: “Everyone sees a different version of the internet tailored to create the illusion that everyone else agrees with them.” Further, this “continuous reinforcement of existing beliefs tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts.”
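To make this feedback loop concrete, here is a deliberately oversimplified sketch in Python. It is purely illustrative, not any platform’s actual ranking code, and every function, field, and topic name in it is invented for the example: posts are scored by how often the user has already clicked on their topic, so familiar topics crowd out everything else.

```python
from collections import Counter

def personalized_feed(candidates, click_history, k=3):
    """Toy engagement-based ranker: score each candidate post by how
    often the user has already clicked on posts about the same topic.
    Real recommender systems use far richer signals (views, likes,
    location, social graph), but the feedback loop is the same."""
    interests = Counter(post["topic"] for post in click_history)
    # Posts on frequently clicked topics rise to the top; topics the
    # user has never clicked score zero and fall out of view.
    ranked = sorted(candidates, key=lambda p: interests[p["topic"]],
                    reverse=True)
    return ranked[:k]

history = [{"topic": "politics"}] * 4 + [{"topic": "sports"}]
candidates = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "science"},  # never clicked, so never surfaced
    {"id": 4, "topic": "sports"},
]
print(personalized_feed(candidates, history))
# Shows both politics posts and the sports post; the science post is
# filtered out before the user ever sees it.
```

The circularity is the point: a topic that is never shown can never be clicked, so with each iteration the feed narrows further and the bubble tightens.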
For an explanation of this problem, watch Eli Pariser’s TED Talk “Beware Online ‘Filter Bubbles’” [8:49], in which he develops the argument of his influential 2011 book, The Filter Bubble.
New Research on Filter Bubbles
A 2018 study called the filter bubble premise into question, finding that exposure to opposing views could actually increase polarization (Bail et al.). Most notably, the study found that “Republicans who followed a liberal Twitter bot became substantially more conservative” (Bail et al. 9216). This suggests that exposure to opposing viewpoints (which are often presented in a divisive and moralistic tone in order to capture our attention) may backfire, creating further polarization by provoking negative or defensive reactions.
A 2017 study examined polarization in the U.S. across different age ranges and, surprisingly, found that “growth in polarization in recent years is largest for the demographic groups least likely to use the internet and social media,” namely those over 75 years old (Boxell et al.; emphasis added). Indeed, in a 2020 study, students expressed concern about older adults’ ability to navigate algorithm-driven systems designed to capture attention, with one student noting that “everyone was so focused on making sure that kids learned that they forgot they also needed to teach grandparents” (Head et al. 26).
This idea may be further illuminated by the results of two 2017 studies, which found that:
- Users of social media and search engines encounter more diverse news and information than non-users, and
- People who are involved in politics online are “more likely to double-check questionable information they find on the internet and social media, including by searching online for additional sources in ways that will pop filter bubbles” (Wardle and Derakhshan 53).
Sources
Bail, Christopher A., et al. “Exposure to Opposing Views on Social Media Can Increase Political Polarization.” PNAS, vol. 115, no. 37, 2018, pp. 9216–21, doi.org/10.1073/pnas.1804840115. Licensed under CC BY-NC-ND 4.0
“Beware Online ‘Filter Bubbles’: Eli Pariser” by TED is licensed under CC BY-NC-ND 4.0
Boxell, Levi, et al. “Is the Internet Causing Political Polarization? Evidence from Demographics.” NBER Working Paper 23258, Mar. 2017, doi.org/10.3386/w23258.
Head, Alison J., et al. “Information Literacy in the Age of Algorithms.” Project Information Literacy, 15 Jan. 2020. Licensed under CC BY-NC-SA 4.0
Wardle, Claire, and Hossein Derakhshan. “Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making.” Council of Europe, 27 Sept. 2017.