How Algorithmic Bias Can Hurt Teens

Developing brains are at risk

Key Takeaways

  • Algorithmic bias is harmful to teenagers who spend a lot of time on the internet, experts say.
  • Twitter users recently encountered a problem in which Black faces were culled in favor of white ones.
  • Teenagers’ developing brains may be particularly susceptible to the damaging effects of algorithmic bias, researchers say.
[Image: Young woman looking down while jumping on broken pink block stacks against white background. Klaus Vedfelt / Getty Images]

The prejudice baked into some technology, known as algorithmic bias, can be harmful to many groups, but experts say it's particularly damaging to teens.

Algorithmic bias, when computer systems show prejudiced results, is a growing problem. Twitter users recently found an example of bias on the platform when an image-detection algorithm that crops photos was cutting out Black faces in favor of white ones. The company apologized for the issue, but has not yet released a fix. It’s an example of the bias that teenagers face when they go online, which they do more than any other age group, experts say.
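The cropping problem described above stems from a common technique called saliency-based cropping: a model scores regions of a photo for visual "interest," and the preview keeps only the highest-scoring region. A minimal sketch of that logic is below; the function name, region labels, and scores are invented for illustration and do not come from Twitter's actual system.

```python
# Hypothetical sketch of saliency-based auto-cropping. A trained model
# assigns each region of a photo a saliency score; the preview crop
# keeps only the top-scoring region and discards everything else.

def crop_to_most_salient(regions):
    """Return the label of the region with the highest saliency score.

    `regions` maps a region label to the score a model assigned it.
    Only the winner survives the crop; every other region is cut
    out of the preview entirely.
    """
    return max(regions, key=regions.get)

# If the model's training data under-represents certain faces, it may
# systematically score them lower, so the crop excludes them every time.
preview = crop_to_most_salient({"face_a": 0.91, "face_b": 0.47})
print(preview)  # only the higher-scored face appears in the preview
```

The key point is that the bias is not in the cropping code itself, which just takes a maximum, but in the scores the upstream model produces, which reflect whatever imbalances existed in its training data.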

"Most teens are unaware that social media companies have them in place to promote specific content that they think users will like [in order] to get them to stay as long as possible on the platform," Dr. Mai-Ly Nguyen Steers, assistant professor in the School of Nursing at Duquesne University who studies social media usage among adolescents/college students, said in an email interview.

"Even if there is some level of consciousness about the algorithm, the effect of not getting enough likes and comments is still powerful and can affect teens' self-esteem," added Steers.

Developing Brains

Algorithmic bias may affect teens in unforeseen ways since their prefrontal cortex is still developing, Mikaela Pisani, Chief Data Scientist at Rootstrap, explained in an email interview.

"The effect of not getting enough likes and comments is still powerful and can affect teens' self-esteem."

"Teens are especially vulnerable to the phenomenon of the 'Social Factory', where algorithms create societal clusters on online platforms, leading to anxiety and depression if the teen's needs of social approval are not met," said Pisani. "Algorithms simplify based on previous imperfect data—leading to an overrepresentation of stereotypes at the expense of more nuanced approaches to identity formation.

"Taking the broader viewpoint, we are also left to question, as a society, if we want algorithms shaping our teens’ journeys into adulthood, and does this system even support rather than stifle individual personal growth?"

Because of these problems, there’s a growing need to keep teenagers in mind when designing algorithms, experts say. 

"Based on input from developmental specialists, data scientists, and youth advocates, 21st-century policies around data privacy and algorithmic design could also be constructed with adolescents’ particular needs in mind," Avriel Epps-Darling, a doctoral student at Harvard, wrote recently. "If we instead continue to downplay or ignore the ways that teens are vulnerable to algorithmic racism, the harms are likely to reverberate through generations to come."

Combating Bias

Until there is a solution, some researchers are trying to find ways to lessen the damage done to young people by biased algorithms. 

"Interventions have been focused on making teens recognize their social media patterns are negatively impacting their mental health and trying to come up with strategies to mitigate that (e.g., reduced social media use)," Steers said.

"Some of the college students we have interviewed have indicated they feel compelled to generate content to remain "relevant," even if they do not want to go out or post," she continued. "However, they feel they need to generate content to maintain their connections with their followers or friends."

The ultimate answer could be removing human bias from computers. But since programmers are only human, that's a tough challenge, experts say. 

One possible solution is to develop computers that are decentralized and programmed to forget things they have learned, says John Suit, Chief Technology Officer at robotics firm KODA.

"Through a decentralized network, data, and the analytics of that data, are being compiled and analyzed from multiple points," Suit said in an email interview. "Data is being collected and processed not from a single AI mind processing within the limits of its algorithm, but hundreds or even thousands.

"As that data is collected and analyzed, old "conclusions" or superfluous data is forgotten. Through this system, an algorithm that may have started with bias will eventually correct and replace that bias if it proves to be wrong."

While bias may be an age-old problem, there may be ways to combat it, at least online. Designing computers that shed our prejudices is the first step.