How Social Media Platforms Should Work to Stop Racist Content

A new British bill points the way for the US

  • Experts say that hate speech is increasing on social media platforms.
  • The British government is proposing fines for social media companies that don’t limit racist content. 
  • Not everyone agrees that legislation is the best way to police online hate speech.

The UK government is trying to block racist social media content, and some observers say similar measures are needed in the US.

Under a recently proposed bill, social media platforms that fail to stop sexist and racist content could face heavy fines. Facebook, Twitter, and similar sites would also have to give users the option of avoiding content considered harmful. The move comes as Twitter and other websites face increasing scrutiny for hosting racist comments.

"We need policies in place by social media companies that have a no-tolerance policy around racist comments and that can decipher less egregious forms of racism," diversity consultant Kim Crowder told Lifewire in an email interview. "We also need to see creators of color not have their content hidden based on their desire to talk about racism online. Lastly, there needs to be accountability and penalties for those hate-driven accounts."

A Drive to Stop Hate

The UK proposal would regulate platforms in an attempt to halt the spread of online hate. Companies that fail to comply with the requirements could be forced to pay fines of up to 10 percent of global annual revenue.

"The Bill will instead give adults greater control over online posts they may not wish to see on platforms," UK culture secretary Michelle Donelan said in a news release. "If users are likely to encounter certain types of content—such as the glorification of eating disorders, racism, antisemitism or misogyny not meeting the criminal threshold—internet companies will have to offer adults tools to help them avoid it. These could include human moderation, blocking content flagged by other users, or sensitivity and warning screens."

The nonprofit Center for Countering Digital Hate estimates that Facebook, Instagram, TikTok, Twitter, and YouTube fail to act on 84 percent of user reports of clear antisemitic content and 89 percent of anti-Muslim hatred.

"Social media companies are putting profit before people, maximizing the money they make from users like us without doing the bare minimum to keep us safe," the group wrote on its website. "That affects all of us, contributing to problems from racist abuse to dangerous health misinformation to self-harm and eating disorder content that can ruin young people's lives."

The UK bill has been criticized for restricting free speech and has been revised in an attempt to overcome opposition. A new version of the bill drops provisions that would ban "legal but harmful material."

Free or Controlled Speech?

Social media hate speech has become a contentious topic in the US in recent months. Twitter has reportedly seen an increase in racist content since its purchase by tech mogul Elon Musk. The Network Contagion Research Institute, which analyzes social media content, said use of the n-word on the app spiked nearly 500 percent over the 12 hours after Musk's deal was finalized.

The Human Rights Campaign, an LGBTQ civil rights group, expressed concern about Musk's purchase of the social media giant. "Twitter has a right, and a responsibility, to keep its platform from being exploited to fuel a dangerous media environment," the group said in a news release. "This isn't about censorship or discrimination of ideas—it is about what kind of company they want to be and what kind of world they want to shape."

US diversity consultant Kim Clark said in an email interview that hate speech can lead to violence. 

"Free speech is not the same thing as hate speech," Clark added. "People have the freedom to say something. However, if the language is hateful, dehumanizes individuals, or incites violence—especially towards a group of people—it can have consequences. It puts the targets of the speech into dangerous positions. We must recognize that language leads to behavior."

But, Clark said, the UK legislative response to sexist and racist content wouldn't work in the US, adding, "It would just be a band-aid that covers the problem." 

Instead of imposing legislation, Clark said the US should address why people say sexist and racist things on social media in the first place. She pointed to studies showing how media representation shapes the way people perceive different groups.

"What's the benefit of saying them? How do family and friends reward the behavior?" Clark added. "When we address the need people feel to deliver hate speech [then] we can cut it off at the source. Until society learns the benefits of building people up rather than destroying them, we'll stay in this performative cycle."
