Why YouTube’s Removal of Election Fraud Misinformation Is Good for the Community

Key Takeaways

  • YouTube is striking channels and removing videos that allege there was election fraud.
  • Experts applaud the new direction from YouTube, which has struggled with extremist and disinformation content on its platform in the past. 
  • Detractors suggest this lends credence to accusations of conservative bias and could lead to a slippery slope for unpopular content.

Users might see fewer videos showing up on YouTube spreading disinformation about the 2020 presidential election thanks to YouTube’s new policy. Experts say it’s a win for democracy.

The video-streaming giant made ripples across the political world with an announcement on December 9 signaling it would be removing content that violates policies against misleading viewers about the presidential election. This adjustment comes on the heels of executives making another decision to limit the spread of “harmful misinformation” that could cause real-world harm here and abroad.  

“We also work to make sure that the line between what is removed and what is allowed is drawn in the right place,” the blog post reads. “...We will start removing any piece of content uploaded today (or anytime after) that misleads people by alleging that widespread fraud or errors changed the outcome of the 2020 U.S. presidential election, in line with our approach towards historical U.S. presidential elections.”

YouTube’s New Direction

New policies from social media networks like Twitter and Facebook this year have come down hard on the spread of misinformation and disinformation, imposing new guidelines relating to election information and false claims about the severity and spread of the coronavirus. 

These platforms have had to make the hard decision to take a stand after years of dodging responsibility for misinformation posted to their sites. At the same time, they've had to wrestle with potentially angering users and isolating communities that they've spent years cultivating. 

"The bias that these social media sites ... have is a bias toward engagement."

YouTube’s specific policy shift comes after Democratic leaders urged the company to take more aggressive action against information shared on its website, and could put it in the crosshairs of a bitter political debate. Tech companies have come under fire for anti-conservative bias in the past, even prompting congressional hearings on how they moderate their platforms.

Irina Raicu, director of the Internet Ethics Program at Santa Clara University's Markkula Center for Applied Ethics, said public pressure can go a long way. Users citing harm from the company’s lax moderation policies and internal pressure caused the Google-owned video-streaming platform to do a 180 on years of indecision. As for bias, Raicu said the accusations of anti-conservatism in content moderation are unfounded.

The data seems to support her claim. Data analytics firm CrowdTangle found conservative content on Facebook, the world’s largest social media platform, actually outperforms other content, including legacy media outlets. The true bias in social media is not right or left—it is polarization.


“The reality is, the bias that these social media sites do have is a bias toward engagement,” Raicu said. “When something is controversial, you’re more likely to spend more time on the service to look at other videos and respond. It tends to amplify extreme voices and extreme points of view and make everybody mad at each other.”

Accusations of misuse and bias following the new policy change are overblown, she said. Content moderation is needed now more than ever, as objective reality has deteriorated to the point of becoming an opinion. Moderation inevitably comes with errors, but YouTube’s appeal process for inappropriately flagged and removed videos could be the remedy.

"It’s not like there is no harm to speech."

The hope among ethicists like Raicu is for YouTube to neutralize the sway of extremist content that has flourished on its platform and caused a breakdown in conventional knowledge. Holocaust denialism and the flat Earth theory previously found community on YouTube before a 2019 reversal banning the former. The tech company deciding to be more proactive is likely to clamp down on the spread of similar extremist content going forward, creating a more well-rounded experience for users.

Slippery Slope?

While experts applaud YouTube’s decision to take a more active role in moderating content shared on the platform, users are left to pick up the pieces from the years when YouTube did not police its content this way. 

YouTube made changes to its hate speech guidelines a week prior to rolling out the election-related policy changes to address this history. With both of these moves, conservatives say they are next on the chopping block. 

Users from popular conservative corners of the website, such as the controversial channel Liberty Hangout, have announced their departure from the platform following the policy change. The channel’s creator, Kaitlin Bennett, said the move creates a slippery slope for conservative content creators and puts their views in danger of being suppressed. Despite the announcement, however, the channel has continued to post videos on YouTube, most recently on Wednesday.

“There’s always going to be a slippery slope. Whenever you try to do some form of content moderation there’s going to be some errors,” Raicu said. “The harm of people’s videos getting taken down and then, potentially, being able to put them back up later is still less than the harm we’re seeing from the unfettered spread of misinformation and direct harassment. It’s not like there is no harm to speech.”
