Social Media May Never Stop Fake News in Elections

Misinformation and fake news spread far too fast

Key Takeaways

  • Disinformation campaigns spread on social media networks on Election Day and in the days after, despite platforms' efforts to prepare for them. 
  • False and misleading news reached thousands of users on social media. 
  • Experts say as long as we have social media, we can expect disinformation campaigns, especially during major events like an election.
People using smartphones while standing in line to vote.
SDI Production / Getty Images 

Election Day and the days that followed saw misinformation and false news spread across social media networks, and experts say we are doomed to more slip-ups like these as long as social media is around. 

From YouTube to Instagram and hashtags spread across Facebook and Twitter, social media networks have been busy battling disinformation campaigns this week. Despite safeguards and policies put into place by social media companies—everything from labeling misinformation to banning deepfakes and ramping up content moderation—disinformation still slipped through the cracks. 

"Content moderation steps are working to an extent in opposing some friction, but this is part of a new dynamic we’re going to face as long as people use social media," said Emerson Brooking, a Resident Fellow at the Atlantic Council's Digital Forensic Research Lab, during a virtual press briefing for the Election Integrity Partnership

What Went Wrong? 

Even before the ballot counting started, disinformation and false news popped up across social media platforms. Although the situation is nowhere near as bad as in 2016, the disinformation being spread has resulted in false claims about the legitimacy of the election and the democratic process. 

Most notably, on Election Day, some YouTube channels live-streamed fake election results, according to Insider. Insider reported that when it searched YouTube for "presidential election results," the top four videos showed a fake graphic of Electoral College projections. 

Protesters, police, members of the media and others converge outside of the Philadelphia Convention Center as the counting of ballots continues in the state.
Spencer Platt / Getty Images 

YouTube was relatively quick to remove the livestreams, saying they violated its guidelines on spam, deceptive practices, and scams. Still, the damage was already done: "tens of thousands of viewers" reportedly tuned in, and the ads that played before the videos generated revenue. 

Hashtags like #stopthesteal (which falsely claims that Democrats are purposefully stealing the election) and #sharpiegate (which falsely claims that using a Sharpie invalidates a ballot) were also able to spread like wildfire across social networks, particularly Twitter. The Election Integrity Partnership said the rapid growth of #sharpiegate on Twitter was driven by a number of smaller, unverified user accounts.

Another minor slip-up: some Instagram users reported seeing a "Tomorrow is Election Day" reminder at the top of their feeds on Election Day itself, according to Protocol. Instagram attributed the issue to people not restarting their apps, but an Instagram spokesperson declined to tell Protocol how many users were affected by the glitch. 

While all this disinformation was more noticeable given this week's election, experts say it doesn't exist only during significant events. 

"I think misinformation doesn't just exist in every election—it just becomes really obvious during election season, but it’s a problem that’s always with us," said Mike Horning, a professor in the Department of Communication at Virginia Tech, during a phone interview. 

Will Social Media Ever Get It Right? 

Experts agree that misinformation is here to stay but are overall hopeful that social media networks may eventually get it right by learning from their mistakes. 

"We have a clear idea of what it looks like to get it really wrong, and we have the ability to measure progress against that… We've been able to track what getting it better looks like," said Camille Francois, Chief Innovation Officer at Graphika, during a press briefing. 

Alex Stamos, an adjunct professor at Stanford University's Center for International Security and Cooperation and former chief security officer at Facebook, said it's easier for social media companies to focus on not making disinformation worse than to catch it before it even happens. 

"From my perspective, the things [social networks] do around amplification, around recommendations and such, that’s the place that they should really focus on making sure that their product doesn't make the situation worse," Stamos added in the briefing. 

Twitter, TikTok, WhatsApp, Instagram, Threads, Snapchat, Facebook, Messenger and Telegram application logos are displayed on the screen of a smartphone.
Chesnot / Getty Images 

Horning agreed with Stamos, saying that social networks have a good grasp on catching and labeling misinformation, but that they should also focus on disclosing where the misinformation comes from. 

"I don't think we solved the problem yet. We've got a couple of different challenges, including more transparency about where the information has come from," he said. "At the same time, we have to put some of the responsibility back on the public and make sure that people are becoming more critical consumers of the content that they get."

Overall, experts agree we haven't figured it out yet, and that misinformation during elections will just be something we have to deal with for now. Graham Brookie, director and managing editor of the Atlantic Council's Digital Forensic Research Lab, said during the press briefing that disinformation is designed to slip through the cracks, and that it always will. 

"We haven't all collectively figured this out yet, and I think we are probably a long way from doing that, which is why we talk a lot about resilience as opposed to solving this once and for all," Brookie said.