How Social Media Companies Are Trying to Stop Abuse

New filters could stop unwanted contacts

Key Takeaways

  • Social media companies are seeking ways to protect users from abuse. 
  • Instagram is making it easier to prevent unwanted comments and direct messages. 
  • Online abuse is common, according to a Pew Research Center survey.
A student dealing with a bullying text message.

SolStock / Getty Images

Social media companies are trying to crack down on the continuing problem of abuse on their platforms.

Instagram said recently that it would make it easier to prevent unwanted comments and direct messages on its photo- and video-sharing platform. Users can now automatically filter out offensive content and hide comments and direct message requests from specific users.

"If social media companies do not crack down on abusive messaging, they will only end up with abusive users abusing each other, and nobody actually will consume social media content in a reasonable fashion," Thomas Roulet, a professor at the University of Cambridge who studies social media problems, told Lifewire in an email interview.

"They will basically lose the good and valuable users in the longer run."

Attacks Are Rampant

Sports have been a recent flashpoint for social media attacks. After England’s defeat in the Euro 2020 final, angry fans attacked the team’s footballers on Instagram. The incidents, which included racist comments and emoji, spotlighted how little Instagram users could do to prevent attacks on the platform. Model Chrissy Teigen deleted her Twitter account in March after complaining of abuse on the platform.

"We developed this feature because creators and public figures sometimes experience sudden spikes of comments and DM requests from people they don’t know," head of Instagram Adam Mosseri wrote in a blog post.

Instagram’s new Limits feature is meant to help prevent abuse by allowing users to choose who can interact with them during busy times. Users can turn on restrictions for accounts that are not following them and those belonging to recent followers. When Limits are enabled, these accounts can’t post comments or send DM requests for a specified period.
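Instagram has not published how Limits decides which accounts to restrict, but the behavior described above can be sketched in a few lines. The function name, the seven-day "recent follower" window, and the field names below are all hypothetical assumptions for illustration, not Instagram's actual logic.

```python
from datetime import datetime, timedelta

# Hypothetical cutoff for what counts as a "recent" follower.
RECENT_FOLLOWER_WINDOW = timedelta(days=7)

def is_limited(is_follower: bool, followed_at, now: datetime,
               limits_enabled: bool) -> bool:
    """Return True if the account's comments and DM requests should be held."""
    if not limits_enabled:
        return False
    if not is_follower:
        # Accounts not following the user are limited.
        return True
    # Recent followers (within the window) are limited too.
    return now - followed_at < RECENT_FOLLOWER_WINDOW

now = datetime(2021, 8, 15)
print(is_limited(False, None, now, True))                  # True: non-follower
print(is_limited(True, datetime(2021, 8, 14), now, True))  # True: new follower
print(is_limited(True, datetime(2021, 1, 1), now, True))   # False: long-time follower
```

The key design point is that Limits is temporary and scoped: long-standing followers are unaffected, so a creator can ride out a spike without going fully private.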

Another Instagram feature called Hidden Words, designed to protect users from unwanted DM requests, also is being expanded. Hidden Words automatically filters requests that contain offensive words, phrases, and emojis. The filter puts things you don’t want to see into a hidden folder, where you can decide if you want to see them. It also filters out requests that are likely spam or are otherwise low-quality.
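At its core, Hidden Words is a keyword filter that routes matching requests to a hidden folder rather than the inbox. A minimal sketch of that idea follows; the blocked-term list, function name, and substring-matching rule are assumptions for illustration, not Instagram's implementation.

```python
# Hypothetical filter list; Instagram maintains its own database of
# offensive words, phrases, and emoji.
BLOCKED_TERMS = {"loser", "trash", "🐍"}

def route_request(message: str) -> str:
    """Return 'hidden' if the message contains a blocked term, else 'inbox'."""
    lowered = message.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "hidden"
    return "inbox"

print(route_request("Great match today!"))  # inbox
print(route_request("You're trash 🐍"))      # hidden
```

Routing to a hidden folder, rather than deleting outright, preserves the user's choice: flagged messages stay reviewable in case the filter catches something benign.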

Instagram has updated its Hidden Words database with new types of offensive language, including strings of emoji, and included them in the filter, Mosseri said. The feature has been launched in select countries and will be available worldwide by the end of the month.

Twitter Considers Ways to Prevent Abuse

Other social media platforms are also considering anti-abuse measures.

Twitter is investigating ways to help users prevent unwanted attention. The company’s notification system alerts a user when they’ve been directly tagged in a tweet. That’s helpful when the tweet is welcome, but the same mechanism can pull users into abusive threads and fuel cyberbullying.

Young adult sitting on the floor against a wall of windows with a smartphone in their lap, and tears on their face.

FluxFactory / Getty Images

The company has said it is considering various ways to prevent abuse, including letting users "unmention" themselves. This ability would allow users to remove their name from another user’s tweet so they’re no longer tagged in it and would keep unwanted comments from appearing in their feed.

Online abuse is common. A recent Pew Research Center survey found that 41% of Americans have personally experienced some form of online harassment. When respondents were asked to rate how well social media companies address harassment or bullying on their platforms, just 18% said the companies are doing an excellent or good job.

Roulet said that abuse on social media is a difficult problem to fix. The first points of contact are the abused users themselves, who can report abusive messages. Once an account is reported multiple times, its IP address can be banned.

"Importantly, as social media companies accumulate data on reported messages, they might be able to automate and improve the detection of offensive messages using machine learning, with the risk that they sometimes censor acceptable content," Roulet added.
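The report-then-ban flow Roulet describes can be sketched as a simple counter. The three-report threshold, function name, and use of IP addresses as the ban key are hypothetical assumptions for illustration; real platforms weigh many more signals.

```python
from collections import Counter

# Hypothetical threshold: ban after this many reports.
REPORT_THRESHOLD = 3

reports: Counter = Counter()  # reports received per IP address
banned: set = set()           # IP addresses that crossed the threshold

def report(sender_ip: str) -> None:
    """Record a report against an IP; ban it once it crosses the threshold."""
    reports[sender_ip] += 1
    if reports[sender_ip] >= REPORT_THRESHOLD:
        banned.add(sender_ip)

for _ in range(3):
    report("203.0.113.7")  # documentation-range example address
print("203.0.113.7" in banned)  # True
```

As Roulet notes, the accumulated reports also become training data: each reported message is a labeled example a machine-learning classifier could later learn from, with the attendant risk of false positives on acceptable content.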

Mosseri said Instagram would "invest in organizations focused on racial justice and equity."

"We know there’s more to do, including improving our systems to find and remove abusive content more quickly and holding those who post it accountable," Mosseri wrote.
