How WhatsApp is Combating Misinformation and Disinformation

WhatsApp, the popular messaging app owned by Meta, is taking steps to combat the spread of misinformation and disinformation on its platform. The company has recently rolled out several features aimed at slowing the circulation of problematic content.

One of the main changes is a new set of limits that make it harder for users to forward messages en masse. The need for such friction became clear in Brazil, where misinformation and disinformation about the country’s democratic process spread at an alarming rate, with false or misleading information forwarded to enormous audiences. The Guardian reported that “In a sample of 11,957 viral messages shared across 296 group chats on the instant-messaging platform in the campaign period, approximately 42% of rightwing items contained information found to be false by factcheckers.” By contrast, the paper noted, “Less than 3% of the leftwing messages analysed in the study contained externally verified falsehoods.”

WhatsApp once allowed a message to be forwarded to up to 256 chats at once, a design well suited to spreading disinformation quickly. In response to the growing problem in Brazil, the company cut that limit to 20 in 2018, then to five in 2019, and finally, in 2020, to one, though the one-chat limit applied only to “highly forwarded” messages that had already been passed along five or more times. Later in 2020, the company announced that the volume of highly forwarded messages had dropped by 70 percent.
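WhatsApp has not published how these limits are enforced, but the policy is simple enough to sketch. What follows is a minimal, hypothetical model in Python, assuming a per-message forward counter and the thresholds described above: five chats for an ordinary message, one chat once a message counts as “highly forwarded.” The names and structure are illustrative assumptions, not WhatsApp’s actual code.

```python
# Hypothetical sketch of WhatsApp-style forwarding limits; not Meta's actual code.
# Assumes each message carries a count of how many times it has been forwarded.

from dataclasses import dataclass

GENERAL_FORWARD_LIMIT = 5       # chats a normal message can reach (the 2019 limit)
HIGHLY_FORWARDED_THRESHOLD = 5  # forwards after which a message is "highly forwarded"
HIGHLY_FORWARDED_LIMIT = 1      # chats a highly forwarded message can reach (2020 policy)

@dataclass
class Message:
    text: str
    forward_count: int = 0  # times this message has already been forwarded

def max_forward_targets(msg: Message) -> int:
    """Return how many chats this message may be forwarded to at once."""
    if msg.forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        return HIGHLY_FORWARDED_LIMIT
    return GENERAL_FORWARD_LIMIT

def forward(msg: Message, target_chats: list[str]) -> Message:
    """Forward a message, enforcing the limit, and return the forwarded copy."""
    limit = max_forward_targets(msg)
    if len(target_chats) > limit:
        raise ValueError(f"can only forward this message to {limit} chat(s) at a time")
    # The forwarded copy carries an incremented count, so limits tighten down the chain.
    return Message(text=msg.text, forward_count=msg.forward_count + 1)

if __name__ == "__main__":
    viral = Message("breaking news!", forward_count=5)
    print(max_forward_targets(viral))  # 1: highly forwarded, one chat at a time
```

Under this model, the limit tightens automatically as a message travels: after five hops it can move only one chat at a time, which is exactly the friction WhatsApp credits for the 70 percent drop.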

“Is all forwarding bad? Certainly not… However, we’ve seen a significant increase in the amount of forwarding which users have told us can feel overwhelming and can contribute to the spread of misinformation. We believe it’s important to slow the spread of these messages down to keep WhatsApp a place for personal conversation,” WhatsApp said in a statement announcing the change.

Meta has taken steps across its other platforms to introduce doubt into widely shared misinformation and disinformation. On Facebook, warning labels are added to posts sharing articles believed to contain harmful misinformation or disinformation. This approach is useful because it does not prevent users from reading or engaging with the content, which many would consider at odds with free-speech principles, but it does force the user to pause and think before clicking through to the article.
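Meta has not disclosed its implementation either, but the pattern described above, flag rather than block and then interpose one extra step, can be sketched in the same spirit. Everything below (the field names, the flag source, the functions) is an illustrative assumption, not Facebook’s API.

```python
# Hypothetical sketch of a warn-but-don't-block interstitial; not Meta's actual code.

def render_post(post: dict, fact_check_flags: dict[str, str]) -> dict:
    """Attach a warning label to flagged posts instead of removing them."""
    warning = fact_check_flags.get(post["article_url"])  # e.g. set by fact-checkers
    return {
        **post,
        "warning_label": warning,                        # shown on the post if present
        "requires_click_through": warning is not None,   # user must confirm to open
    }

def open_article(post: dict, user_confirmed: bool) -> str:
    """The article is always reachable; flagged ones just need one extra click."""
    if post["requires_click_through"] and not user_confirmed:
        return f"Warning: {post['warning_label']}. Tap again to continue."
    return f"Opening {post['article_url']}"
```

The key design choice is that the gate only adds a click: the platform slows circulation without removing anything.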

By making it inconvenient to forward and share false articles, WhatsApp has effectively regulated them, removing the frictionless spread that misinformation and disinformation rely on.

Meta has faced criticism in the past for its handling of misinformation and disinformation on its platforms. The company has been accused of allowing false information to spread unchecked and of not doing enough to prevent the spread of harmful content. This came to a head during the height of the COVID-19 pandemic, when articles containing false or misleading statements about the Pfizer and Moderna vaccines, or about Dr. Anthony Fauci, a leading figure in the United States’ COVID-19 response, were being shared at an alarming rate. NPR has reported estimates that nearly a third of the more than one million deaths the United States suffered during the pandemic could have been prevented had false information not spread unchecked.

Overall, the changes WhatsApp has rolled out represent an important effort to address the growing problem of misinformation and disinformation on social media. It remains to be seen how effective these measures will be, but they are a step in the right direction and could help reduce harm going forward after the tragedy of the COVID-19 pandemic.