WhatsApp, the world’s largest messaging app, announced today that it is taking new steps to curb the spread of misinformation on its platform, a problem that has become increasingly visible during the coronavirus pandemic. As I wrote in March, the platform has been a “petri dish for misinformation”—an incubator for false information and rumors about COVID-19. The company, which is owned by Facebook, says it will now monitor and limit the dissemination of forwarded messages.
This is a major change for an app that’s known as a place where friends and family send along memes, jokes, images, and multimedia messages. “Is all forwarding bad? Certainly not. We know many users forward helpful information, as well as funny videos, memes, and reflections or prayers they find meaningful,” the company posted on its blog. “However, we’ve seen a significant increase in the amount of forwarding which users have told us can feel overwhelming and can contribute to the spread of misinformation. We believe it’s important to slow the spread of these messages down to keep WhatsApp a place for personal conversation.”
The company says it will now track messages that have been forwarded five or more times and will only allow “highly forwarded” messages to be passed on one more time. The latest beta version of its app also includes a feature that displays a magnifying glass next to frequently forwarded messages so users can check whether they have been debunked by journalists or fact-checking sites. “Double checking these messages before forwarding may help reduce the spread of rumors,” wrote Erin Fors of Cutline Communications on behalf of WhatsApp in an email.
The changes follow reports in Mother Jones and other media outlets as well as complaints by fact-checking organizations and governments that the platform was not doing enough to curb misinformation about the coronavirus.
WhatsApp’s encrypted, private chats can make it difficult to trace the origins of misleading viral messages. More than a year ago, researcher Harsh Taneja and his colleague Himanshu Gupta suggested in the Columbia Journalism Review that WhatsApp could track highly forwarded messages on its platform and flag them as suspicious. A couple of weeks ago, a WhatsApp spokesperson told me that wasn’t possible. Yet the company’s latest change, say the researchers and CJR, is very similar to what they’d proposed.
Today, @CaseyNewton at the @verge reports that WhatsApp will police misinformation in one of the very ways Harsh and Himanshu wisely suggested in @CJR 18 months ago. https://t.co/zay9JY5Pdo
— Sam Thielman (@samthielman) April 7, 2020