Meta’s move away from fact-checking in content moderation practices could potentially allow more hate speech or mis- or disinformation, a Northeastern University social media expert says.
Meta is adopting a model similar to the one used by X, called Community Notes.
John Wihbey, an associate professor of media innovation and technology at Northeastern University, sees the move as the company repositioning itself ahead of President-elect Donald Trump’s inauguration. But third-party fact-checking, while difficult to scale on a platform with billions of users, “is an important symbol of commitment to trust and safety and information integrity,” Wihbey says.
It is “dangerous,” he says, to break from those norms at a moment when “the winds of authoritarian populism are blowing across the globe.”
In a video message, Meta founder and CEO Mark Zuckerberg described the shift as part of an effort to “get back to our roots around free expression,” noting, among other things, that the company’s fact-checking system has resulted in “too many mistakes and too much censorship.”
He also cited the 2024 presidential election, describing the election of Trump as a “cultural tipping point” toward “once again prioritizing speech.”
On X, the Community Notes model uses crowdsourced input from users to fact-check posts, usually in the form of added context. Wihbey described Zuckerberg’s announcement as confusing, noting that Meta’s third-party fact-checkers played a minimal role in day-to-day moderation compared with the company’s sophisticated algorithmic tools, which can produce false positives and false negatives.
As part of the policy shift, Meta says it will scale back its content moderation algorithms.
In addition, the company says that it wants to pivot to a more laissez-faire approach to civic or political content after tightening controls in recent years to curb the spread of mis- and disinformation.
“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content,” the company said in a statement. “This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”
The policy changes could have downstream effects — not only in the U.S., but elsewhere around the world, warns Wihbey, whose forthcoming book, “Governing Babel: The Debate over Social Media Platforms and Free Speech – and What Comes Next,” delves into content moderation and free speech.
“The analogy I would use is that they’re simultaneously standing down the police while opening up the floodgates for crime,” Wihbey says. “And that’s a dangerous mix.
“I think it’s a real step backwards in terms of trust and safety and platform integrity,” he adds. “At the same time, I understand pragmatically why Mark Zuckerberg and their leadership are moving in this direction.”
Still, Wihbey described the policy reversal as short-sighted because it is too narrowly focused on the U.S. market. Meta’s products, he notes, are used worldwide — accounting for about $134.9 billion in global revenue.
“The Meta product suite, which is Facebook, Instagram and WhatsApp Messenger, is very, very important to civil society, political debate, human rights work and journalism, so this is going to have a lot of second-order consequences beyond the shores of the United States,” Wihbey says.
Wihbey notes that, with the U.S. poised to potentially ban TikTok, other countries may feel emboldened to shut out U.S.-based platforms.
With the policy yet to be implemented, it remains to be seen how these changes will play out. Wihbey says that X’s Community Notes model suffers from many of the same issues third-party fact-checkers do, including problems of scale, timeliness and user consensus across party lines.
“The big rubric here is counterspeech, which is: how do you create or present additional information — content labels, for example — or different points of view around a particular claim,” Wihbey says.
“I would prefer to see mixed regimes, where you have both professionals and ordinary users, but that you also enlist experts who bring real knowledge,” he adds.
Another possibility, Wihbey says, is that Meta could simultaneously be developing AI-based moderation tools that fill in the gaps.
“The real tale will be the very powerful, invisible ways in which they use AI tools to try to keep a lid on problems, but also still allow for expression to flourish on the platform,” Wihbey says. “That’s a tricky balance.”