Are TikTok, X and other social media platforms good or bad sources for news on the Israel-Hamas war?

Footage of the Israeli siege on Gaza is being livestreamed on TikTok at a time when many journalists can’t enter the region. Photo by Matthew Modoono/Northeastern University


For the last few days, one of the few ways people have been able to see the Israeli siege of Gaza in the aftermath of Hamas’ attack on Saturday has been on TikTok.

At a time when journalists are struggling to enter the region, the Daily Mail and others have been livestreaming the siege on the social media platform to thousands of people, providing real-time access to a rapidly escalating war. 

Since Hamas launched its multi-front attack over the weekend, the death toll in Israel has risen to 1,200, with more than 2,700 injured. Israel has responded by bombarding Gaza with airstrikes, killing at least 1,055 people and injuring more than 5,000, according to Palestinian officials.

With casualties rising on both sides and Israeli military efforts ramping up, the need for accurate information among journalists and citizens is higher than ever. But do livestreams on TikTok and eyewitness videos on X, the platform formerly known as Twitter, actually help or harm?

John Wihbey, an associate professor of journalism at Northeastern University, says social media still has incredible potential to raise awareness of and provide visibility into situations that traditional media can’t access. But he also says the rose-tinted view people once took of these platforms no longer holds in 2023.

“I certainly think more transparency and more visibility into events is a good thing, but those same technologies are used for propaganda purposes as well,” Wihbey says.

At the same time that TikTok is giving people unprecedented access to an active battleground, misinformation and violent videos are spreading unchecked on X, where Elon Musk has stripped away many of the moderation tools and policies the platform previously relied on. Even setting misinformation aside, social media platforms simply aren’t designed to support coverage of, and conversation about, events like the Israel-Hamas war.

“In terms of this kind of event where you have extremely high stakes and very sensitive issues playing out in real time, the medium of social media does no favors to the issues because deliberation, rationality, reason, verification of facts –– all these things we might value in an ideal sense –– are just completely undermined by the speed, velocity, virality and algorithmic curation of content and narrative,” Wihbey says.

Wihbey, who also serves as lead investigator for the ethics of content moderation at Northeastern’s Ethics Institute, describes a deadly feedback loop of misinformation created by social media. 

“Algorithmically curated online discourse does shape news coverage, and then there’s a dynamic feedback loop there,” Wihbey says. “News organizations then give validity to certain kinds of topics or narratives and then people talk about them all the more.”

It all leads to “a very confusing kaleidoscope of content online” that Wihbey worries could have real-world consequences. People who don’t know what information to trust become immobilized, while bad actors exploit the confusion and lax moderation to spread hate speech. Antisemitic comments proliferated so widely on X after Saturday that Israeli Prime Minister Benjamin Netanyahu asked Musk to “roll back” hate speech on the platform.

“People who might be on the fringes of such communities get pulled in, and the danger is you further radicalize certain kinds of groups and embolden them to take offline real-world acts,” Wihbey says.

The positive potential of social media is still there –– “it’s a dual-use technology in a classic sense,” Wihbey says. But that potential can be realized only if companies are held accountable for their platforms, he argues. The European Union’s Digital Services Act has already sketched one possible model of governance. Wihbey says the U.S. desperately needs to catch up and “raise the stakes and sanctions” so that companies take even basic moderation measures.

He is also optimistic that generative AI and other automated tools, working in tandem with human moderators, will make it easier and more efficient for platforms to handle high volumes of misinformation, propaganda and hate speech. For now, though, the technology cannot yet reliably detect misinformation, he says.

“This is where professional news media still play a very important role in trying to sift through the evidence of what can be verified on the ground,” Wihbey says.

Cody Mello-Klein is a Northeastern Global News reporter. Email him at c.mello-klein@northeastern.edu. Follow him on Twitter @Proelectioneer.