How should TikTok have handled the Osama bin Laden letter?

In this 1998 file photo made available on March 19, 2004, Osama bin Laden is seen at a news conference in Khost, Afghanistan. AP Photo/Mazhar Ali Khan, File

After Osama bin Laden’s “Letter to America” went viral in recent days, first on TikTok and then elsewhere, a select group of lawmakers renewed calls for stricter scrutiny of the popular tech platform, citing fears that anti-Israel sentiment is spreading among its users.

The decades-old document, widely condemned as antisemitic, is apparently resonating with a largely Gen Z audience, who’ve read portions of the letter in videos as part of an effort to highlight the plight of the Palestinians in the ongoing Hamas-Israel war.

The sudden renewed interest among Gen Zers in bin Laden’s letter, which has been publicly available since at least November 2002, is not especially alarming, says John Wihbey, an associate professor of journalism and new media at Northeastern University. More pressing, Wihbey contends, is the need for greater transparency and data about how, and by what mechanisms, social media companies amplify, curate and rein in the content shared on their platforms.

Wihbey and Claudia Haupt, professor of law and political science at Northeastern, describe the letter as “lawful but awful” content, a term of art for speech that is legally protected but widely regarded as harmful or objectionable.

“The First Amendment only applies to protect from the government interfering in speech,” Haupt says.

She says there are numerous examples of “lawful but awful” speech that the First Amendment can’t reach.

“If you are a user of a platform, and the platform moderates the content, that’s not subject to the First Amendment because there’s no involvement from the government,” Haupt says.

“The main point here is that the government may not choose which speech it prefers, or prohibit speech that it dislikes,” she added. 

But none of those constraints apply when it comes to platforms’ content moderation. 

Because the First Amendment, broad as its protections are, applies only to government censorship, it’s incumbent upon TikTok to develop robust community standards backed by an enforceable terms of service agreement with its users, the experts say.

Even so, thinking about content moderation as a simple pass/fail test misses the point, Haupt says; it’s about “calibration.” Wihbey agrees, adding that successful content moderation policy involves layers of nuance. But above all, transparency is the key value, he says.

Here is more from our conversation with Wihbey, who has written extensively about social media policy and content moderation.

What do you make of these calls for greater oversight of TikTok in this case? What current regulations are there?

For some time now there have been folks in Congress and elsewhere who have been advocating for subjecting TikTok to a lot more critical scrutiny — maybe even banning the app altogether. And we’ve discussed that before — the fact that it has sort of gone nowhere, in part because it’s not clear such a ban would be legal, but also because it’s not clear it would be sensible. Just given TikTok’s enormous importance in the media ecosystem, it would be extremely unwise to try to ban it outright under some flimsy pretext.

That said, I do think that a number of companies — TikTok included, but also Twitter/X, which kind of helped to inflame the whole bin Laden letter situation — should be releasing a lot more data about how they do business, how they curate and boost content, and what kinds of content are curated and boosted. And there has been some reporting that shows that companies like TikTok have some idiosyncratic or subjective ways of boosting content. In my view, there’s a good public interest rationale for them to release a lot more data and to be a lot more transparent about how they do that.

The United States remains an outlier in that we have no real mechanisms on the books to force any kind of transparency or to provide any policy guidance on how they should run these platforms. That stands in contrast to our allies in Europe, who have passed the Digital Services Act, which is now on the books and is increasingly going to be influential in how technology platforms do business. It stands in contrast to India, which has a bunch of rules under its information technology laws that are quite influential, and then a whole bunch of other countries. So the United States continues to take a very laissez-faire approach.

How do you see the reaction to this letter? Should the public be concerned about it spreading?

In terms of this particular letter, it was widely read a year after the 9/11 terrorist attacks. It was a piece of terrorist propaganda; it was widely discussed and dissected and was known to most news consumers. In some ways, it is a historical document.

There’s a risk here of confusing a couple of different things. One is that TikTok has Chinese ownership, in that the parent company is Chinese, and we should bring lots of critical scrutiny to companies that are foreign-owned and have wide influence in the United States. That’s perfectly appropriate under U.S. law.

But we should do the same thing with our own U.S.-headquartered companies, such as X and YouTube and Instagram and the rest of them. But I think the fact that the Osama bin Laden letter blew up on TikTok does create an optics problem for TikTok — and it’s good that they sort of swung into action and tried to mitigate some of the harms there.

But at the same time, it’s not clear to me what the harms are, given that this letter is well known and has been part of public discourse for a long time. Of course, Hamas’s attack on Israel has created a heightened situation of tension and sensitivity around anything that is antisemitic in nature; and so what’s kind of interesting to me, from a company policy perspective, is the way in which context feeds into content moderation and decision-making. We tend to think of content moderation as having clear rules and strict enforcement. But it’s true that context matters, and the same piece of information in one context may have much greater weight or much less weight.

In your opinion, how should TikTok have handled this?

This has to be a judgment call on the part of the human beings running TikTok. And I think it’s entirely appropriate for them to think about the various soft tools of moderation that can be used here, such as down-ranking or circuit-breaking on virality — that kind of thing.

I’m not quite sure about an outright ban of a historical document. You could say that maybe we are in an extreme moment where, because of the rise and dangers of antisemitism, this particular incident requires special care. I think that’s a plausible argument, and this is a private company, so it’s certainly within their discretion to act. And so I think that the appropriate tools here are the algorithmic tools: demotion, down-ranking, circuit-breaking of virality — sort of minimizing the reach of what some call “lawful but awful” speech.
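
To make the “soft tools” Wihbey describes a bit more concrete, here is a minimal, hypothetical sketch of how down-ranking and a virality circuit breaker could be wired into a feed-ranking function. Every name, threshold and value below is an illustrative assumption, not a description of TikTok’s actual systems.

```python
# Hypothetical sketch only: field names, thresholds and logic are illustrative
# assumptions, not a description of TikTok's (or any platform's) real systems.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    base_score: float                 # relevance score from an upstream ranking model (assumed)
    flagged_lawful_but_awful: bool    # set by a trust-and-safety review (assumed)
    shares_last_hour: int             # crude proxy for virality (assumed)


DEMOTION_FACTOR = 0.2                 # down-ranking: keep the post, shrink its reach (placeholder value)
VIRALITY_CIRCUIT_BREAKER = 10_000     # shares per hour at which amplification pauses (placeholder value)


def rank_score(post: Post) -> float:
    """Return the score used to order this post in a recommendation feed."""
    score = post.base_score
    if post.flagged_lawful_but_awful:
        score *= DEMOTION_FACTOR      # demotion / down-ranking of "lawful but awful" content
    if post.shares_last_hour >= VIRALITY_CIRCUIT_BREAKER:
        score = 0.0                   # circuit breaker: stop recommending once virality spikes
    return score


# Example: a flagged post that is also going viral receives no algorithmic boost.
viral_post = Post("example-1", base_score=0.9, flagged_lawful_but_awful=True, shares_last_hour=12_000)
print(rank_score(viral_post))         # prints 0.0
```

The design choice the sketch illustrates is the one Wihbey points to: the content stays on the platform, but the recommendation system withholds amplification rather than removing the post outright.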

Tanner Stening is a Northeastern Global News reporter. Email him at t.stening@northeastern.edu. Follow him on X/Twitter @tstening90.