Twitter and Facebook have the right to ban Trump’s accounts. But that won’t stop the violent rhetoric.

Photo by Ruby Wallau/Northeastern University

After a series of increasingly incendiary posts late last week, President Donald Trump has been permanently suspended from Twitter, due to the risk of further incitement to violence, the company said in a statement on Friday. The move is one of several similar steps taken by social media organizations after a mob of the President’s supporters stormed the U.S. Capitol—Trump was suspended indefinitely from Facebook, Instagram, Snapchat, Twitch, and other social platforms last week as well. 

Major tech companies also took action against Parler, a social media app that had become popular with Trump die-hards after Facebook and Twitter started cracking down on misinformation and violence. Apple and Google removed the app from their stores, and Amazon made plans to suspend Parler from its web-hosting service.

As private companies, Facebook and Twitter are well within their rights to suspend users who violate their terms of service, say two Northeastern law professors. But, says Woodrow Hartzog, professor of law and computer science, as social platforms become further enmeshed in people’s daily lives, it’s time to “rethink our entire regulatory approach to tech companies.”

There’s relatively little government oversight of social media platforms, and even fewer regulations that have any sort of teeth, Hartzog says.

Left to right: Woodrow Hartzog, professor with joint appointments in the School of Law and the College of Computer and Information Science, and Claudia Haupt, associate professor of law and political science. Photo by Matthew Modoono/Northeastern University and photo courtesy Claudia Haupt

The First Amendment, which protects free speech in the U.S., applies to government censorship of protected speech, but not to private companies such as Twitter, Facebook, and Twitch, says Claudia Haupt, associate professor of law and political science. Further, Section 230 of the Communications Decency Act shields websites from liability for most content their users post.

Because they’re based in the U.S., many of these companies have a “strong cultural notion of free speech” that’s tied to the First Amendment, Haupt says. But they’re free to—and often do—moderate what their users post based largely on their own terms of service.

Twitter, for example, removed three of Trump’s posts for “severe violations” of its “Civic Integrity policy” on Wednesday, after he tweeted that the people who stormed the Capitol were “patriots” and added “We love you.”

Facebook’s indefinite ban stems from the president’s “use of our platform to incite violent insurrection against a democratically elected government,” CEO Mark Zuckerberg wrote.

When it comes to controlling dangerous rhetoric on these sites, the moves are too little, too late, Hartzog says.

“The horse has already left the stable, and the tech companies are now attempting to slam the door on an empty barn,” he says. 

And it’s not just the President—social media platforms such as Facebook, Twitter, and TikTok are breeding grounds for misinformation about COVID-19, the election, and politics in general, researchers at Northeastern have found.

To combat this, some sites have begun labeling posts that they deem misleading or that lack credible sourcing. Researchers in the university’s Ethics Institute, including John Wihbey, assistant professor of journalism and media innovation, are studying the efficacy of such labels.

Still, the U.S. regulatory system needs “structural change,” rather than isolated attempts to find protections based on individual events, Hartzog says.

“We need a lot more scrutiny toward the systems that are driving the kinds of toxic behavior on social media companies, specifically micro-targeting and engagement metrics,” he says. Platforms designed to increase engagement at all costs end up encouraging the kind of extreme content that gets a lot of clicks.

“A lot of why Twitter and Facebook are hellscapes is because they encourage the kinds of behavior that are ripping our social fabric apart,” Hartzog says.

He argues that social media companies need to take a holistic approach to content moderation—a user’s posts may not individually violate a platform’s content policies, but may do so when taken as a whole.

Hartzog adds that adequate moderation may require lawmakers to hold social media companies to the same standards to which they hold traditional media outlets.

“In a world where we have these intermediate players of social media, it’s not just people and the government anymore, but rather these massive platforms whose decisions are so critical to creating a virtuous or toxic environment,” Hartzog says.

For media inquiries, please contact Marirose Sartoretto at m.sartoretto@northeastern.edu or 617-373-5718.