As part of a new set of policies designed to cut down on anti-vaccine content and health misinformation, YouTube is starting to ban any videos that claim commonly used vaccines approved by health authorities are ineffective or dangerous.
The video-sharing platform, along with others including Facebook and Twitter, had already banned misinformation related to the COVID-19 vaccines. This takes the crackdown one step further, with YouTube taking down anti-vaccine posts as well as the accounts of people who spread false information about other vaccines.
Critics and people who have propagated vaccine misinformation on the social media platform immediately decried the move as a violation of their First Amendment protection of free speech—a fundamental misunderstanding of free speech protections, says Claudia Haupt, associate professor of law and political science at Northeastern.
The First Amendment, which protects free speech in the U.S., applies to government censorship of protected speech, but not to private companies such as YouTube, Facebook, or Twitter.
“But just because the First Amendment doesn’t apply here, doesn’t mean that there aren’t tricky questions” for platforms deciding which posts stay and which are taken down, Haupt says.
Does this move make sense, as a way to curb vaccine misinformation?
If I understand it correctly, Facebook and Twitter had already banned vaccine misinformation, and YouTube was the last large platform to do so. It’s not surprising—if you think about the way that content gets shared across those platforms, it doesn’t really help just to target one of them. If you’re concerned about misinformation, you would want to look at the entire ecosystem of all social media platforms.
Do people who share anti-vaccine rhetoric on social media platforms have a First Amendment right to do so?
We have to start from the premise that no one has a First Amendment right to post on those platforms. There’s no First Amendment right to be on the platform, and the companies aren’t required to engage in content-neutral moderation decisions; they can exclude certain viewpoints.
But just because the First Amendment doesn’t apply here doesn’t mean that there aren’t tricky questions: Even if you’re a private platform that can moderate independent of the First Amendment, you have to decide what your guiding principles are for including or excluding certain messages. So, for example, you could say, “I’m going with the medical consensus around vaccines, and I’m going to exclude all of the messages about vaccines that directly contradict the medical community’s understanding of how vaccines work.”
You can see that in the link people have made between the childhood measles, mumps, and rubella vaccine and autism—it’s an idea that’s been refuted; it’s just inaccurate as a matter of science. So, you could exclude all the statements that pertain to that, and set the bar according to what the medical community says. You could still, though, decide to permit people to share stories about bad things that have happened to them, because they’re not making a medical claim or giving advice, they’re just telling a story about what happened in their lives. There’s no direct link between sharing that story and telling other people what to do.
But again, all this is independent of the First Amendment because these are private companies.
In that case, how do companies decide what’s in and what’s out?
In this context with vaccines, on the one hand you have expertise in a medical community that we recognize as the authority on that question, and on the other, we know that huge amounts of harm can be inflicted by bad information or bad advice.
You could imagine closer cases where it’s harder to decide what the standard is, but with medical information, we have a scientific standard to go by.
But there are also instances where we have contested science. In the beginning of the pandemic, we had the problem that giving advice was really hard because the medical community was figuring things out as the virus spread. There, it would be really difficult—and really problematic—for private companies to decide that some things are good advice and some things are bad advice.
The platform has to pick whose expertise, whose assessment to follow. And this comes up in malpractice all the time: If you go to the doctor and get bad advice, the standard that it’s judged by is the community of medical professionals. I think it makes sense to also use that as a baseline for speech if it’s framed as giving advice.
So often, as we can see here, these decisions boil down to a black-and-white conversation: Either “I have free speech” or “I’m being censored.” Is there a better conversation we could be having?
With these platforms, “I have a right to say something” is the reflexive cultural posture we have because we’re so used to talking about rights and the First Amendment. But legally, that doesn’t even apply in this space.
Generally, one way I think we should think about it is to weigh speech as one variable, harm as another, and expertise as a third. So, it’s not just my right to speak against your right to speak, it’s more about what does the speech do? What’s the level of harm it may cause? Is there something in the content that can be measured in terms of expertise?