Do Google and Facebook have an obligation to quash misinformation?

“It’s simply a mistake to think that algorithms can’t and don’t also result in censorship, bias, and misinformation,” says John Basl, assistant professor of philosophy. Photo by Adam Glanzman/Northeastern University

Earlier this week, two articles from the online forum 4Chan appeared briefly in Google’s “Top Stories” section after a search for the man wrongly named as the Las Vegas shooter. The articles, which appeared as two of the three top stories on Google’s search page, were the result of a deep-web conspiracy theory that had taken hold of several 4Chan message boards.

Google apologized for amplifying the misinformation, promising to make “algorithmic improvements” to its news filters in order to avoid a similar situation in the future.

The incident came on the heels of a similar issue on Facebook. On Monday, the social media site’s “Trending Topics” page was returning articles from the Kremlin-sponsored media outlet Sputnik News. As with Google, Facebook soon apologized for the issue and took down the pages.

In an age in which people increasingly turn first to giant media sites for information, especially after a violent incident such as the Las Vegas shooting, do companies have an ethical obligation to vet the sources that appear on their sites? We asked John Basl, assistant professor of philosophy at Northeastern.

Do big media companies like Google and Facebook have an ethical duty to root out misinformation on their sites?

I think they do have an obligation to do something about misinformation, but I’m not sure they should root it out, if that means removing access to it or hiding results that are judged to be misinformation. That is not only a difficult technical task (how do we ensure, for example, that satirical pieces are not hidden or removed as misinformation?) but also one that, depending on how it is done, might compromise important values such as transparency and openness. I think a better solution is to identify or flag the reliability of questionable search results or links.
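A rough sketch of that “flag, don’t remove” idea follows. The source names, reliability scores, and threshold below are hypothetical stand-ins, not any real search engine’s data or API; the point is only that low-reliability results stay visible but carry a warning label.

```python
# Minimal sketch of flagging rather than removing questionable results.
# All source names, scores, and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    url: str
    source: str

# Hypothetical reliability scores: 0.0 (unknown/unvetted) to 1.0 (well vetted).
SOURCE_RELIABILITY = {
    "example-newspaper.com": 0.9,
    "anonymous-message-board.net": 0.2,
}

FLAG_THRESHOLD = 0.5  # hypothetical cutoff below which a result gets a label

def annotate_results(results):
    """Attach a warning label to low-reliability results instead of hiding them."""
    annotated = []
    for result in results:
        score = SOURCE_RELIABILITY.get(result.source, 0.0)
        label = None
        if score < FLAG_THRESHOLD:
            label = "Unverified source: reliability has not been established."
        annotated.append((result, label))
    return annotated

if __name__ == "__main__":
    results = [
        SearchResult("Police identify suspect", "https://example-newspaper.com/a", "example-newspaper.com"),
        SearchResult("The REAL shooter??", "https://anonymous-message-board.net/t/123", "anonymous-message-board.net"),
    ]
    for result, label in annotate_results(results):
        print(result.title, "--", label or "no flag")
```

Every result is still returned, which preserves the transparency and openness Basl mentions, while the label shifts some of the judgment back to the reader.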

Another way that media companies might meet their obligations to minimize the harmful effects of misinformation while retaining the values of transparency and openness would be to implement some way to train users of their software to spot misinformation and to recognize or evaluate sources. It’s a solution, at least potentially, that combats misinformation without ceding control of what counts as misinformation to media companies.

How do events like the Las Vegas shooting affect this duty?

There are certainly circumstances where it seems media companies would be justified in limiting access to certain information for some period of time. This is especially true in cases where misinformation could be particularly dangerous, for example where it might lead to misidentifying an innocent person as a suspect and putting that person in danger. However, this must be balanced against the fact that social media can provide significant benefits during these incidents, and those benefits might be significantly diminished if media companies begin censoring information during emergencies.

“It’s simply a mistake to think that algorithms can’t and don’t also result in censorship, bias, and misinformation.”

John Basl
Assistant professor of philosophy

While traditional news media might be capable of more carefully sourcing material and avoiding misinformation, they are not nearly as useful for getting resources to those in need or to their loved ones. Furthermore, any tools developed for temporary censorship could have a dual use, making it easier to censor social media or search results more generally. In general, I think this favors the status quo: on balance, it would be better to implement an approach to handling and avoiding misinformation that isn’t focused on censorship.

Algorithms are supposed to be inherently unbiased, so how much should humans interfere with the results served by algorithms?

It’s important that there be fairly regular human oversight of, and potentially interference with, algorithmic results. Firstly, bias sneaks into algorithms in all sorts of ways. If a search algorithm predicts which results to serve up when someone searches for “recent terrorist attacks,” the results will be influenced by which events we classify as “terrorist attacks,” and how we classify acts of violence might be the result of many different biases we have. It is very difficult to think of algorithms as unbiased even if they aren’t designed to look for or perpetuate bias. It’s simply a mistake to think that algorithms can’t and don’t also result in censorship, bias, and misinformation.
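To make the mechanism concrete, here is a toy illustration in Python, with entirely invented data and a deliberately simplistic “majority label” model standing in for a real classifier. If the human-applied training labels reflect a bias, the trained system reproduces it even though nothing in the code looks for group membership as such.

```python
# Toy illustration of label bias propagating into an algorithm.
# The groups, incidents, and labels are invented for illustration only.
from collections import Counter

# Hypothetical training data: (attacker_group, labeled_as_terrorism).
# Suppose labelers applied the "terrorism" label far more often to group A.
training_data = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

# A deliberately simple "model": predict the majority label seen for each group.
majority_label = {}
for group in sorted({g for g, _ in training_data}):
    labels = [lab for g, lab in training_data if g == group]
    majority_label[group] = Counter(labels).most_common(1)[0][0]

print(majority_label)  # {'group_a': True, 'group_b': False}
# Similar acts of violence now get classified differently depending only on the
# group involved, because that pattern was baked into the human-made labels.
```

The bias entered through the labels people chose, not through any explicit rule in the algorithm, which is Basl’s point about why human oversight of algorithmic results still matters.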

Secondly, while humans are often biased, they can also choose to correct for these biases. If a human recognizes they are likely to classify an action as a “terrorist attack” primarily on the basis of the religious affiliation of the attacker, they can take steps to address that.

It’s hard to say exactly what form human interference or oversight should take. This requires balancing concerns about human bias, transparency, censorship, and other issues against concerns about biases being hidden within algorithms and the costs those hidden biases impose. But I think there are ways to develop policies for human intervention that achieve a good balance of these concerns.