To curb the spread of fake news during the 2020 U.S. elections, social media companies may need to limit how frequently their users are allowed to post, said David Lazer, a Distinguished Professor of Political Science and Computer and Information Science at Northeastern, who recently published a report on fake news during the 2016 election.
The study, which examined how people shared fake news and how often they were exposed to it on Twitter, found that a massive amount of fake news was produced and consumed by a very small number of users.
0.1 percent of users were responsible for sharing 80 percent of the fake news on Twitter in 2016
There is not widespread agreement on what constitutes “fake news.”
Lazer defines it as a “subgenre of misinformation,” calling it “information regarding the state of the world that’s constructed with disregard of the facts and invokes the symbols of existing truth-tellers. It misinforms by appealing to the very worst of human nature, and undermines truth-tellers at the same time.”
The researchers also used Lazer’s definition of fake news sources to guide the study. He defines these sources as outlets that “lack the news media’s editorial norms and processes for ensuring the accuracy and credibility of information.” Among them, the researchers include sites such as truthfeed.com, dailycaller.com, and gatewaypundit.com.
The researchers found that 5 percent of political news generated in 2016 came from fake news sources. On Twitter, 0.1 percent of users shared 80 percent of that fake news. And they shared it with a very concentrated group of users. About 1 percent of users were exposed to 80 percent of the fake news shared on Twitter, the study found.
5 percent of political news generated came from fake news sources
“You have a really small group of people who just cranked out a ton of stuff, and a small number of people who got exposed to a ton of fake news,” Lazer said. The report calls these groups of people “supersharers” and “superconsumers.”
Lazer and his colleagues found that on a given day leading up to the 2016 election, the average “supersharer” of fake news tweeted 71 times and shared roughly eight political websites, of which roughly two were from fake news sources. For comparison, the average regular user tweeted just once per day.
Similarly, the average “superconsumer” of fake news was exposed to almost 4,700 political websites per day, compared to only 49 for the average regular user.
Kenny Joseph, an assistant professor of computer science at the University at Buffalo who co-authored the study, said he was surprised “at how far a small number of people went to promote fake news.”
“We suspect some people used automation tools typically reserved for large organizations in order to share large volumes of fake news,” said Joseph, who completed his postdoctoral training at Northeastern. “While it is necessary to study how foreign and state actors influenced the spread of fake news, it is equally important to understand why these seemingly ordinary people decided to heavily promote and embed themselves within this content.”
To track the prevalence of fake news among people on Twitter, Lazer and his colleagues matched U.S. voter registration records to Twitter accounts. Using this method, the researchers could sift out bot accounts and focus solely on human users. After an additional vetting process, the researchers were left with more than 16,000 accounts linked to real, voting U.S. residents that they used for the study.
The researchers found that these “supersharers” of fake news sources were people from across the country but “disproportionately aged 50 or above, Republican, and female,” the report reads.
Data visualization by Hannah Moore/Northeastern University.
In general, “superconsumers,” or the people who had high proportions of information from fake news sources in their newsfeeds, were “more likely to be right-leaning,” the report reads. Lazer’s research doesn’t specifically delve into why right-leaning users tend to share and consume more fake news than their left-leaning counterparts.
Since his research shows that it was a small group of people who “cranked out a ton” of fake news leading up to the 2016 election, Lazer suggested that one way to curb the spread of fake news in 2020 is to limit the number of times a user can post in a given period.
“We’ve found that sharing tons of content is correlated with sharing tons of garbage; this super-sharing disproportionately affects fake news and misinformation,” he said. “If you put a speed limit on how often you can share, it would dramatically cut down the spread of fake news.”
Other tools, like muting and blocking, are already available to Twitter users, Lazer said. These tools can prevent fake news from ever crossing a user’s screen by cutting off the account that’s sharing it in the first place.
These solutions will merely stem the flow of fake news, though, Lazer said. He doubts that there’s a way to eliminate fake news entirely.
“It’s not a problem that’s going to go away,” he said. “It may be a problem that can be managed, and I hope it is. But as long as there are people who believe crazy stuff, they’re going to keep sharing that stuff.”