Northeastern University journalism student earns national attention for revealing media bias against female presidential candidates

From top left: Democrats Elizabeth Warren, Cory Booker, Bernie Sanders, Amy Klobuchar (AP photos)

Don’t get him wrong. Alex Frandsen is grateful for the attention to his work, but it’s also making him slightly uncomfortable.

CNN ran a segment on the political research by Frandsen, a Northeastern senior. The Washington Post and other outlets cited his work, and it has sparked a number of spirited exchanges on social media.

Alex Frandsen. Photo by Matthew Modoono/Northeastern University

Frandsen and his editor, Northeastern journalism professor Aleszu Bajak, discovered in February that the women who had entered the 2020 presidential race were being described in the media more negatively than the men.

An updated review of the five most-read news sites—The Washington Post, The New York Times, Huffington Post, CNN, and Fox News—has continued to show that Democrats Bernie Sanders, Cory Booker, and Beto O’Rourke are being portrayed more positively than female rivals Elizabeth Warren, Amy Klobuchar, Kamala Harris, and Kirsten Gillibrand. The results were posted by Bajak on his digital storytelling site, Storybench, which is supported by Northeastern’s School of Journalism.

Frandsen has been surprised and gratified by the response. But he is also concerned by the reactions to data that, he cautions, are the preliminary results of an ongoing survey.

“I understand why people pulled it out as such a conclusive data point, that women are being described more negatively in the media,” Frandsen says. “I think we have enough past evidence and anecdotal evidence to say that’s probably true.”

At this early stage of the campaign, Frandsen says, his nascent research isn’t meant to provide final answers. Instead, he hopes that surveys like his will be viewed as a starting point that will further awareness of gender bias in the months to come.

“Right now, this is a good start to the discussion,” says Frandsen, who is planning another research update before graduation. “It’s something to keep an eye on. But it’s not a conclusion.”

Frandsen, who graduates with a journalism degree in May, had been searching for a project involving politics when Bajak helped him devise a method to rate the “sentiment” of media coverage in the early months of the presidential campaign. In the 2016 election, Hillary Clinton and many of her supporters argued that she was depicted more harshly than Donald Trump. Now, with an unprecedented number of female contenders entering the crowded Democratic primary, there was an opportunity to test for media bias across a number of candidates.

Because Frandsen was working alone (with Bajak’s oversight), he was forced to limit his initial survey to 10 news articles per candidate. Each article measured 500 words or more and was selected by Frandsen.

“I would go to these news sites and find articles that were solely about that candidate,” Frandsen says. “If I had more than two stories to choose from, I tried to just randomly pick which ones to use, without paying attention to the headlines, the body content, or anything.”

As much as Frandsen has appreciated the response to his project, he is upfront about its shortcomings.

“Ideally, what I would like to do is collect all of the articles that were ever written about them, and figure out some way to randomize them with the computer. But given the limited hours I’ve had to work on it, I had to limit my sample size.”
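What Frandsen describes, pulling every article ever written and letting the computer choose, is essentially simple random sampling. A tiny sketch of what that could look like, with a purely hypothetical list of article URLs standing in for a real archive:

```python
import random

# Hypothetical pool standing in for "all of the articles that were ever
# written" about one candidate; real work would scrape or query an archive.
all_article_urls = [f"https://example.com/candidate-story-{i}" for i in range(1, 201)]

# Draw a reproducible random sample, ignoring headlines and body content.
random.seed(2020)
sample = random.sample(all_article_urls, k=20)
print(sample[:3])
```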

With Bajak’s help, Frandsen used a variety of software tools to search for positive and negative (or critical) words within each article.
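The article does not name the specific software, so the following is only a minimal sketch of the general lexicon-counting idea. The POSITIVE and NEGATIVE word sets and the score_article function are illustrative assumptions, not the project's actual code; a real analysis would use a published sentiment lexicon with thousands of entries.

```python
import re
from collections import Counter

# Illustrative (assumed) mini-lexicons, not the lexicon used in the study.
POSITIVE = {"popular", "strong", "successful", "support", "win", "praise"}
NEGATIVE = {"controversy", "criticism", "scandal", "attack", "weak", "lose"}

def score_article(text: str) -> dict:
    """Count positive and negative lexicon hits and return a net score."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return {"positive": pos, "negative": neg, "net": pos - neg}

# Example: score a single article and normalize by length, since a long
# profile naturally contains more sentiment words than a short brief.
article = "The senator drew praise for a strong debate, despite lingering controversy."
result = score_article(article)
print(result, "per 100 words:", 100 * result["net"] / max(1, len(article.split())))
```

Normalizing by article length is one reasonable design choice here, so that 2,000-word profiles and 500-word briefs can be compared on the same footing.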

In a March 29 update, the sample for each of the original half-dozen Democrats was doubled to 20 articles. (O’Rourke’s sample was limited to 10 articles, because he didn’t announce he was running until March 14.)

The bias was still evident. The top terms applied to Warren, the Massachusetts senator, were test, DNA, native, tribal, and Sioux—all related to the controversy surrounding her claim of Native American ancestry, which she attempted to prove by undergoing a genealogical test last year.

Aleszu Bajak manages the School of Journalism’s Media Innovation and Media Advocacy graduate programs and teaches courses in journalism, coding, and data visualization. Photo by Adam Glanzman/Northeastern University

Gillibrand, the New York senator, has been linked to controversies involving sexual harassment (the top term associated with her, Franken, refers to her call for Minnesota Senator Al Franken’s resignation before allegations against him could be vetted) and her relaxed stance on gun control early in her political career (gun is the fourth term in her coverage).

By comparison, the top terms applied to Booker, the New Jersey senator, are Newark (where he served as mayor), schools, and charter (which reflect his proposals for education reform). Sanders, the Vermont senator who ranks first in the sentiment rankings of Frandsen’s study, is defined mainly by the terms socialist (which is not necessarily a negative term, as many Democratic voters view this as his strength) and tuition (based on his mission to provide free college for all).
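Those “top terms” amount to the most frequent substantive words across a candidate’s coverage. A rough sketch of that kind of tally, assuming a simple hand-rolled stopword filter rather than whatever tooling the project actually used (the candidate article list shown is hypothetical):

```python
import re
from collections import Counter

# Small illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on",
             "is", "was", "said", "that", "with", "her", "his", "she", "he"}

def top_terms(articles: list[str], n: int = 5) -> list[tuple[str, int]]:
    """Return the n most common non-stopword terms across a candidate's articles."""
    counts = Counter()
    for text in articles:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

# Hypothetical usage: one list of article texts per candidate.
warren_articles = ["Warren released a DNA test addressing the tribal ancestry controversy ..."]
print(top_terms(warren_articles))
```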

To strengthen the credibility of the findings, Bajak says he would like to increase the number of articles, survey more media outlets, and find ways to weed out false positives and false negatives, cases in which terms actually refer to someone other than the candidate in question. Another goal is to add real-time analysis of social and broadcast media coverage.
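One plausible way to cut down on those misattributed terms, offered here as an assumption rather than a description of the team’s actual plan, is to score only the sentences that mention the candidate by name:

```python
import re

def candidate_sentences(text: str, names: set[str]) -> list[str]:
    """Keep only sentences that mention the candidate, so sentiment words
    aimed at other people are less likely to be counted against them."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if any(name.lower() in s.lower() for name in names)]

article = ("Klobuchar outlined her infrastructure plan. "
           "Meanwhile, critics attacked a rival's scandal-plagued campaign.")
print(candidate_sentences(article, {"Klobuchar", "Amy Klobuchar"}))
# Only the first sentence survives; the negative words in the second,
# which refer to someone else, would no longer count toward her score.
```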

“We are dipping our toes into it,” says Bajak, who manages the School of Journalism’s Media Innovation and Media Advocacy graduate programs and teaches courses in journalism, coding, and data visualization. “Most of the comments we’ve been getting on Twitter and Facebook are about how this [appearance of bias] isn’t surprising at all.”

But final conclusions should not be drawn from their sentiment scoring model, especially because it is based on a relatively small sample of articles. The real question, as Frandsen and Bajak write in their latest installment on Storybench, is: “What exactly is causing this apparent disparity in media portrayal?”

“We’re not saying this is the be-all and end-all,” Bajak says. “We have to have our skeptical hats on about what this limited dataset is telling us. The way that this analysis is done is far from perfect.”

Several students and advisors—led by Northeastern journalism professor Meg Heckman—have shown interest in deepening the Storybench survey as Frandsen prepares to move on to a yearlong fellowship in Washington, D.C., with the nonpartisan Friends Committee on National Legislation, which lobbies for issues in the public interest.