New research reveals algorithms’ hidden political power
New research shows the impact that social media algorithms can have on partisan political feelings, using a new tool that hijacks the way platforms rank content.

How much does someone’s social media algorithm really affect how they feel about a political party, whether it’s one they identify with or one they feel negatively about?
Until now, the answer has escaped researchers because they’ve had to rely on the cooperation of social media platforms.
New multi-institution research published Nov. 27 in Science, co-led by Northeastern University researcher Chenyan Jia, sidesteps this issue with an extension installed on consenting participants' browsers that automatically reranks the posts those users see, in real time and within the platform they're used to.
Jia and her team discovered that after one week, users’ feelings toward the opposing party shifted by about two points — an effect normally seen over three years — revealing algorithms’ strong influence on polarization.
The results held regardless of the party with which the user self-identified.

Hacking the system
Jia, an assistant professor in the School of Journalism and Media Innovation, jointly appointed in the Khoury College of Computer Sciences, says the experiment works rather simply. The browser extension uses a large language model (LLM) to classify and rerank posts on subjects' X feeds according to what the team calls "antidemocratic attitudes and partisan animosity."
More than 1,200 subjects were divided into two groups: one with increased exposure to antidemocratic attitudes and partisan animosity (AAPA for short), the other with decreased exposure. "When you're the user of our study, basically you can't notice anything in terms of the reranking," Jia says.
For the increased-exposure group, more vitriolic AAPA content appeared higher on their X feeds; the reverse was true for the decreased-exposure group.
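The reranking step can be sketched in a few lines of Python. This is a minimal illustration, not the team's released code: the `aapa_score` function is a hypothetical stand-in for the LLM classifier, and the condition names are invented.

```python
# Minimal sketch of condition-based feed reranking.
# `aapa_score` is a hypothetical stand-in for the study's LLM classifier,
# which rates how much antidemocratic-attitude / partisan-animosity
# (AAPA) content a post contains.

def aapa_score(post: str) -> float:
    """Placeholder: return an AAPA intensity in [0, 1] for a post.
    In the real system this would be an LLM classification call."""
    hostile_markers = ("traitor", "enemy", "destroy", "rigged")
    hits = sum(marker in post.lower() for marker in hostile_markers)
    return min(1.0, hits / len(hostile_markers))

def rerank_feed(posts: list[str], condition: str) -> list[str]:
    """Reorder (never remove) posts by AAPA score.

    condition = "increase": high-AAPA posts move toward the top.
    condition = "decrease": high-AAPA posts move toward the bottom.
    """
    return sorted(posts, key=aapa_score, reverse=(condition == "increase"))

feed = [
    "Lovely weather for the county fair this weekend.",
    "The other party are traitors out to destroy the country!",
    "New bipartisan bill funds rural broadband.",
]
print(rerank_feed(feed, "increase")[0])   # hostile post ranked first
print(rerank_feed(feed, "decrease")[-1])  # hostile post ranked last
```

Note that both conditions return the same set of posts; only the ordering changes, mirroring the study's no-removal design.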
[Chart: How antidemocratic attitudes and partisan animosity affect feelings toward the opposing party (average change). Source: Jia, Chenyan, et al. 2025. "Reranking partisan animosity in algorithmic social media feeds alters affective polarization." Science.]
“We didn’t remove anything,” Jia notes. “That’s something that we would like to be really cautious about, because there might be censorship concerns.” Users’ feeds still consisted of posts from the people they followed, and all the same posts remained available for viewing; the LLM controlled only the order in which they appeared.
The experiment was conducted in the United States between July and August 2024. It was a tumultuous time, Jia recalls, including President Joe Biden’s withdrawal from the presidential race and the attempted assassination of then-presidential candidate Donald Trump in Pennsylvania. Ironically, the diversity of political events helped researchers better understand the impact of partisan polarization and hostility, she continues.
Users’ feelings toward the party they didn’t identify with, their “out-party,” were gauged with a “very well-established scale in political science,” Jia says. Attitudes toward the opposing party were rated warmer (closer to 100, more positive) or cooler (closer to zero, more negative). A score of 50 would indicate someone who feels truly neutral about their out-party.
No matter whether the subject was Republican or Democrat, those in the reduced-AAPA group warmed toward their out-party by about two points. Those in the increased-AAPA group, who were exposed to more partisan language, cooled toward their out-party by about two points.
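On that 0-to-100 thermometer, the reported effect is just the average per-participant change from before to after the experiment within each condition. A toy calculation, with invented ratings rather than the study's data:

```python
# Toy illustration of the feeling-thermometer effect size.
# All ratings below are invented, not the study's data.
# Ratings run from 0 (coldest toward the out-party) to 100 (warmest).

def average_shift(before: list[float], after: list[float]) -> float:
    """Mean per-participant change in out-party thermometer rating."""
    return sum(post - pre for pre, post in zip(before, after)) / len(before)

# Decreased-AAPA group: each participant warms slightly.
decreased_pre  = [40, 55, 30, 48]
decreased_post = [42, 57, 32, 50]
print(average_shift(decreased_pre, decreased_post))  # → 2.0 (warming)

# Increased-AAPA group: each participant cools slightly.
increased_pre  = [45, 52, 38, 60]
increased_post = [43, 50, 36, 58]
print(average_shift(increased_pre, increased_post))  # → -2.0 (cooling)
```

A positive average shift means warming toward the out-party; a negative one means cooling, matching the roughly two-point swings the study reports in each direction.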
While a two-point shift may not sound like much, Jia says that this is actually a very pronounced change, “comparable in size to three years of change in the United States.”
But Jia’s experiment lasted a single week.
Public knowledge
Jia hopes this research will have a real-world impact in two ways. First, it lets other scientists study the effects social media platforms have on the public without depending on the cooperation of corporations with their own vested interests.
Jia and her team have made the programming behind the browser extension free and open source, available to any other researcher who might want to conduct this kind of research.
Second, Jia hopes it will give the public more knowledge of how their social media algorithms influence them at every turn, and a greater sense of agency over their own feeds.
“In our daily life, social media has always been individually focused,” Jia says. “It’s very oriented by the engagement metrics, how many reposts you get, how many likes you get, how many comments.” This research, she hopes, marks a step toward something new, a way to use social media algorithms for real, “positive societal impact, like reducing affective polarization.”