To be an internet user in 2024 is like being a hamster running on a wheel. The modern web is largely composed of consumer services whose AI-based algorithms are designed to keep people hooked and logged on, for better and for worse.
“You as a user make choices,” says Tina Eliassi-Rad, a computer science professor at Northeastern University and a core faculty member of the Northeastern Network Science Institute and the Institute for Experiential AI.
“You watch certain things. You buy certain things. You’re producing training data for these AI algorithms, specifically recommendation systems — think Amazon, think Netflix, think Match.com.”
“These AI algorithms produce suggestions to you, and those suggestions supposedly influence your choices,” she adds. “Through that, you’re producing more training data for the algorithm, and round and round we go.”
In essence, the web is made up of a series of human and AI feedback loops driven by user behavior, Eliassi-Rad explains.
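The “round and round we go” dynamic Eliassi-Rad describes can be sketched as a toy simulation (a hypothetical illustration, not the researchers’ actual model): a recommender learns from click counts, its recommendations nudge which items users click, and those clicks become the next round of training data.

```python
import random

def simulate_feedback_loop(n_items=5, n_rounds=200, nudge=0.3, seed=42):
    """Toy human-AI feedback loop: the recommender learns from clicks,
    and its recommendations in turn bias which items get clicked."""
    rng = random.Random(seed)
    # The user's underlying tastes: a fixed preference weight per item.
    taste = [rng.random() for _ in range(n_items)]
    clicks = [1] * n_items  # recommender's (smoothed) click counts

    for _ in range(n_rounds):
        # The recommender suggests the item with the most past clicks.
        recommended = max(range(n_items), key=lambda i: clicks[i])
        # The user's choice blends true taste with the recommendation nudge.
        weights = [taste[i] + (nudge if i == recommended else 0.0)
                   for i in range(n_items)]
        chosen = rng.choices(range(n_items), weights=weights)[0]
        clicks[chosen] += 1  # the choice becomes new training data

    return clicks

counts = simulate_feedback_loop()
```

Even in this minimal sketch, an item that gets an early lead tends to keep being recommended and therefore keeps being clicked, a small-scale version of the self-reinforcing loop the researchers want to study at web scale.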
Eliassi-Rad is one of several Northeastern researchers who have proposed a new area of study they are calling “Human AI Coevolution” to better understand and analyze these feedback loops. Other researchers on the project include Northeastern professors Ricardo Baeza-Yates, Albert-László Barabási and Alessandro Vespignani.
For this research project, the team analyzed AI algorithms used in a variety of services, including online retailers, social media sites, navigation services and AI-based text and image generation clients.
Human-AI interactions are not isolated exchanges, Barabási says.
“They form an intricate network of feedback loops,” he says. “Each click, each choice, each recommendation doesn’t just affect the individual — it ripples across the network, influencing the behavior of others and shaping the evolution of both human society and AI systems.
“Understanding this dynamic at the interface of network science and AI research is crucial if we are to harness these systems for societal benefit rather than allowing them to amplify unintended consequences.”
Baeza-Yates is quick to note that the project is not about “biological evolution,” but rather “about how human behavior and human society is impacted by technology.”
“In this work, we emphasize the urgent need to investigate how humans and AI algorithms continuously influence each other, creating a potentially endless feedback loop that leads to complex and often unintended systemic outcomes,” adds Vespignani.
“This calls for the establishment of a new field of study at the intersection of AI and complexity science, dedicated to understanding, characterizing, and potentially anticipating the large-scale societal impacts of AI deployment,” he notes.
Vespignani explains that the Human AI Coevolution framework “puts at the center the continuous and dynamic interaction between humans and AI systems, where each influences the other’s evolution.”
He highlights that these feedback loops have “far-reaching social implications.”
“These include shaping public opinion, influencing consumer behavior, and even redefining social norms,” he says.
“By providing a structured approach to analyzing these complexities, the framework allows us to systematically identify potential risks, such as polarization or bias, and develop strategies to design AI systems that promote fairness, inclusivity and societal well-being.”