Researchers from Northeastern, MIT, Facebook, Google, and Microsoft make a case for the importance of the emerging field of machine behavior

by Molly Callahan
April 24, 2019

Illustration by Hannah Moore/Northeastern University

Artificial intelligence and machine learning models can be found in almost every aspect of modern life. News-ranking algorithms determine which information we see online, compatibility algorithms influence the people we date, and ride-hailing algorithms affect the way we travel. Despite the pervasiveness of these life-changing algorithms, we don't have a universal understanding of how they work or how they're shaping our world.

So, a team of researchers, including two Northeastern University professors, says that it's time to study artificially intelligent machines the way we study humans.

David Lazer, one of the authors of the paper, is University Distinguished Professor of political science and computer and information sciences at Northeastern. Photo by Adam Glanzman/Northeastern University

A new paper published Wednesday in the scientific journal Nature calls upon scientists from across various disciplines to unite in studying machine behavior. For years, scientists have studied the function, causes, development, and evolutionary history of human behavior. With intelligent machines doing more and more of our collective "thinking," the same interdisciplinary approach needs to be applied to understanding machine behavior, the authors say.

"We're seeing an emergence of machines as agents in human society; these are social machines that are making decisions that have real value implications in society," says David Lazer, one of the authors of the paper and University Distinguished Professor of Political Science and Computer and Information Sciences at Northeastern.

Take, for example, your search engine.
If you're looking for "cures for cancer," you'll get thousands of results in a matter of seconds. Some of those results are more scientifically sound than others.

"There's a subgroup of the internet that believes ingesting pulverized peach pits [or apricot kernels] cures cancer," Lazer says.

Alan Mislove, an associate professor in the Khoury College of Computer Sciences at Northeastern, is one of the authors of the paper. Photo by Matthew Modoono/Northeastern University

Search engines use algorithms to determine the relevance of the information they serve up to users. But these algorithms are often considered proprietary information, so it's nearly impossible to understand exactly how our machines decide what's a relevant result and what isn't.

Let's say the machine's decision-making is based strictly on popularity: the more often people click on a link, the more prominently it appears in the list of results. Now, imagine that someone configures a network of computer servers to "click" on a link claiming that ground-up peach pits cure cancer. This would inauthentically raise the relevance of the link, Lazer says, and cause more humans searching for cancer cures to stumble upon the idea that all it takes is peach pits.

"You could play the algorithm so that it serves up peach pits more often," he says. "So the question becomes: Are people getting bad health information? And what can a place like Google do about it so that people aren't grinding up peach pits instead of seeing an oncologist?"

Without knowing how Google's algorithms work and how they evolve over time, it's impossible to understand how they're affecting the real behavior of real humans.

This is just one example. The researchers include a handful of others, among them the way courts use algorithms to influence bail, sentencing, and parole decisions, and the way banks use algorithms to make decisions about loans.
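The dynamic Lazer describes can be made concrete with a toy sketch. This is purely illustrative: the function, the example domains, and the click counts are all hypothetical, and real search engines rank on many signals beyond raw clicks. But a deliberately naive popularity-only ranker shows how scripted traffic can push a bad result to the top.

```python
from collections import Counter

def rank_by_clicks(results, click_log):
    """Rank results purely by click count (a deliberately naive toy model)."""
    clicks = Counter(click_log)
    return sorted(results, key=lambda r: clicks[r], reverse=True)

results = ["oncology-guidelines.example", "peach-pit-cure.example"]

# Organic traffic: most real users click the reputable link.
organic = ["oncology-guidelines.example"] * 50 + ["peach-pit-cure.example"] * 5
print(rank_by_clicks(results, organic))  # reputable link ranks first

# A coordinated network of servers "clicks" the bad link over and over,
# inauthentically inflating its popularity signal.
botnet = ["peach-pit-cure.example"] * 1000
print(rank_by_clicks(results, organic + botnet))  # bad link now ranks first
```

Because the toy model cannot distinguish a human click from a scripted one, the manipulated log flips the ranking, which is exactly why studying how such systems behave under real-world pressure, not just whether they satisfy their specification, matters.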
In almost every case, these algorithms are judged to be successful or not based on whether they fulfilled their intended function, the researchers write. Did the algorithm in the search engine produce results that were related to "cancer cures"? Did the algorithm that assesses criminal risk produce a sentence that is within the law?

What hasn't been examined as closely or as systematically, they say, is how these algorithms behave. How do they evolve with use? How do machines develop a specific behavior? How do algorithms function within a specific social or cultural environment?

These are the kinds of questions that animal and human behavior scientists have been asking about the animal world for decades, the researchers say. And now it's time to apply the same rigorous, unified study to machines.

Lazer and Alan Mislove, an associate professor in the Khoury College of Computer Sciences at Northeastern, were among more than two dozen researchers from higher education and technology institutions to author the paper. Those institutions include Microsoft, Google, Facebook, the Massachusetts Institute of Technology, Harvard, Yale, and the Max Planck Institute. Researchers from MIT led the effort.

The researchers write that it will take people from a host of scientific disciplines to study the way machines behave in the real world. Understanding how online dating algorithms are changing the societal institution of marriage, or determining whether our interaction with artificial intelligence affects our human development, will require more than just the mathematicians and engineers who built those algorithms, they say.

"We have a collective notion that there's a new field emerging," Lazer says.

For media inquiries, please contact Shannon Nargi at firstname.lastname@example.org or 617-373-5718.