We know how to make AI work. How can we make it work fairly?

“Instead of checking after the fact, what we really want to do is bake fairness into the design process itself,” says David Liu, a doctoral student at Northeastern who recently received a National Science Foundation grant to research the problems of fairness and reproducibility in artificial intelligence. Photo by Ruby Wallau/Northeastern University

In computer science, fairness is usually an afterthought: long-term moral questions take a backseat to more pressing technical issues. The first priority is to make programs that work. Are they ethical, though? The general response: We’ll find out later.

“The most common approach to fairness right now is to take an existing model and check whether it’s fair,” says David Liu, a doctoral student at Northeastern, who recently received a National Science Foundation grant to research fairness and reproducibility problems in artificial intelligence. 


Liu says data isn’t the only factor that can make a program unfair. At some point a human has to decide how to use the information presented, which creates opportunities to introduce biases. Photo by Ruby Wallau/Northeastern University

“Instead of checking after the fact, what we really want to do is bake fairness into the design process itself,” he says. 

In recent years, racial biases against people of color have been exposed in artificial intelligence models that review housing applications and inform bail decisions, to name just two examples of discriminatory models with life-changing consequences.

“Oftentimes these programs are more accurate for one demographic,” Liu says. “It’s not surprising that the model works better for white men if that’s what it’s trained on.” In this case, using data that more accurately represents the population could be one way to incorporate fairness into the program’s design. 
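As a rough illustration (this sketch is not drawn from Liu's research), checking for the kind of accuracy gap he describes can start with something as simple as computing a model's accuracy separately for each demographic group. The `model`, data arrays, and group labels below are hypothetical:

```python
# Minimal sketch of a post-hoc fairness audit: compare accuracy per group.
# `model` is assumed to expose a scikit-learn-style .predict() method;
# X, y, and group are hypothetical feature, label, and demographic arrays.
import numpy as np

def accuracy_by_group(model, X, y, group):
    """Return the model's accuracy computed separately for each group."""
    preds = model.predict(X)
    return {
        g: float(np.mean(preds[group == g] == y[group == g]))
        for g in np.unique(group)
    }

# A large gap, e.g. {"group_a": 0.93, "group_b": 0.78}, is the kind of
# per-demographic disparity Liu describes.
```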

But Liu says data isn’t the only factor that can make a program unfair. For example, even if the database that an algorithm uses is completely representative and neutral, at some point a human has to decide how to use the information presented, which creates opportunities to introduce biases. “We have to acknowledge that it’s not just the data. It’s the whole process,” he says.
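One way to picture that human step (again, an illustration rather than Liu's own example): even with a fixed model and fixed data, someone has to choose a decision threshold, and that single choice can produce very different outcomes for groups whose score distributions differ.

```python
# Illustrative only: a human-chosen cutoff can drive disparate outcomes
# even when the model and data are held fixed.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical model scores for applicants in two groups.
scores_a = rng.normal(0.70, 0.10, 10_000)
scores_b = rng.normal(0.60, 0.10, 10_000)

THRESHOLD = 0.65  # a design decision made by a person, not by the data
print(f"approved in group A: {np.mean(scores_a >= THRESHOLD):.0%}")  # ~69%
print(f"approved in group B: {np.mean(scores_b >= THRESHOLD):.0%}")  # ~31%
```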

Since starting his research at Northeastern earlier this year, Liu has been studying philosopher John Rawls’ theory of justice, which holds that each member of society should have the greatest degree of liberty compatible with the same liberty for everyone else.

“It’s a topic that other people have thought about way longer than I have,” he says. “I’ve been working with a philosopher at Northeastern who has opened my eyes to other definitions of fairness. Going forward, one of my main interests is broadening my definition of fairness and tailoring certain definitions to specific domains.” 

In addition to studying fairness, Liu plans to examine another problem that plagues many fields of science, including artificial intelligence: the inability to reproduce an experiment. 

Successfully replicating an experiment is an essential step in determining whether a study’s results are accurate. But not all artificial intelligence experiments produce the same results when rerun, and not all algorithms generalize to populations beyond those of the original study, Liu explains.

“Similar to fairness, reproducibility is something people worry about at the end of the process,” Liu says. “But it would be much more effective if it was prioritized earlier on.” This would not only enable other scientists to check the accuracy of the results, but it could also help future researchers broaden the scope of the original experiment to include new populations of people, Liu says. 
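In practice, building reproducibility in from the start can begin with mundane steps like fixing and recording every source of randomness before an experiment runs. A minimal sketch (the seed value and setup are illustrative, not taken from Liu's work):

```python
# Minimal sketch: fix randomness up front so a rerun of the experiment
# produces the same shuffles, samples, and initializations.
import random

import numpy as np

SEED = 42  # hypothetical fixed seed, recorded alongside the results

def set_seeds(seed: int) -> None:
    """Seed the standard-library and NumPy random number generators."""
    random.seed(seed)
    np.random.seed(seed)

set_seeds(SEED)
# Anything that draws from these generators will now behave identically
# on every run, which is one small piece of making a study replicable.
```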

Liu is excited about his time at Northeastern, during which he hopes to bridge his cross-disciplinary interests. “The interdisciplinary environment lets me focus on the technical side of computer science while maintaining the overall motive, which is social sciences,” he says. “I don’t want to just combine these topics, but really see how they inform each other.”

For media inquiries, please contact media@northeastern.edu.