Northeastern launches AI Ethics Advisory Board to help chart a responsible future in artificial intelligence

Illustration by Zach Christensen/Northeastern University

The world of artificial intelligence is expanding, and a group of AI experts at Northeastern wants to make sure it does so responsibly.

Self-driving cars are hitting the road and others’ cars. Meanwhile, a facial recognition program led to the false arrest of a Black man in Detroit. Although AI has the potential to alter the way we interact with the world, it is a tool made by people, and it carries their biases and limited perspectives. But Cansu Canca, founder and director of the AI Ethics Lab, believes people are also the solution to many of the ethical barriers facing AI technology.

With the AI Ethics Advisory Board, Canca, co-chair of the board and AI ethics lead of the Institute for Experiential AI at Northeastern, and a group of more than 40 experts hope to chart a responsible future for AI.

“There are a lot of ethical questions that arise in developing and using AI systems, but also there are a lot of questions regarding how to answer those questions in a structured, organized manner,” Canca said. “Answering both of those questions requires experts, especially ethics experts and AI experts but also subject matter experts.”

The board is one of the first of its kind, and although it is housed at Northeastern, it comprises multidisciplinary experts from inside and outside the university, with expertise ranging from philosophy to user interface design.

“The AI Ethics Advisory Board is meant to figure out: What is the right thing to do in developing or deploying AI systems?” Canca said. “This is the ethics question. But to answer it we need more than just AI and ethics knowledge.”

The board’s multidisciplinary approach also involves industry experts like Tamiko Eto, the research compliance, technology risk, privacy and IRB manager for healthcare provider Kaiser Permanente. Eto stressed that whether AI is utilized in healthcare or defense, the impacts need to be analyzed extensively.

“The use of AI-enabled tools in healthcare and beyond requires a deep understanding of the potential consequences,” Eto said. “Any implementation must be evaluated in the context of bias, privacy, fairness, diversity and a variety of other factors, with input from multiple groups with context-specific expertise.”

The AI Ethics Advisory Board will function as an external, objective consultant for companies that are grappling with AI ethical questions. When a company contacts the board with a request, it will determine the subject matter experts best suited to tackling that question. Those experts will form a smaller subcommittee that will be tasked with considering the question from all relevant perspectives and then resolving the case.

But the aim is not only to address the concerns of specific companies. Canca and the board members hope to answer broader questions about how AI can be implemented ethically in real-world settings.

“The mindset is for truly solving questions, not just ‘managing’ the question for the client but truly solving the question, and contributing to the progress of the practice,” Canca said. “This is not a review board or a compliance board. Our approach is, ‘Let’s figure out the ethical issues and create better technologies. Let’s enhance the technology with all these multidisciplinary capabilities that we have, that we can bring on board.’”

It’s an approach that Ricardo Baeza-Yates, co-chair of the board, director of research for the Institute for Experiential AI and professor of practice in the Khoury College of Computer Sciences, said is necessary to tackle the privacy and discrimination issues most commonly seen in AI use. Baeza-Yates said the latter is especially concerning, since it rarely has a simple technical fix.

“This sometimes comes from the data but also sometimes comes from the system,” Baeza-Yates said. “What you are trying to optimize can sometimes be the problem.”

Baeza-Yates points to facial recognition programs and e-commerce AI that have profiled people of color and reinforced pre-existing biases and forms of discrimination. But the most well-known ethical problem in current AI use is the self-driving car, which Baeza-Yates likened to the trolley problem, a famous philosophical thought experiment.

“We know that self-driving cars will kill fewer people [than human drivers], for sure,” Baeza-Yates said. “The problem is that we are saving a lot of people, but also we will kill some people who before were not in danger. Mostly, this will be vulnerable people, women, children, old people that, for example, didn’t move so fast like the model expected, or the kid moved too fast for the model to expect.”

Conversations about the ethical implications of technologies like the self-driving car are only beginning inside companies. For now, AI ethics seems very “mysterious” to a lot of companies, Canca said, which can lead to confusion and disinterest. With the board, Canca hopes to spark a more meaningful, engaged conversation and put an ethics-based approach at the core of how companies handle the technology moving forward.

“We can help them understand the issues they are facing and figure out the problems that they need to solve through a proper knowledge exchange,” Canca said. “Through advising, we can help them ask the right questions and help them find novel and innovative solutions or mitigations. Companies are getting more and more interested in establishing a responsible AI practice, but it’s important that they do this efficiently and in a way that fits their organizational structure.”

For media inquiries, please contact Shannon Nargi at s.nargi@northeastern.edu or 617-373-5718.