What do corporations need to ethically implement AI? Turns out, a philosopher

Director of Responsible AI Practice and research associate professor Cansu Canca has been named one of Mozilla’s Rise25 for work that “fosters an AI environment of equality and empowerment.”

Cansu Canca, director of Responsible AI Practice at the Institute for Experiential AI and a research associate professor at Northeastern University. Photo by Alyssa Stone/Northeastern University.

Cansu Canca is full of questions — but that’s her job.

The director of Responsible AI Practice at the Institute for Experiential AI and a research associate professor in the department of philosophy and religion at Northeastern University, Canca has made a name for herself as an ethicist tackling the use of artificial intelligence.

As the founder of the AI Ethics Lab, Canca leads a team of “philosophers and computer scientists, and the goal is to help industry. That means corporations as well as startups, or organizations like law enforcement or hospitals, to develop and deploy AI systems responsibly and ethically,” she says.

Canca has also worked with organizations like the World Economic Forum and Interpol.

But what does “ethical” mean when it comes to AI? That, Canca says, is exactly the point.

“A lot of the companies come to us and say, ‘Here’s a model that we are planning to use. Is this fair?’” she says.

But, she notes, there are “different definitions of justice, distributive justice, different definitions of fairness. They conflict with each other. It is a big theoretical question. How do we define fairness?”

“Saying that ‘We optimized this for fairness’ means absolutely nothing until you have a working, proper definition,” she notes, and that definition shifts from project to project.
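To see the conflict concretely, consider a toy sketch in Python (with invented numbers; an illustration, not Canca’s work). Two standard fairness definitions from the machine-learning literature, demographic parity (equal approval rates across groups) and equal opportunity (equal approval rates among qualified applicants), can disagree about the very same model:

# A toy sketch (invented numbers) showing that two standard fairness
# definitions can disagree about the same model.

def demographic_parity_gap(outcomes):
    # Gap in approval rates between groups ("fair" = equal approval rates).
    rates = [sum(pred for pred, _ in recs) / len(recs) for recs in outcomes.values()]
    return max(rates) - min(rates)

def equal_opportunity_gap(outcomes):
    # Gap in true-positive rates: among actually qualified applicants,
    # how often does each group get approved?
    rates = []
    for recs in outcomes.values():
        qualified = [pred for pred, actual in recs if actual]
        rates.append(sum(qualified) / len(qualified))
    return max(rates) - min(rates)

# Each record is (model_approved, actually_qualified) for a hypothetical loan model.
outcomes = {
    "group_a": [(True, True), (True, True), (False, False), (False, False)],
    "group_b": [(True, True), (False, True), (True, False), (False, False)],
}

print(demographic_parity_gap(outcomes))  # 0.0 -> "fair" by demographic parity
print(equal_opportunity_gap(outcomes))   # 0.5 -> "unfair" by equal opportunity

The same model passes one fairness test and fails the other, which is why “optimized for fairness” means nothing until a specific definition is chosen.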

Now, Canca has been named one of Mozilla’s Rise25 honorees, an award recognizing individuals “leading the next wave of AI — using philanthropy, collective power, and the principles of open source to make sure the future of AI is responsible, trustworthy, inclusive and centered around human dignity,” the organization wrote in its announcement.

The award goes to five individuals in each of five categories; Canca was named in the “Change Agents” category, among honorees whose “work fosters an AI environment of equality and empowerment,” the organization wrote.

“The biggest risk that AI holds comes from the fact that it is systematic and efficient. So the risk is,” Canca points out, “if you create a bad system, it is systematically and efficiently bad.”

A poorly designed or poorly implemented system, she continues, could be “systematically and efficiently discriminating, for example. 

“But the flip side of it is that if you create a good system that is better than now, it [will be] systematically and efficiently good, and will make [things] systematically and efficiently better than now.”

The arrival of artificial intelligence has brought tremendous instability to business and industry, Canca says, but it also brings with it tremendous opportunities for change.

“If we haven’t yet, we have to face the fact that the world is a massively unethical place, and we create a ton of suffering, needless suffering,” Canca says. “Any shift is an opportunity to fix some of those [problems], and any shift holds the risk that we will just create more.”

“This is a great time, and a great reason, to look at what we are doing and say, does this life make sense?”

“We ask these questions all the time when we make public health decisions, environmental ethics decisions, business ethics decisions,” Canca notes. “It’s just that now, this is a different application area” with its own dilemmas and unique opportunities.

But while “everything is basically a philosophical question,” Canca says, the key to equitable choices has to come at the beginning of the design process, as part of a “systematic practice.”

Fairness, equity, justice, and what exactly is meant by those terms, have to be made “a part of the design of the AI system and AI product,” she says.

She calls this an “ethics by design approach, where you don’t just make the value judgment, you turn it into a design action.”

Recently, Canca has begun considering how to give companies market incentives to adopt responsible AI.

“When you think about responsible AI implementation, incentives really matter,” she says. “We usually look at incentives from the policy perspective, but the market is faster and more agile.”

Responsible investing would give companies “another reason, not just to follow the policy,” she says, but to consider, “‘How can we get the best investment?’ Well, maybe if you have better practices,” and investors will take notice.

The artificial intelligence revolution isn’t slowing down, and neither is Canca. She recently collaborated on the Responsible AI Playbook for Investors, published through the World Economic Forum, and, in the public sector, has “created a toolkit for law enforcement and trained global police officers” in responsible AI use.

“Organizations like law enforcement,” she says, are “in desperate need of using systems that are useful and would help with their extremely difficult tasks.” But, she adds, “if they use it badly, they are one of the most critical ones that would create the most harm.”

Canca was originally interested in the ethics of health and medicine; her early subjects were patients, physicians and insurance programs. But when AI systems arrive in health care, “they do predictions, they do resource allocation,” she says. “And those are very value-laden decisions — but nobody knows how the system really works.”

She assumed at first that other ethicists were already working on this question — but when she looked around, “I couldn’t find any group that was working on this in practice, in collaboration with the industry from an ethics perspective, meaning philosophy,” she says.

“You have lawyers working on it, you have AI scientists working on it. But the question that we are asking, the fundamental question of, ‘What is the right thing to do?’ is an ethics question.”

“We learn from all these other fields,” she continues, “but the question itself is a philosophy question.”

The year before the COVID-19 pandemic, Canca gave talks around the world on this overlooked subject. “You can’t answer AI ethics questions in practice if you do not have people who are experts in AI, and if you don’t have people who are experts in ethics.”

“I pushed really, really hard.”

All that effort hasn’t gone unnoticed. The Mozilla Rise25 awards ceremony will be held Aug. 13 in Dublin.