Why do companies struggle with ethical artificial intelligence?

Companies are increasingly expected to address issues such as justice and fairness with their artificial intelligence programs, but many don’t know how, Northeastern professors found. Photo illustration by Matthew Modoono/Northeastern University

Some of the world’s biggest organizations, from the United Nations to Google to the U.S. Defense Department, proudly proclaim their bona fides when it comes to their ethical use of artificial intelligence.

But for many other organizations, talking the talk is the easy part. A new report by a pair of Northeastern researchers discusses how articulating values, ethical concepts, and principles is just the first step in addressing AI and data ethics challenges. The harder work is moving from vague, abstract promises to substantive commitments that are action-guiding and measurable.

“You see case after case where a company has these mission statements that they fail to live up to,” says John Basl, an associate professor of philosophy and a co-author of the report. “Their attempt to do ethics falls apart.”


New research by Northeastern professors Ronald Sandler (left) and John Basl (right) recommends federal regulations to help companies articulate their values and ethics with their artificial intelligence software. Photo by Matthew Modoono/Northeastern University

Corporate pledges without proper execution amount to little more than platitudes and “ethics washing,” according to the report, published in conjunction with the Atlantic Council, a nonpartisan think tank in Washington. The authors and other speakers will discuss the findings at a virtual event on Sept. 23 at noon ET.

The report recommends greater transparency to help people understand how and why AI decisions are being made about them. “If you deny me a loan and I can’t figure out what caused that decision, then I don’t know what to do for future loan applications,” Basl explains. “Transparency matters.”

Without that harder work, the report adds, the ethically problematic development and use of AI and big data will continue, and the industry will be seen by policymakers, employees, consumers, clients, and the public as failing to make good on its own stated commitments.

Most companies are well-meaning and have done a good job of developing formal metrics for benchmarks such as fairness and bias, but they often cannot pinpoint which of those metrics will actually accomplish their aims, Basl says.

“One of the things this report is meant to do is force companies to reflect cohesively across their values instead of trying to pick and choose,” he says. “It doesn’t tell them how to do any particular thing, but it provides them a process they have to go through if they want to sincerely realize their values.”

The deeper problem big businesses face, he adds, is that they lack the resources to work out what their stated values actually require in practice.

“When I want to say ‘I’m unbiased and I’m being fair,’ what does that actually mean? In different contexts that means different things,” Basl adds.

He and co-researcher Ronald Sandler, director of Northeastern’s Ethics Institute and head of the philosophy and religion department, note that as organizations are increasingly expected to address issues such as justice, fairness, privacy, autonomy, and accountability, they are often asked to do so without the guidance of regulations.

They suggest that a federal agency such as the Federal Communications Commission or the Consumer Financial Protection Bureau may step in to fill the void.

“Regulation is going to have to come,” Basl predicts.

The National Institute of Standards and Technology, a non-regulatory agency within the Commerce Department, issued a proposal in June seeking comments on ways to identify and manage bias in AI.

“We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause,” said NIST.

Basl points to software giant Microsoft as an example of a global company that appears to be doing ethical AI the right way.

“Microsoft, on paper, has a good approach,” he says. “They recognize the need to operationalize ethical principles, they have an ethics board to advise those efforts, and they have researchers dedicated to core ethical issues in AI. They seem to be putting in a good faith effort to try to get these things right.”

Microsoft is in the process of acquiring AI and healthcare speech technology firm Nuance Communications for nearly $20 billion. “AI is technology’s most important priority, and healthcare is its most urgent application,” Microsoft CEO Satya Nadella said when the deal was announced in April.

Even when companies take steps to bolster ethics with their technologies, it can sometimes backfire. Google disbanded an AI ethics advisory board in 2019 just one week after announcing it, following employee opposition to some of its members.

“Corporations are so big and have so many motivational threads that instilling this stuff in a clear, careful way that actually makes any progress is really hard,” says Basl.

The ethics report is a follow-up to earlier research by Sandler and Basl that encourages companies to have properly resourced ethics committees to mitigate digital risks and maintain trust with the public. That 2019 report was driven by growing concerns over data use and security in light of the prevalence of AI technologies.

“The first report tells you how to build the oversight capacity to do ethics, and the second report outlines part of the job of that ethical capacity,” Basl says.

For media inquiries, please contact media@northeastern.edu.