The savviest business leaders know that developing a strong artificial intelligence governance framework is not just the right thing to do; it's also good for the bottom line, especially as investors become more knowledgeable and regulators come knocking.
And implementing any AI system responsibly starts with thoughtful leadership at the executive level, according to Cansu Canca, director of responsible AI practice at Northeastern University’s Institute for Experiential AI and a research associate professor in philosophy.
“It has to be the case that the leadership takes it upon themselves and really champion this,” Canca says. “Because without them, it’s just not going to stick.”
That top-down approach works across industries, Canca says, from a financial services company looking to reduce the time required to analyze thousands of data sets to a big-box retailer looking to automate its customer service operations.
“It’s not only the ethicists or the developers that are worried about what may go into an AI system,” she says.
Canca was the faculty lead at this week’s “Responsible AI Executive Education” course held on Northeastern’s Boston campus. The two-day training was designed for current and aspiring C-suite executives looking to pioneer responsible AI practices in their organizations.
The teaching team included Ricardo Baeza-Yates, director of research at Northeastern’s Institute for Experiential AI, AI ethicists Laura Haaber Ihle and Matthew Sample, research scientist Annika Marie Schoene, postdoctoral researcher Muhammad Ali, and senior fellow Steve Johnson.
As the CEO of Notable Systems Inc., which uses AI to help health care providers with data input and entry, Johnson has developed his own AI framework in the form of five rules, which he often shares with business executives.
Rule No. 1: Don’t believe the magic.
“If a vendor says, ‘Plug it in. It just works. You guys can find something else to do,’ it’s probably not true,” Johnson says.
A company may also claim its system is 99% accurate, but users need to care about the 1% of the time the system fails, because that’s when they could run into problems, he says.
Rule No. 2: Don’t become distracted by the words “artificial intelligence.”
“I like to replace it with the word ‘software,’” he says. “Proceed as you have in your career with any software system. Look into its intricacies and its peculiarities, its strengths and weaknesses. You always do that with any software system.”
Rules Nos. 3 and 4 go hand in hand. First, trust but verify that your system is working correctly on a continual basis. Second, check in with your vendor to make sure they have a system in place that allows users to check the competence of the machine to perform particular tasks.
Johnson used his own company as an example. His business works in data entry, and the AI systems it uses let users know when they fail to recognize a particular piece of information so a person can step in.
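In practice, that kind of competence check often amounts to routing low-confidence outputs to a human reviewer. The sketch below is purely illustrative; the threshold, field structure, and names are assumptions, not Notable Systems' actual implementation.

```python
# Hypothetical sketch: flag low-confidence extractions for human review.
# The 0.9 threshold and the ExtractedField structure are illustrative
# assumptions, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class ExtractedField:
    name: str          # e.g. "patient_id" or "date_of_service"
    value: str         # what the model read from the document
    confidence: float  # the model's own confidence score, 0.0 to 1.0

REVIEW_THRESHOLD = 0.9

def route_for_review(fields: list[ExtractedField]) -> tuple[list, list]:
    """Split extracted fields into auto-accepted and human-review queues."""
    accepted, needs_review = [], []
    for field in fields:
        if field.confidence >= REVIEW_THRESHOLD:
            accepted.append(field)
        else:
            # The system admits it may be wrong: a person verifies this field.
            needs_review.append(field)
    return accepted, needs_review
```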
Rule No. 5 is to ask vendors how their system helps close the loop.
“What that means is how does the system collect and learn from the result, and from any human intervention that happens?” Johnson says. “Because the human element, your judgment and guidance, is a golden source of truth. That’s what’s going to keep your system safe and sound and improving.”
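One minimal way to picture "closing the loop" is simply recording every human correction as labeled feedback the vendor can audit and retrain against. The function and field names below are illustrative assumptions, not a description of any specific product.

```python
# Hypothetical sketch of "closing the loop": log each human correction
# as labeled feedback so the system can learn from it or be audited.
import json
from datetime import datetime, timezone

def log_correction(field_name: str, model_value: str, human_value: str,
                   path: str = "feedback_log.jsonl") -> None:
    """Append one human correction, the 'golden source of truth,' to a feedback log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "field": field_name,
        "model_output": model_value,
        "human_correction": human_value,
        "agreed": model_value == human_value,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```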
Andrew Grover, chief risk officer at Bangor Savings Bank, found the conference a useful dissection of the issue.
He found Johnson’s five rules an easy guide to follow. Bangor Savings Bank is hoping to take advantage of AI to increase efficiency, he says.
“I keep quoting Jurassic Park,” he says. “I think Jeff Goldblum’s character made the comment, ‘Just because we can, does it mean we should?’ I keep saying that with AI. There’s a lot of things we can do, but we constantly have to challenge [ourselves] and do the right thing.”