Northeastern researchers and high-level tech industry leaders gathered for a daylong summit on the Oakland campus to share best AI practices and challenges.
OAKLAND, Calif. — Humans benefit most from AI when it helps them do tasks that would otherwise seem impossible, such as spotting new financial fraud trends, a Northeastern University cybersecurity researcher said.
Jessica Staddon, a computer science professor at Northeastern’s Oakland campus, said artificial intelligence can scan data for potential threats while a human reviews the results — “needle-in-the-haystack work” that would otherwise be too time-consuming.
However, every business uses AI differently, making it hard to create universal standards. That’s why Northeastern researchers and industry leaders recently gathered for a summit at the Oakland campus to discuss best practices and challenges.
“Trust in technology is the most important resource that we have,” Rod Boothby, CEO of IDPartner Systems, told nearly 500 attendees. “We still have an opportunity for a trust infrastructure, but we need leadership and collaboration like this event that Northeastern could pull together.”
Standards are evolving on a case-by-case basis, said Ricardo Baeza-Yates, director of research for Northeastern’s Institute for Experiential AI, which works with industry to develop customized protocols.
For example, when Verizon asked for help implementing AI responsibly, he said, the Institute for Experiential AI created a fairness monitoring methodology as part of the company’s risk-assessment framework. The impact of this tool depends on those who use it and whether they receive the necessary support, he said.
“It’s not just the workflows, it’s the people,” Baeza-Yates said. “What are their roles? What skills do they need?”
That means it’s important for the people who design, build and test AI-powered products to reflect the different backgrounds, needs and experiences of the people who will use them, said Lili Gangas, chief technology community officer for the Kapor Foundation.
“There’s a lot of bias in AI,” she said. “Unfortunately if you don’t have voices that represent the community that’s going to use the technology, you’re going to have limited technology.”
Wael Mahmoud, technical machine learning lead for trust and safety at Airbnb, emphasized that AI literacy is key to responsible AI workflows. Like many digital companies, Airbnb uses AI to personalize customer experiences. Mahmoud said responsible AI means ensuring fairness across demographics, maintaining transparency and respecting user data.
Meeting those requirements means businesses must stay flexible and monitor their AI-powered tools regularly.
“Many systems think of fairness as a single entity, but populations shift over time,” he said. “If you create a fairness constraint that works now, it may not work a few years from now.”
While responsibility protocols are essential for complex tasks, there are many actions that AI-powered apps already handle well, said Anthropic Head of Safeguards Vinay Rao. During credit card transactions, automated fraud detection tools verify the transaction on behalf of both the seller and the purchaser.
“It works fine,” Rao said. “You can’t have humans doing this anymore.”
Small businesses also benefit from AI, said Kerry McLean, executive vice president and general counsel at Intuit. “Data analysis tools powered by AI allow small businesses to compete on a larger playing field,” McLean said. “There is potential to build economic prosperity and AI helps us do that.”
The summit was the culmination of a series of events hosted by campuses across the Northeastern University network in partnership with the Mills Institute to bring industry, government and policy leaders together to discuss challenges presented by AI.
As approaches to monitoring AI evolve, some businesses are well on their way to having protocols in place while others have yet to begin, said Alan Eng, director of partnerships for Northeastern’s Silicon Valley campus.
“Some companies are taking a deliberate approach,” he said. “And then there are organizations that are still trying to figure it out.”
Eng led focus groups with industry leaders last fall to assess business needs for training and monitoring related to AI. He noted that within industry he hears the terms responsible AI, AI safety and AI security used interchangeably.
To Eng, this indicates that a common vocabulary is needed to define terms and roles.
“We have work to do,” he said, “and there’s a conversation to be had. It’s important that we work towards each other to have a common language and training.”