Why responsible AI is important to the future of business. Northeastern events will address best practices

The Institute for Experiential AI has a series of upcoming events centered around ethics and responsibility. Photo by Matthew Modoono/Northeastern University

Over the past few years, companies large and small — in industries ranging from industrial manufacturing and biotechnology to consumer electronics and health care — have touted the transformative impact AI will have on their businesses and on humanity as a whole.

But as technology companies like OpenAI, Midjourney, Google and Microsoft continue to develop these technologies at a rapid pace, questions have arisen about their ethical implications.

How are these AI systems being trained and developed? What can be done to make sure they are being created and implemented fairly and justly? What resources can be provided for end users to help them better understand how these technologies work?

Those kinds of questions have certainly been top of mind for researchers at Northeastern’s Institute for Experiential AI. This month, the institute will host a series of events about “Leading with AI Responsibility,” including a workshop and conference.

One goal of the events is to demystify the technology and help business leaders and the public better understand how AI models are actually developed in the real world, says Usama Fayyad, executive director of the Institute for Experiential AI.

“There’s a lot of misunderstanding about AI, especially in academia and the public,” Fayyad says. “The reason this is called the Institute for Experiential AI (is because) Experiential AI is our code word for ‘humans in the loop,’” he says, noting that companies like Google hire armies of people to review the output these AI models produce.

The series of events kicks off Oct. 17 with an invitation-only workshop titled “Shaping Responsible AI: From Principles to Practice.” The workshop will be led by Cansu Canca, director of responsible AI practice and co-chair of the AI Ethics Advisory Board at the Institute for Experiential AI, and Ricardo Baeza-Yates, the institute’s director of research and the board’s other co-chair.

In the workshop, participants will work to “define, discuss and develop the essential elements of robust RAI (Responsible Artificial Intelligence) frameworks, best practices and ‘grand challenges,’” according to the institute’s website. 

But what does it really mean to build AI responsibly? It starts by bringing a diverse set of voices into the conversation, says Canca. 

“The core of the question lies in ethics,” she says. “But practicing responsible AI, developing these systems, designing these interfaces, putting them into practice in businesses, all of these require expertise from multiple perspectives. You need computer scientists who are working in this field. You need designers. You need policy people.” 

More than 40 AI leaders — including business executives, academics and policy makers — were invited to take part in the workshop.  

On Oct. 18, the institute will host the “AI Business Leaders Conference” at East Village on the Boston campus. The all-day event will feature heavy hitters working in industry, including executives from Google, T-Mobile, Intuit and McDonald’s.

The institute prides itself on taking a multidisciplinary approach to understanding AI. The event will feature panel discussions centered around a variety of topics and issues, including AI in finance, enterprise use cases and venture capital fundraising. 

Sam Scarpino and Eugene Tunik will serve on the panel “AI for the World’s Big Challenges: Health, Life Sciences, Climate and Sustainability.” Scarpino is the director of AI + Life Sciences at the institute, and Tunik is the institute’s director of AI + Health. Joining them will be Ardeshir Contractor, director of strategic research projects at the institute and a member of its AI for Climate and Sustainability (AI4CaS) focus area.

“AI is still not really utilized in health care at any appreciable scale. In fact, health care is the fifth-least digitized sector,” says Tunik, who is also a professor in the Department of Physical Therapy, Movement and Rehabilitation Sciences. “The AI + Health division at the Institute for Experiential AI at Northeastern University is focusing on ways that AI can be better utilized by health-care professionals.”

The potential use cases for AI in health care are vast, Tunik adds. With AI, clinicians could improve their diagnostic capabilities and create more tailored rehabilitation plans for individuals. 

But it’s important these technologies are analyzed and studied closely, especially when it comes to using them in a medical setting, Tunik explains. 

“Health care follows a key tenet: ‘Do no harm.’ We still don’t fully know the potential biases and risks of AI. The stakes are too high in health and health care to deploy technology that is unvetted,” he adds. “This is why we are approaching any AI + Health project — whether it is a research project, a clinical project for a health-care organization, or an industry project — with an AI ethicist as part of the team. This allows us to develop the right solution in the proper way.”

Following the conference, the institute’s annual “AI Career Fair” will be held on Oct. 19 at the Curry Student Center.

The fair is designed to attract undergraduate and graduate students interested in pursuing roles that will require them to use AI, data science and large language models. 

Cesareo Contreras is a Northeastern Global News reporter. Email him at c.contreras@northeastern.edu. Follow him on Twitter @cesareo_r and Threads @cesareor.