Responsible AI begins with ethics, security and transparency, Northeastern experts say

Ardeshir Contractor, director of research for the AI for Climate and Sustainability (AI4CaS) focus area at the Institute for Experiential AI. Photo by Matthew Modoono/Northeastern University

Ronke Ekwensi, chief data officer for T-Mobile, can explain her job pretty easily, at least from the 30,000-foot perspective.

“The way I think of my role as CDO is to really focus on extracting value from data,” she said recently in East Village on Northeastern’s Boston campus. 

That description is simple enough, but the complexities of the job arise in the fine details of analyzing and managing that data for one of the world’s top wireless service providers.

One topic that has been a focus of late is artificial intelligence and how best to integrate it into T-Mobile’s operations.

Ekwensi, a Northeastern graduate, was one of more than a dozen executives who spoke during a business conference hosted and organized by Northeastern’s Institute for Experiential AI. The theme of the event was “Leading with AI Responsibly.” 

Leaders from companies such as Google, McDonald’s and Intuit were among the speakers.

Ekwensi, who earned a doctorate in data policy from the university in 2022, is keenly aware of the important relationship between data and AI. Data is often called “the fuel” or “the lifeblood” of AI, and for good reason. Without access to any kind of data, your favorite AI-powered chatbot would simply not be able to function. 

To develop and implement these systems responsibly, Ekwensi outlined a number of considerations she keeps in mind.

Paramount among them is for users to have a common understanding of what it means to implement and use AI responsibly.

“The bad news is that there is no common definition for responsible AI, but what we do have is common thematic elements,” she said. “Those common thematic elements determine the areas of risks associated with AI.” 

So, what are these common thematic elements? They include ethics, fairness, accountability, security and transparency, Ekwensi explained. 

“It’s really about the codification of an organization’s values, language that sets guardrails, because you can’t think about every permutation of risk at every point,” she said. “What you have to do is create guardrails that help your organization develop and use AI responsibly.”

There are certainly challenges in providing clear guidelines and direction. AI technologies like large language models are often referred to as “black boxes” because we don’t fully understand how they work, she said.

“What generally keeps me up with LLMs is there’s an ossification of explainability that is really concerning to me,” she said. “How we think about it and how we address it is going to be a challenge. I don’t have any answers right now.”

When it comes to actually using AI, it’s important to be clear and deliberate about the exact task you want the model to complete, she said. Problems arise when AI is applied haphazardly across a swath of tasks; it’s better to start with a specific one.

“When you think about a well-defined use case, you get an opportunity to make decisions about model technique. You get an opportunity to make decisions about what data you need for a specific model,” Ekwensi said. 

Humans play, and will continue to play, an integral role as the technology matures and improves. Data management and protection are also critical: she pointed to encryption and other tools that keep customer data safe and in line with regulatory requirements.

Northeastern researchers at the Institute for Experiential AI are doing their part. In a series of lightning talks, several of the institute’s faculty members spoke about their work.

Eugene Tunik, director of AI + Health at the institute, is exploring ways AI can be used in the clinical setting as a “member of the clinical decision support team.”

AI has unlocked advancements in a number of health care settings. From at-home care and detecting diseases early to speeding up the development of new drugs and educating medical students, AI is advancing the space rapidly.   

“We have a very large team at the AI + Health vertical consisting of postdocs, researchers, scientists and faculty that are scattered between Boston and Portland, Charlotte and other campuses,” Tunik said.  

Sam Scarpino, director of AI + Life Sciences, said his team is focusing on using data science, artificial intelligence and modern analytics to help solve some of the medical industry’s most vexing problems that impact patients and medical workers.

Taking pandemics as an example, he laid out ways technology and data could be used to minimize disruptions.

“We could be leveraging high-resolution mobility data to improve contact tracing methods to keep schools open, to keep businesses open, similar to what they do in South Korea,” he said.

Additionally, his team is focusing on “developing ethical and responsible frameworks to ensure that data are being used for the purpose that they were collected for.” 

“Often data are collected for a specific use case but sit on a shelf collecting dust, which in many cases violates the individual whose data are contained there just as much as a potential privacy violation,” he said.  

Ardeshir Contractor discussed his involvement as the director of research for the AI for Climate and Sustainability (AI4CaS) focus area.

Contractor said the group works with members of the institute and industry partners on ways to address the climate crisis.

“Our work in AI has been used for large climate models,” he said. 

“Our approach is to try and combine specialties,” he added. “We bring computer science. We bring in engineering data. I think the most important thing is that we want to create a focus on resilience.” 

Cesareo Contreras is a Northeastern Global News reporter. Email him at c.contreras@northeastern.edu. Follow him on X/Twitter @cesareo_r and Threads @cesareor.