Is facial recognition identifying you? Are there ‘dog whistles’ in ChatGPT? Ethics in artificial intelligence gets unpacked

Graduate students in the summer-long AI and data ethics research program present to one another in Renaissance Park. Photo by Alyssa Stone/Northeastern University

The proliferation of cameras in so many public spaces raises an ethical issue: Is facial recognition technology identifying us as we go to the airport or visit our favorite store?

The development of artificial intelligence prompts many ethical issues, whether it’s facial recognition or ChatGPT. 

“There’s a lot of interesting work on facial recognition and the privacy concerns that come with the use of facial recognition in public spaces,” says Clint Hurshman, one of a dozen graduate students who took part in the Northeastern University Ethics Institute’s summer training program to expand the AI and data ethics research community.

Hurshman, a doctoral student at the University of Kansas, focused on privacy and facial recognition technology for his final project. He analyzed the role consent plays when individuals are recorded in public and the ethical issues that arise from that practice.

“What I tried to show is that even on some alternative accounts of privacy, we end up with the same conclusion that facial recognition technologies are really problematic for privacy,” he says.

The graduate-level program at Northeastern is designed to teach researchers how to examine artificial intelligence and data systems through an ethical framework. The course is conducted by the Ethics Institute, an interdisciplinary effort supported by the Office of the Provost, the College of Social Sciences and Humanities (CSSH) and the Department of Philosophy and Religion.

“The idea is there’s a lot of demand in the field for people who can speak to the ethical, political and social dimensions of AI and data science in public life,” says Kathleen Creel, assistant professor of philosophy and computer science at Northeastern and one of the lead organizers of the program.

In the course, students covered issues around data privacy, racial bias and accessibility, Creel explains. 

Students started the nine-week course by exploring the technologies more holistically and their impact on society. From there, they took a deep dive into how algorithms have changed over time.

The aim of the course was to give students both background on the technical components underpinning these systems and the frameworks needed to analyze their ethical impact.

Throughout the seminar, students were tasked each day with presenting oral arguments based on the day’s reading. Each student also developed an original thesis around the topic of discussion and presented it during the final week of class.

One central topic of discussion was algorithmic fairness, Creel says.  

“We’re looking at bias, what kind of fairness metrics are appropriate, and what algorithmic bias is,” she says.  
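A fairness metric, in this context, is a statistic computed over a model’s outputs to check whether different groups receive similar treatment. As a minimal illustrative sketch (not drawn from the course materials), the Python snippet below computes one widely used metric, the demographic parity difference, which compares positive-prediction rates across groups; the predictions and group labels are hypothetical.

```python
# Illustrative sketch only: demographic parity difference compares how often a
# model makes a positive prediction for each group. Values near 0 suggest the
# groups are treated similarly; larger values suggest a disparity.

def demographic_parity_difference(predictions, groups):
    """predictions: 0/1 model outputs; groups: parallel list of group labels."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: positives / total for g, (total, positives) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical example: a screening model approves group A more often than group B.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5, a sizable disparity
```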

Students also examined how disinformation affects large language models and the monoculture effects that arise when there is an overreliance on one specific model.

The advent of large language models has helped propel generative artificial intelligence technologies into the public eye. ChatGPT is a large language model, for example. These models work in part by processing large swaths of text and then generating original sentences of their own based on the predictive patterns they have learned.
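As a rough illustration of that predictive idea (a toy sketch, not how systems like ChatGPT are actually built), the Python snippet below counts which word tends to follow which in a small sample text, then generates a sentence by repeatedly sampling a likely next word; the sample text is invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of "predicting the next word from patterns in text".
# Real large language models use neural networks trained on vast corpora;
# this simple bigram counter only captures the basic idea.
sample_text = "the model reads text and the model predicts the next word in the text"

# Count how often each word follows each other word in the sample text.
next_word_counts = defaultdict(Counter)
words = sample_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

# Generate a short sentence by repeatedly sampling a likely next word.
random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    candidates = next_word_counts.get(word)
    if not candidates:
        break
    word = random.choices(list(candidates), weights=list(candidates.values()))[0]
    output.append(word)
print(" ".join(output))
```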

Anja Chivukula, a doctoral student at the University of Southern California in Los Angeles, asked what impact dog whistles (coded language that may seem harmless on the surface but carries a hidden political meaning) could have on the training of these models.

“For instance, the word ‘cosmopolitan’ has a pretty straightforward meaning, someone who lives in a city,” says Chivukula. “But it’s also used by neo-Nazis to single out Jews.” 

In addition to offering seminar classes, the training also featured professional development workshops centered around grant funding and scholarships, as well as an expert speaker series featuring ethics researchers already working in the field. 

“While the seminar component and developing research skills is a really important part, we realized that is not the full picture of being a good junior researcher,” says Creel. “You have to have good technical skills and be able to be a good partner on interdisciplinary teams.”

One highlight speaker was Helen Nissenbaum, a professor of information science at Cornell Tech who studies digital privacy and related topics. 

Ronald Sandler, director of the Ethics Institute, interim dean of CSSH, and professor of philosophy and religion at Northeastern, highlighted how stakeholders around the world are paying close attention to the developing technology. 

The problem, Sandler notes, however, is that there aren’t many programs specifically centered on ethics concerning AI and data because it is a relatively new area of study. 

“There are regular calls for responsible AI and big data. That is, systems that are fair, transparent, trustworthy, private, and so on,” he says. “But what exactly that means is still being specified with respect to both the core ethical commitments and how those commitments can be operationalized in sociotechnical systems. So there is urgent, fundamental ethics research to be done on core concepts and principles related to things like transparency, justice, privacy and responsibility in the context of these systems.”

Northeastern is poised to take on the challenge since it has “a large cohort of people with expertise in this area,” he says. 

“There are really good technical people creating really amazing technologies, and there are really good people being trained in ethical theory and ethical analysis,” Sandler adds. “What this program does is train people who can work at the intersection of these two things.”

The summer program is primarily being funded by the National Science Foundation, which awarded the institute two grants—an initial grant of $299,375 and an additional top-up grant of $59,874 to allow more students to participate.

The NSF grant money provides the institute with enough funds to host the program for three years, according to Creel. The program also has the support of a number of organizations on campus, including the Department of Philosophy and Religion, the Institute for Experiential Robotics, the Institute for Experiential AI and CSSH.

The plan is to search for outside funding to keep the program going after the NSF money runs out, she says. 

The institute also offers courses on AI and data ethics for high school and undergraduate students, Creel explains. It works closely with the Khoury College of Computer Sciences to “integrate ethics throughout the undergraduate computer science curriculum,” according to its website.

There are also numerous researchers at the institute working in this space and in the university overall, Sandler explains. 

“In the Ethics Institute alone there are a dozen researchers working in this space, and there are responsible AI researchers throughout the university in other institutes, centers and colleges working on things ranging from responsible robotics and cybersecurity to how to use AI and data effectively in local governance and to address unjust hiring and lending practices,” he says. 

One of the main objectives of the Ethics Institute is to better explore, from an ethics standpoint, the impact these emerging technologies have on society, Sandler says.

“This program is just one part of a larger set of programs that we have developed at Northeastern and the Ethics Institute to build an ecosystem of AI and data ethics, not just at Northeastern, but as a field. It’s similar to the way bioethics as a field was created in the ’70s,” he says.  

Cesareo Contreras is a Northeastern Global News reporter. Email him at c.contreras@northeastern.edu. Follow him on Twitter @cesareo_r.