Artificial intelligence is here, but the technology faces major challenges in 2023

[AI-generated image of an abstract human head] Northeastern experts say AI broke new ground with the public in 2022, even as ethical questions and misunderstandings about the technology lingered. Those questions and conversations will continue to dominate in 2023. Getty Images

Although artificial intelligence has been present in our lives for years, 2022 served as a major proving ground for the technology. Between ChatGPT, AI art generators and Hollywood’s embrace of AI, the technology found a new kind of foothold––and hype––with the general public. But it also came with a fresh wave of concerns about privacy and ethics.

With all that 2022 did to raise the profile of the technology, AI experts at Northeastern University say 2023 will be an equally consequential year for the future of AI––but the technology will also face its fair share of challenges.

Usama Fayyad, executive director of the Institute for Experiential AI at Northeastern, says the hype around AI wasn’t the only thing that defined the technology’s trajectory last year. As the public profile of AI grew in 2022, so did the misunderstandings and misinterpretations around it.

“There were definitely significant contributions in terms of demonstrating what AI could do but probably not realistically explaining the limitations and what’s possible,” Fayyad says.

He says the recent conversation around AI is defined by the tension between a fear that the technology will automate human jobs and “the more realistic understanding” that AI tools will augment, not replace, human capabilities. Educating the public about the potential and limitations of AI will be more important than ever in 2023.

“It’s very sad that in that discussion [about ChatGPT] we lose that a lot of this is dependent on the human in the loop,” Fayyad says. “There will be a good maturation around this whole notion that it’s not about automating humans out of the loop; it’s about bringing people into the loop in the proper way.”

That doesn’t mean that AI won’t have a disruptive effect on specific areas of life. Fayyad says people in higher education, recruitment and creative fields are already feeling understandably threatened by the technology. 

Educators are already panicking about ChatGPT, OpenAI’s chatbot, which can write entire essays for students in seconds. It’s a new kind of cheating that is potentially undetectable, and Fayyad predicts that 2023 will see new tools designed to combat it. A Princeton student has already built an app to detect whether an essay was written using ChatGPT.

“That means we will see a significant growth in technologies for countermeasures and detecting when something like that happens,” Fayyad says.
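
The article doesn’t describe how such detectors work, but one commonly reported signal is perplexity: how predictable a text is to a language model. Below is a minimal sketch of that idea, assuming the Hugging Face transformers library and GPT-2 as the scoring model; the threshold and the heuristic itself are illustrative assumptions, not how any particular tool––including the Princeton app––actually works.

```python
# A minimal sketch of one commonly reported detection signal: perplexity,
# i.e., how predictable a text is to a language model. Unusually low
# perplexity is sometimes treated as a weak hint of machine generation.
# The model choice and threshold below are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy
        # loss over the sequence; exponentiating gives perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

essay = "..."  # candidate essay text goes here
# The threshold of 20 is a made-up illustration; real detectors combine
# many signals and still produce false positives.
verdict = "possibly machine-generated" if perplexity(essay) < 20 else "likely human-written"
print(verdict)
```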

At the same time, Fayyad and Rahul Bhargava, assistant professor of art, design and journalism at Northeastern, agree that the world of education will have to adapt to the technology, not just ban it.

“Will there be new ways of writing? Sure,” Bhargava says. “As a professor, am I worried about this? Sure, but we’re already using AI stuff with our students in the journalism department here. We’re trying that stuff and we’re figuring it out, and we’ll figure it out.”

AI could potentially be a catalyst for educators to reexamine their methods in 2023.

“Is there a different artifact that you can show me that you learned [something], that isn’t a two-page written essay that a computer can generate?” Bhargava says. “Why would I want a student to regurgitate information? That’s not learning.”

For Bhargava, whether AI will replace human jobs matters less than the ethical questions that need to be addressed in 2023. The more pressing concern is “who’s making these things and what questions are they asking about what biases are baked into it.” When tools like ChatGPT are designed by teams with limited perspectives and diversity, the result is a tool with the same blind spots.

“These systems that get built … are mirrors for our culture and our practices,” says Bhargava. “Which way do they point and who’s looking in them? No, they don’t embed bias; they reflect it.”

There are some measures being taken to address the ethical questions around AI bias. Dakuo Wang, associate professor of art and design and computer science, says ChatGPT’s real innovation is its use of human data labelers during training, a human-in-the-loop step meant to limit bias and increase accuracy.
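
To make that human-in-the-loop idea concrete, here is a toy sketch of the general pattern, popularized as reinforcement learning from human feedback: labelers rank candidate model outputs, and the rankings become training data for a later fine-tuning step. All names and file formats are hypothetical illustrations, not OpenAI’s actual pipeline.

```python
# A toy sketch of human-in-the-loop preference labeling: a labeler ranks
# two candidate outputs, and the ranking is stored as training data for a
# later fine-tuning step. Names and formats here are hypothetical.
import json
from dataclasses import dataclass, asdict

@dataclass
class PreferenceLabel:
    prompt: str
    chosen: str    # the response the labeler preferred
    rejected: str  # the response the labeler ranked lower

def collect_label(prompt: str, response_a: str, response_b: str) -> PreferenceLabel:
    """Show a labeler two candidate responses and record which one wins."""
    print(f"PROMPT: {prompt}\n[A] {response_a}\n[B] {response_b}")
    pick = input("Which response is better? (A/B): ").strip().upper()
    if pick == "A":
        return PreferenceLabel(prompt, response_a, response_b)
    return PreferenceLabel(prompt, response_b, response_a)

# Accumulated labels would typically train a reward model that then steers
# the language model's fine-tuning toward answers humans prefer.
labels = [collect_label("Explain photosynthesis.", "draft one...", "draft two...")]
with open("preferences.jsonl", "w") as f:
    for label in labels:
        f.write(json.dumps(asdict(label)) + "\n")
```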

But even then, the technology is only as good as the data it’s been trained on. Without the right data, the inaccuracies and limitations become much more obvious––and potentially dangerous.

Wang anticipates that cases like ChatGPT will help the public, private industry and research community learn that “data is the key.” 

“Who has the data and how can they transform that existing data into a format that they can [use to] fine tune or jumpstart their own version of the model?” Wang says. “That part will become more and more important.”
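
As a concrete, hypothetical illustration of the transformation Wang describes, the sketch below converts an in-house Q&A archive stored as a CSV into the prompt/completion JSONL layout that many fine-tuning workflows accept. The file and column names are assumptions made up for the example.

```python
# A minimal sketch of the "transform your existing data" step: converting
# a hypothetical in-house Q&A archive (CSV) into prompt/completion JSONL,
# a layout many fine-tuning workflows accept. File and column names are
# assumptions made up for this example.
import csv
import json

with open("support_tickets.csv", newline="") as src, \
        open("finetune_data.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        example = {
            "prompt": row["customer_question"].strip(),
            "completion": row["agent_answer"].strip(),
        }
        dst.write(json.dumps(example) + "\n")
```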

Despite efforts to reduce bias in these technologies, the hazards of AI, particularly in law enforcement and the prison system, are well documented, especially among minority populations in the U.S. What’s new in 2023, Bhargava says, is that these technologies are starting to affect the majority.

It’s why Northeastern’s experts anticipate that AI laws and regulations will start to develop in 2023––even if at a snail’s pace compared to the technology itself. Last year, New York City adopted a law, effective at the beginning of 2023, that restricts the use of AI in hiring.

“It might be too early to get to that regulatory side of it, although where we are now seeing definite maturation and acceleration is when it comes to things like privacy, when it comes to the unfair use of AI or deciding who bears the responsibility when an AI algorithm grows unintended or intended bias,” Fayyad says.

Cody Mello-Klein is a Northeastern Global News reporter. Email him at c.mello-klein@northeastern.edu. Follow him on Twitter @Proelectioneer.