
As artificial intelligence transforms gaming, Northeastern researchers urge industry to adopt responsible AI practices

Video game makers are using AI to help develop their games. They should ensure they are using those tools responsibly, according to these Northeastern researchers. Illustration by Renee Zhang

Despite recent economic headwinds, the video game industry continues to be one of entertainment’s most profitable businesses.  

And that isn’t an accident; game studios are spending hundreds of millions of dollars to make their games bigger and more immersive to entice both new and longtime players.

Graphics are edging closer to uncanny valley territory, and companies like Meta and Sony continue to blur the lines between the physical and digital worlds with their mixed reality headsets and other sensing technologies. 

Artificial intelligence, meanwhile, has been hailed as the next big thing in technology, and video game makers are using it for myriad processes, including programming non-player characters, creating procedurally generated levels, moderating game chat logs, and customizing and personalizing in-game experiences. 

The industry is clearly operating on the cutting edge, but the ethical challenges developers must contend with when using these technologies to make their games are often neglected. 

There are important discussions to be had around data privacy, biased algorithms, and enticing game loops made more addictive with the assistance of AI. 

So as AI continues to take on a bigger role in game development, how should game makers use it more responsibly? 

Researchers at Northeastern University address that question head-on in a newly published ACM piece, suggesting that the tools and frameworks being developed for the emerging field of responsible AI should be adopted by the gaming industry. 

“It’s obvious that the gaming industry is moving toward a different level of risk with AI, but the ethics aspect is lagging behind,” says Cansu Canca, director of responsible AI practice at the Institute for Experiential AI and one of the authors of the research. 

“There is a concern coming from game designers that without appropriate ethics guidance, they also don’t know how to navigate these complex and novel ethical questions,” she adds.

Other authors on the piece include Northeastern researchers Annika Marie Schoene and Laura Haaber Ihle. 

While AI is used in game development in a number of different ways, the authors target a few specific pain points, including how game developers use AI to develop game mechanics, how generative AI may be used in image creation and other creative pursuits, and how companies are collecting and using user data. 

Game studios are often massive operations, with multiple departments spanning dozens of teams and individual workers. The bigger those companies are, the more likely silos and breakdowns in communication become, the researchers say. 

An effective AI ethics framework, therefore, is designed from its inception to be integrated into every part of the company’s hierarchical system. In practice, a comprehensive AI ethics framework can help all parties — from designers and programmers to story writers and marketers — make better decisions about specific AI use cases and should be based “on fundamental theories in moral and political philosophy,” the researchers say.

“In some sense, the broader responsible AI framework doesn’t necessarily change when applied to gaming,” Canca says. 

For example, a risk assessment tool can help game developers weigh the pros and cons of certain use cases before implementation, the researchers write. A comprehensive risk assessment outlines the strengths and weaknesses of the proposed use and technology as well as potential positive and negative outcomes. 

“Using an AI ethics risk and impact assessment tool could enable game designers and developers to assess both the game and the incorporated AI systems for their impact on individual autonomy and agency, their risks of harm and potential benefits, and the distribution of burdens and benefits within the target audience as well as the broader society,” they write. 

The researchers stress that this is just one tool within a larger framework that developers can draw on. Other tools that should be part of such a framework include bias testing and error analysis of specific AI technologies. 

Another important consideration in AI development is how companies are using data to help train their models and increase their bottom line. 

The researchers suggest that rating labels similar to the current Entertainment Software Rating Board system used in the United States fall significantly short of communicating the data and AI ethics risks to gamers.

Instead, approaches similar to the model cards and AI and data labels developed within the responsible AI field would bring more transparency to the process, the researchers explain. These would include detailed information on how developers used AI in making the game and how a user’s specific data is collected and used within it. 
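As a rough illustration of what such a label might contain, here is a minimal sketch in Python. Every field name and value below is a hypothetical assumption offered for illustration; none of it is drawn from the researchers’ paper or from any existing labeling standard:

    # Hypothetical AI/data transparency label for a game, loosely
    # inspired by the "model card" idea from responsible AI.
    # All fields and values here are illustrative assumptions.
    game_ai_label = {
        "ai_used_in_development": [
            "procedurally generated levels",
            "generative AI-assisted concept art",
        ],
        "ai_systems_in_game": [
            "adaptive non-player characters",
            "automated chat moderation",
        ],
        "player_data_collected": ["play style", "session length", "chat logs"],
        "data_uses": ["in-game personalization", "possible model retraining"],
        "player_opt_out_available": True,
    }

A label along these lines, surfaced to players the way ESRB ratings are today, would make both the development-side and runtime uses of AI visible before a purchase is made.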

“Data is incredibly valuable, and that kind of information might not just be used for retraining a model,” says Schoene. 

“I would like to know what is happening to my data. I would like to know how my playing style informs and influences any kind of agents within the game,” she adds.

But Schoene highlights that, used wisely, AI could actually help expand the industry and bring in new players. 

“Yes, there are all these tradeoffs to be had,” she says. “There are dangers, but there are also positives. AI could be used to make games more accessible for people with differing abilities if it’s done right.”