
AI is running rampant in health care. These Northeastern researchers want to fix that.

AI has been widely adopted in health care systems nationwide, but there is still no central framework for adopting it ethically. By creating a universal guidebook, researchers at Northeastern hope to fill that gap.

AI is broadly used in health care, but there is still no centralized guide for how to use it ethically. (Photo by Stefan Rousseau/PA Images via Getty Images)

The health care industry entered a brave new world when it began using AI in medical imaging, in electronic medical records, and for initial checks and mental health support. 

Even as algorithms that detect patterns in X-rays and health records have saved clinicians time, AI software has led to serious missteps in patient care: racially biased systems have incorrectly deprioritized care for Black patients, and elderly patients have been denied insurance coverage for procedures that would otherwise have been approved.

The “pick and mix” approach most of the industry has taken with AI, coming up with rules on the fly rather than following comprehensive guidelines and regulations, is starting to fray, said Annika Schoene, an assistant professor of public health and health sciences at Northeastern University.

“We use very little regulation when it comes to AI,” Schoene said. “I’m telling you right now, if we don’t get to grips with it, good luck.”

A universal guide for integrating AI into hospitals, for example, would help not only health care IT workers but also clinicians, at a time when AI literacy is still lacking.

Schoene, a computer scientist whose work focuses on AI safety, now has a grant to build exactly that: a universal guide to the ethical use of AI in health care. The project, which is just getting underway, brings together computer scientists, public health researchers, ethicists and health care workers to answer a key question in health care technology: How do you teach ethics to technical experts, and technology to doctors, who are already ethically attuned given their profession?

“In AI ethics, we refer to various values and ethical goals but often they remain aspirational rather than operational,” said Cansu Canca, director of responsible AI practice at Northeastern’s Institute for Experiential AI. “What we aim to do is to provide a framework where these values can be turned into design and development decisions as well as monitoring requirements, in the technical language that developers understand, while being grounded in the domain knowledge that is provided by health care experts.”

Without clear ethical guidelines, the more than 1,200 AI-enabled medical devices currently approved by the U.S. Food and Drug Administration (FDA) also present potential security and health risks, Schoene said. Only 8% of them have a plan in place to monitor how their product is used after FDA approval.

The first step in putting together the guide is working in tandem with large health care systems to get a sense of what health care workers actually need to learn about AI. The research team also includes Robert Leeman, chair of Northeastern’s public health and health sciences program; Michael Bessette, an associate clinical professor at Northeastern; and Agata Lapedriza, a principal research scientist with the Institute for Experiential AI. Schoene acknowledged that the most widespread uses of AI in health care, like generative AI that listens to interactions in a doctor’s office and transcribes those conversations, are meant to reduce workloads for people who are already at capacity.


Getting health care workers used to asking questions about AI up front, whether they’re implementing AI tools or using them, is the end goal, Schoene said.

“That’s why we’re so intentional about having workshops with people who don’t necessarily know the technical details,” Schoene said. “AI literacy is so low still that … [it’s worth teaching] someone to know to take a moment and take a pause to say, ‘Hold on, this should ring my alarm bell.’”

But Schoene wants the guide to be more than an introduction to AI ethics. She sees it as the go-to resource at every step of the process for health care systems exploring a technology with few guardrails but clear limits.

For example, if a hospital wants to adopt a new AI-integrated image detection tool for breast cancer, administrators would consult the researchers’ guide when considering the purchase to see whether the software clears a still-in-the-works set of guidelines. Once it is purchased and installed, the hospital tech workers who manage the tool could consult the same guide to understand what they need to monitor going forward, such as the tool’s privacy settings around patient data.

Schoene recognized that for both health care workers and her research team, crafting a universal AI ethics guide is like trying to hit a moving target. The technology is evolving so quickly, and in so many directions, that even computer scientists struggle to stay on top of it, she said. As a result, the guide will be a living document that Schoene and her team adapt and update over time.

“Hopefully this blueprint will, in some way, shape or form, equip some technical person in the health care system to either push back or know at the end of the day how to speak to a clinician as to what the technology should be or shouldn’t be doing,” Schoene said.