
AI could revolutionize drug discovery. But how can we regulate it?

In health care, AI is moving at warp speed, while regulations lag behind. Jared Auclair proposes a new regulatory framework that would keep AI innovation moving and get drugs to patients in need.

The AI-enabled Ecosystem for Therapeutics is a broad regulatory framework for AI in health care, specifically drug discovery and development. Photo by Matthew Modoono/Northeastern University

From the beginning of the modern AI revolution, health care has been one of the most promising areas for innovation. AI is already playing a key role in the discovery and development of new drugs that treat everything from cancer to tropical diseases.

However, while artificial intelligence is advancing at warp speed in the biotech and health industries, the regulations designed to keep this technology in check have lagged behind. Researchers at Northeastern University have developed a tool to address that gap and get everyone on the same page about AI’s lifesaving and potentially problematic impact on health care.

“There are thousands of documents on how to regulate AI and AI products from all kinds of places all over the world, and you would not be surprised to know that they contradict each other,” says Jared Auclair, a Northeastern chemistry and chemical biology professor and dean of the College of Professional Studies. “We’ve used AI to develop a tool to aggregate all those things and then to try to give you best practices and thoughts on what to do with them.”

From the U.S. Food and Drug Administration to the European Medicines Agency, governments around the world are wading into the new frontier of AI and health regulations. The problem is that those regulatory solutions are isolated. Auclair and his co-authors designed the AI-enabled Ecosystem for Therapeutics, or AI2ET, to be a foundational guidebook for regulating AI in health care, no matter where you are.

“Fundamentally, not so different from the world, we all exist in silos and don’t lean into leveraging knowledge on best practices,” Auclair says. “Look at the medical device space and say, ‘What are they doing with AI and what can we learn from them?’”

The AI2ET is meant to be the first step on a long road toward responsible AI and health regulations. It takes a science-based, risk-based approach to regulating AI in an industry where the stakes are life or death, shifting the focus away from regulating specific AI tools or drugs. Instead, the framework's guiding principles address the broader systems and processes that underpin drug development and AI.

It’s a massive task that has to start somewhere simple: defining what it means to use AI in health care. The work that’s been done by government agencies is invaluable here, Auclair says. The FDA has been developing recommendations for the use of AI in health care, specifically in medical devices, since 2019. Those guidelines are “a nice start,” says Auclair, but they need to go further.

Going further will require widening the regulatory scope to address the whole lifecycle of drug development, from discovery to commercialization. Importantly, it will also involve knowing when, and when not, to regulate.

“One of the risks that we run in drug development in general is that science moves way faster than regulation, and AI seems to be accelerating the normal process and has its own process on top of that, so it’s like warp speed,” Auclair says. “We need to be both cautious because we don’t want to kill people, but also we can’t be so cautious that we prevent good medicines from getting to patients.”

Auclair is well aware of the risks involved in health research. He’s spent most of his career studying Lou Gehrig’s disease, or amyotrophic lateral sclerosis (ALS), a terminal disorder in which motor neurons in the brain and spinal cord deteriorate, causing gradual loss of muscle control. It has no cure and few, if any, effective drug treatments.

The risk tolerance for using AI to discover or develop those drugs should be higher “because there’s nothing for those patients,” Auclair says. On the other hand, with a chronic disease like diabetes, which can be controlled reasonably well, the regulatory risk tolerance could be lower.

“What I would say is in the absence of regulations, we shouldn’t overregulate,” Auclair says. “That’s a bigger fear than under-regulating. If you overregulate, rolling that back is going to be next to impossible.”

As AI2ET seeks to unify AI and health regulations, handling work at that scale will probably require one of Auclair’s boldest recommendations: entirely new government agencies. An entity like the FDA is so accustomed to the status quo that it’s less equipped to adapt to a rapidly changing technology, Auclair says.

“Trying to layer over this new stuff onto the status quo is going to, one, make a huge mess and, two, fundamentally increase the risk that we put patients at,” he says.