
Can AI be used to help in suicide prevention? Northeastern researcher explores the technology’s use in the mental health space  

Researchers at Northeastern’s Institute for Experiential AI are exploring the role AI can play in mental health care. Getty Images

From diagnosing cancers to developing new drugs, artificial intelligence is helping reshape health care in transformative ways.

When it comes to mental health, AI tools have the potential to help treat more people in a sector that has struggled to find enough workers to meet demand, says Annika Marie Schoene, a research scientist at Northeastern University’s Institute for Experiential AI.   

As a researcher on the institute’s Responsible AI team and in its AI+Health group, Schoene works to understand how companies are developing these tools, as well as their shortcomings and ethical implications.

Schoene’s work has centered on AI’s use in suicide prevention. Users and developers of these technologies include social media companies, government agencies and clinics, and startups in Silicon Valley and other tech hubs, she says.

Annika Marie Schoene, a researcher at the Institute for Experiential AI, has recently published research on the limits of AI detecting emotions. Courtesy Photo

One of the best-known users of AI in suicide prevention is Meta, the parent company of Instagram and Facebook, she says. It uses machine learning to help detect posts that contain language or images that might indicate someone intends to harm themselves.

“This technology uses pattern-recognition signals, such as phrases and comments of concern, to identify possible distress,” the company says. Meta also uses AI to help content reviewers track and prioritize reported cases to provide them additional support from trained health care professionals and emergency services.  
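To make the idea of “pattern-recognition signals” more concrete, here is a deliberately simplified sketch in Python of how comments of concern might be counted to prioritize a post for human review. The phrases, threshold and function names are illustrative assumptions, not Meta’s actual system, which relies on trained machine learning classifiers and human reviewers.

```python
# Toy illustration only (not Meta's system): counting "comments of concern"
# on a post and flagging it for human review once enough appear.
# Real systems use trained ML classifiers plus human reviewers.

CONCERN_PHRASES = [
    "are you ok",
    "please reach out",
    "thinking of you",
    "here if you need to talk",
]

def concern_score(comments: list[str]) -> int:
    """Count how many comments contain a known phrase of concern."""
    score = 0
    for comment in comments:
        text = comment.lower()
        if any(phrase in text for phrase in CONCERN_PHRASES):
            score += 1
    return score

def should_prioritize(comments: list[str], threshold: int = 2) -> bool:
    """Flag a post for human review once enough concerned comments appear."""
    return concern_score(comments) >= threshold

comments = [
    "Are you OK? Please reach out.",
    "Great photo!",
    "Here if you need to talk.",
]
print(should_prioritize(comments))  # True: two comments of concern
```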

However, as useful as these technologies can be, it’s important to recognize their limits and shortcomings, Schoene says.     

Earlier this year, she presented findings at the Society for Affective Science that highlighted one big limitation of the AI models — they struggle to detect emotion.  

For the study, the researchers, including Tomo Lazovich and Resmi Ramachandranpillai from the Institute for Experiential AI, wanted to see how AI would analyze datasets of nearly 4,000 tweets that had been annotated by humans for potential suicide-related content.

Each piece of content had been assigned to an emotional category: anger, disgust, fear, joy, sadness, surprise or neutral.

To test the capabilities of current AI technologies, the researchers had three language models, a type of machine learning-based AI, assign an emotion to each tweet to see how their annotations compared with the human ones.

The results were mixed across the board. The human annotators had found that the majority of the tweets fell into the neutral category, but that wasn’t the case for the three AI models. In fact, it was difficult to find a consistent pattern among the models, though the researchers found that the models seemed biased toward certain emotional categories.
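For readers curious what comparing human and model annotations looks like in practice, here is a minimal sketch using made-up labels. The label lists and the single stand-in model are assumptions for illustration; they are not the study’s dataset, models or results.

```python
# A minimal sketch of the comparison described above: human emotion
# annotations vs. labels assigned by a language model. The labels below
# are invented for illustration.

from collections import Counter

EMOTIONS = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]

# Hypothetical annotations for a handful of tweets (human vs. one model).
human_labels = ["neutral", "sadness", "neutral", "fear", "neutral", "sadness"]
model_labels = ["sadness", "sadness", "fear", "fear", "sadness", "anger"]

# How often the two sets of labels agree overall.
agreement = sum(h == m for h, m in zip(human_labels, model_labels)) / len(human_labels)
print(f"agreement: {agreement:.2f}")

# Compare label distributions: a model that rarely predicts "neutral"
# will look skewed here even when some individual labels match.
print("human:", Counter(human_labels))
print("model:", Counter(model_labels))
```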

“This led us to consider that language models for emotion predictions may not be capable of finding finer distinctions and granular suicide-related content,” Schoene says.

“Our findings also led us to question how useful and credible such techniques are when you want to use emotion features in suicide-detection tasks, especially given that this is a really high-risk scenario and a high positive rate could cause real harm,” she adds. 

Current technologies on their own are not enough to predict suicide, Schoene says, underscoring the importance of trained medical professionals.

But that’s not to say AI cannot be used at all, she says. It can help health care professionals in “understanding the causes and factors of suicidal ideation and intent,” and in analyzing large amounts of data at once.

“When you want information extraction or information summarization, AI can be very useful. No one would ever doubt that,” she says. “The important part here is that the ultimate decision-making should never be left to the algorithm.” 

The National Suicide Prevention Lifeline provides around-the-clock, free, and confidential support for people in distress, as well as prevention and crisis resources for you or your loved ones. It can be reached at 800-273-8255.