Cansu Canca
Associate Research Professor, Philosophy and Religion; Director of Responsible AI Practice
Cansu Canca in the Press
Nearly Two Months After OpenAI Was Warned, ChatGPT Is Still Giving Dangerous Tips on Suicide to People in Distress
To be clear, that’s not even close to the only outstanding mental health issue with ChatGPT. Another recent experiment, this one by AI ethicists at Northeastern University, systematically examined the leading LLM chatbots’ potential to exacerbate users’ thoughts of self-harm or suicidal intent.
AI Chatbots Can Be Manipulated to Provide Advice on How to Self-Harm, New Study Shows
A new study from researchers at Northeastern University found that, when it comes to self-harm and suicide, large language models (LLMs) such as OpenAI’s ChatGPT and Perplexity AI may still output potentially harmful content despite safety features. (TIME reached out to both companies for comment.)
AIs gave scarily specific self-harm advice to users expressing suicidal intent, researchers find
She contacted colleague Cansu Canca, an ethicist who is director of Responsible AI Practice at Northeastern’s Institute for Experiential AI. Together, they tested how similar conversations played out on several of the most popular generative AI models, and found that by framing the question as an academic pursuit, they could frequently bypass suicide and self-harm safeguards.
The Unnerving Future of A.I.-Fueled Video Games
Cansu Canca, the director of responsible A.I. practice at Northeastern University in Boston, said there would be a risk to individual agency and privacy by normalizing the technology.