Featured
Sari Altschuler, a professor of English, argues that the use of AI scribes for clinical notes could improve the quality of care — but only if clinicians develop narrative and editing skills.
There is no doubt that artificial intelligence is already having a significant impact on many aspects of human life. One area where people may soon experience it directly is health care.
All of us go to the doctor, says Sari Altschuler, a professor of English and a founding director of the Health, Humanities and Society Program at Northeastern University. Before a doctor even gets to tests and diagnosis, he or she might use AI to help with a fundamental aspect of treating patients — writing clinical notes.
“If you haven’t already been to the doctor’s office and had somebody use their phone to take your notes, I think, it’s coming,” Altschuler says.
There are more than 90 different companies, she says, marketing their own AI medical scribes, or software applications that promise to free health care workers from taking notes and create more time to attend to patients’ concerns.
Altschuler and her co-authors argue in a recent issue of The Lancet, a peer-reviewed general medicine journal, that it is essential to consider now what the role of clinical notes is and how to use AI scribes responsibly for best patient outcomes.
“Skills trained by the humanities — not only narrative and editorial skills but history, ethics and observation — will be essential for addressing the radical changes to the nature of work ushered in by AI,” Altschuler says.
The use of these applications, Altschuler says, has already been shown to result in omissions, AI-hallucinated inaccuracies, racist wording, and the loss of implied information and nonverbal communication. Given these risks, clinicians need to be adequately trained to navigate the new technology.
Storytelling plays an important role in medicine, Altschuler says.
“Patients tell doctors their stories. Doctors tell stories to patients and about patients to each other,” she says. “Notes are a really important place where care happens, and they’re profoundly narrative objects.”
Note-taking is part of a clinician's training. While writing notes, Altschuler says, doctors process the information shared by the patient and think through possible diagnoses.
What happens to that crucial thinking process if clinicians don't write the notes themselves?
“I hope care will not become worse as a result of doctors spending less time on clinical documentation, but the risk is real,” Altschuler says.
Notes also ensure continuity of care, facilitate communication between doctors, provide legal protection and enable reimbursement by insurance companies for the costs of care.
As a result of a patient advocacy movement, patients now have open access to their charts via electronic patient portals. Scrutinizing notes can make patients feel heard and cared for or, on the contrary, not heard and not cared for, Altschuler says, if they are not satisfied with the way their accounts have been documented.
“That can really affect the way you think about what happened in the clinic, the way that you think about your relationship with your doctor,” she says. “A really careful and great note can make you feel confident about your care. However, if it’s done poorly, it can create all kinds of problems that didn’t exist before.”
Some research, Altschuler says, suggests that doctors are omitting differential diagnoses from open notes, or keeping separate ghost charts with the full medical record, to avoid the problems caused by open access.
AI scribes produce whole transcriptions of doctor-patient conversations as well as summaries for medical charts.
But they do not create narratives from scratch — they generate notes using current recordings, past examples of medical notes, preference and ranking data provided by human annotators, large sets of internet training data and patterns encoded in the AI model. The model is trained to optimize features such as factuality rather than narrative.
“There are a lot of problems that we identify in the article that can happen,” Altschuler says.
One of her co-authors, Ian Huntington, a physician at the Codman Square Health Center in Boston, has experienced multiple mistakes made by an AI scribe. The app misgendered people and generated prejudiced and racist notes.
“There’s a long history of medical racism that gets encoded into medical technology because it is designed to mimic the culture that already exists,” Altschuler says.
Sometimes, AI hallucinates and makes up details or describes things that didn’t happen during the visit in the clinic.
Huntington has noticed that AI scribes can miss information implied from the long-term clinician–patient relationships and non-verbal communication.
There is also so-called automation bias, Altschuler says: people accept AI-generated text more readily because, among other things, it looks superficially complete.
Another concern is that medical facilities and clinicians with fewer resources will end up using cheaper or free, more flawed scribe apps, compounding care disparities that already exist.
AI is also poor at transcribing the speech of people with heavy accents, especially immigrants, which means those patients' notes will be worse, Altschuler says, unless doctors build in extra time to edit the AI-generated output.
A whole other level of concern is patient privacy.
“There are huge questions about how private that data is,” Altschuler says. “What’s actually happening behind the scenes?”
The companies that produce AI scribes promise to anonymize shared information. However, there are ways to de-anonymize data, Altschuler says; given enough clues, someone can figure out exactly who the notes are about.
When notes are produced by AI scribes, clinicians must think carefully about the shift in their roles from note authors to note editors, Altschuler says, to maximize a patient’s care.
The point of the Lancet essay, she says, is that a transition to automation of note-taking should be supported by training in editorial and narrative skills.
It’s one thing to edit a note when you have years of practice, she says, but there is also the question of whether residents and students should use AI scribes at all, or should first learn to write notes themselves.
“Introducing AI scribes into the process of training too early will diminish some of the actual medical training that’s really important to having good clinicians,” Altschuler says.
She and her co-authors also propose that health care organizations assemble an advisory group of physicians, medical educators, narrative experts, editors and data studies scholars to make recommendations about when and how to use AI scribes.