When Roger Ebert lost his lower jaw—and, thus, his voice—to cancer, the text-to-speech company CereProc created a synthetic voice custom-made for the film critic. The computerized voice, stitched together from words Ebert had recorded over his long career, would not sound fully natural; it would, however, sound distinctive. It was meant to help Ebert regain something he had lost along with his jaw: a voice of his own.

Most people are not so lucky. Those who have had strokes—or who live with conditions like Parkinson’s or cerebral palsy—often rely on synthetic voices that are completely generic in their delivery. (Think of Stephen Hawking’s computerized monotone. Or of Alex, the voice of Apple’s VoiceOver software.) The good news is that these people can be heard; the bad news is that they have still been robbed of one of the most powerful things a voice can give us: a unique, and audible, identity.

Up in Boston, Rupal Patel is hoping to change that. She and her collaborator, Tim Bunnell of the Nemours Alfred I. duPont Hospital for Children, have for several years been developing algorithms that build voices for people who cannot speak without computer assistance. The voices aren’t just natural-sounding; they’re also unique. They’re vocal prosthetics, essentially, tailored to the existing voices (and, more generally, the identities) of their users.