Babies’ babbling enables new research tool
by Angela Herring
June 17, 2013

Thirty-two years ago, computer and information science professor Harriet Fell had just given birth to her oldest child when Linda Ferrier, then a PhD student working on her thesis in speech-language pathology at Boston Children’s Hospital, came into Fell’s hospital room looking for infant volunteers. Ferrier was collecting recordings of infants “babbling” to better understand how they develop the capacity to produce speech over time. A young researcher herself, Fell liked the idea and signed her daughter up.

For the next two and a half years, Ferrier visited the family’s home each month to capture hour-long recordings of the young child cooing and chatting as babies do. After the thesis was complete, Ferrier receded into the family’s memory, until one day, when Fell was dropping her daughter, Tova, off at the Northeastern day care center, she saw Ferrier’s name freshly minted on an office door opposite the day care. The former doctoral student was now a professor at Fell’s own institution.

[Photo caption: The first project Harriet Fell and Linda Ferrier collaborated on was the Baby Babble Blanket. Photo from Thinkstock.]

The two immediately struck up a working relationship, devising projects for their senior capstone students to collaborate on. “It was a lot of fun,” said Fell. “My students would build things, her students would test things, and everybody would make suggestions.”

Over time, the educational collaboration blossomed into a research partnership, with Fell designing complex software for the novel speech research tools that Ferrier, now retired, wanted to try out. Their first project together was the Baby Babble Blanket, which allowed infants to produce sounds (a mother’s voice, another baby babbling, a toilet flushing) by rolling around and pushing buttons on the blanket. The tool was designed for infants with motor and neurological disorders to, as the researchers put it, “establish cause and effect skills, explore a babbling repertoire like normal infants, and use early motor movements to produce digitized sounds.” One child, Fell recalled, was particularly fond of the toilet-flushing sound and would continually bop his head on the part of the blanket that produced it.

Afterward, Ferrier and Fell continued to explore ways they could use computers to answer questions about speech. “My interest was trying to detect medical problems in their speech or sounds,” said Fell. “I just had the feeling that you could tell certain things about babies that could not easily be recognized. At the time there had been research on neonatal cry, where certain features could be recognized that indicate neurological problems. I thought that if the articulators aren’t working right, or maybe the parts of the brain that control speech, there’s a certain tension or disturbance, then maybe you could just tell this from the acoustic signal.”

Fell developed a software program that analyzes the structure of syllables in the sounds people produce, measuring features such as their variety and duration. It could be used with adults just as well as with infants, she said. In one project, a colleague used the software to detect fatigue in adult speech; Ferrier used it as an accent-reduction tool with foreign-language speakers.
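The article does not describe how Fell’s program works internally, so the sketch below is only an illustration of that kind of syllable-level measurement: it segments an audio signal by its short-time energy and summarizes the number and duration of the resulting syllable-like units. The function name, parameter values, and thresholds are assumptions made for the example, not details of her software.

```python
import numpy as np

def syllable_stats(signal, sample_rate, frame_ms=20, energy_ratio=0.15, min_syll_ms=60):
    """Rough syllable-like segmentation from short-time energy.

    Thresholds the energy envelope and treats contiguous high-energy
    runs as syllable-like units, then summarizes their durations.
    All parameter values are illustrative, not taken from the article.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(float) ** 2).mean(axis=1)

    # Mark frames whose energy exceeds a fraction of the peak energy.
    voiced = energy > energy_ratio * energy.max()

    # Collect contiguous voiced runs as candidate syllables.
    durations = []
    run = 0
    for v in np.append(voiced, False):   # sentinel flushes the final run
        if v:
            run += 1
        elif run:
            dur_ms = run * frame_ms
            if dur_ms >= min_syll_ms:    # discard clicks and short bursts
                durations.append(dur_ms)
            run = 0

    durations = np.array(durations, dtype=float)
    return {
        "syllable_count": len(durations),
        "mean_duration_ms": durations.mean() if len(durations) else 0.0,
        "duration_std_ms": durations.std() if len(durations) else 0.0,
    }

# Example: one second of silence with two synthetic bursts yields two units.
if __name__ == "__main__":
    sr = 16000
    sig = np.zeros(sr)
    sig[2000:4000] = np.sin(np.linspace(0, 200 * np.pi, 2000))
    sig[9000:12000] = np.sin(np.linspace(0, 300 * np.pi, 3000))
    print(syllable_stats(sig, sr))
```

Simple statistics of this kind, such as the count, average length, and spread of syllable-like units, are the sort of acoustic measures such a tool could report for infant and adult recordings alike.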
Fell soon realized that what she had originally developed as a tool for very specific research questions could be useful for a wide variety of investigations. But back when this work was getting off the ground, computers were still the size of a broom closet and the software had a steep learning curve. In the decades since, of course, the personal computer has taken over the world, and the constraints that once kept Fell from releasing her program to the masses have vanished.

About 15 years ago, she joined forces with an entrepreneur named Joel MacAuslan. A year ago, they applied for and received a Small Business Innovation Research grant from the National Institutes of Health to develop the software they had been using for their own studies into something more usable and user-friendly. Now a group of alpha testers is using the software to explore a variety of speech-language questions, looking at everything from early vocal signatures of autism to the speed at which adults can cognitively respond to auditory stimuli. “Before, our users were infants,” Fell said. “Now they are scientists.”

What began as a collaboration between friends has grown into a research tool with the potential to enable a host of new investigations. The only limit now is the scientists’ imaginations.