A biological engine for human language

For more than a decade, Northeastern psychology professor Iris Berent has focused her research on one central question: What makes human language so special? So far, she's addressed that question by conducting experiments with speakers of languages as diverse as Hebrew and American Sign Language.

Through this research, she's uncovered some surprising things. For instance, her work has shown that regardless of our mother tongue, we prefer certain linguistic structures to others. Despite their significant differences, languages as unrelated as Korean and Spanish seem to share the same set of unwritten rules dictating how sounds can be arranged to form words.

In a study published in 2007, her team showed that all spoken languages favor certain syllables over others. For instance, syllables such as “lbif” are much less common across languages than syllables such as “blif.” A later study showed that people are sensitive to this preference even when neither syllable occurs in their own language.

Still, a question has nagged at the back of Berent’s mind: Are these rules biologically hardwired into the brain? As she put it, “Is there any engine in the brain that cares about this thing and, if so, then what kind of engine is it?”

New research from Berent’s lab, published Thursday in the journal PLOS ONE, suggests the answer is yes. For the first time, Berent’s team measured brain activity while participants listened to nonsense words drawn from this hierarchy of syllables found across human languages.

The research follows on the heels of a paper co-authored by Berent and published earlier this month in the journal Proceedings of the National Academy of Sciences that examined the brains of newborn babies. The same trend appeared in that study, suggesting that the human brain builds language on an innate set of linguistic rules.

In the PLOS ONE study, conducted in collaboration with researchers at Harvard Medical School, English speakers heard various types of syllables, most of which don’t occur in their native language but do occur, at varying frequencies, in other languages. Some, such as bnif, are common across languages; others, such as bdif, are less common; and a select few, such as lbif, are outright rare.

Participants hooked up to brain imaging machines heard these syllables along with similar nonsense words with two syllables—for instance, benif, bedif, and lebif—and were asked to indicate whether the “word” they had heard included one syllable or two.

The results showed that as the syllable became less frequent across languages, English speakers had greater difficulty identifying it. Remarkably, so did their brains. Syllables that are “worse” along the hierarchy—lbif, for example—caused the brain to work harder than “better” syllables such as bnif.

Prior to these results, Berent noted, several hypotheses could have explained the behavioral data, including purely auditory explanations (“lbif is hard to hear”) or purely memory-based ones (“lbif is not similar to any English words”). Had those hypotheses been correct, the task would have produced activation only in the parts of the brain responsible for audition or word memory, respectively.

The new research instead shows activity in a brain region called Broca’s area. While this region is not exclusively dedicated to language, it is central to language processing. “If there were linguistic universals in the brain,” Berent said, “this is exactly what it would look like.”