The science of language
By Angela Herring | April 30, 2012

I've written a few times about language acquisition, but I'm pretty sure this is the first time I've written about linguistics.

"My research questions are based on syntax, which is word structure and ordering and how we put words together to create sentences," said assistant academic specialist Heather Littlefield of the College of Science. "So what I do is use what we know about acquisition to test syntactic hypotheses or frameworks."

For her PhD work, Littlefield developed and tested one of these frameworks, which essentially redefined the way we think about words. And when I say "we think about," I really mean a subset of the population that does not include me. I mean the linguists.

These people are a different breed. They understand the minutest aspects of the words that spill from our mouths. They map sentences in a way that makes sentences look completely foreign to me. We average Joes use language to communicate; linguists use it to understand how the brain works, how societies function and how humans develop, to name just a few research areas.

So, what did Heather Littlefield do that was so impressive? Okay, hold onto your hats and I'll try to tell you.

A new framework

The story starts like this: when we're kids, we acquire different parts of a language at different points in time. Things that have concrete meaning are easier to grasp, things like mom, ball and bottle. You know, the important stuff. After that we start to understand the syntactic glue that holds those meaningful words together. "The," for example. Or... "or."

Traditionally, these two categories were it. Words like ball, mom and bottle are called "lexical," and the rest are called "functional." Every word we speak was thought to fit into one category or the other.

But then, in the early 2000s, linguists started to realize that some words fell into both categories. They have content, Littlefield said, but they also do something functional at the same time. So people started calling these bizarro words "semi-lexical." These are words like "on," "up" and "under."

But this bugged Littlefield. It wasn't that simple. "If functional is defining syntax and lexical is defining semantics, then to say that lexical is the opposite of functional doesn't really work, because they're really two different things," she explained. "Semi-lexical" wasn't a sufficient term.

So she set out to organize the whole system more accurately, in hopes of better understanding those bizarro words. Instead of being relegated to categories, she said, words should be qualified by their features: functional should be one feature, and lexical another. A word could then have both features, just one, or neither. Traditionally functional words would have all the functional qualities and none of the lexical qualities. Traditionally lexical words would have all the lexical qualities and none of the functional ones. It started to take shape in her mind (see the box over there? That was what the shape in her mind looked like).

This structure allows for two new "categories," if you really want to call 'em that: words that have both functional and lexical qualities (semi-lexical) and words that have neither.

Whoa... wait a second. Words that have neither quality? How does that work?

These are idiomatic words, Littlefield said. And yes, they show up in idiomatic expressions.
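If it helps to see the whole grid at once, here's a rough sketch in Python. To be clear, this is my own toy illustration, not anything from Littlefield's dissertation, and the feature assignments below are just my guesses based on the examples in this post. The only point is that two independent yes/no features give you four classes instead of two categories.

```python
# Toy sketch of the feature idea: two independent yes/no features,
# "functional" and "lexical," and the four combinations are the four classes.
from dataclasses import dataclass

@dataclass
class WordUse:
    example: str      # the word, in context
    functional: bool  # does it do syntactic "glue" work?
    lexical: bool     # does it carry concrete content?

    def label(self) -> str:
        if self.functional and self.lexical:
            return "semi-lexical"
        if self.functional:
            return "functional"
        if self.lexical:
            return "lexical"
        return "idiomatic"

# Hypothetical feature assignments, loosely based on the examples in this post
uses = [
    WordUse("up (pull up your socks)", functional=False, lexical=True),
    WordUse("up (look up the number)", functional=False, lexical=False),
    WordUse("in (wait in line)",       functional=True,  lexical=True),
    WordUse("of (piece of paper)",     functional=True,  lexical=False),
]

for use in uses:
    print(f"{use.example:28} -> {use.label()}")
```

Running it just prints each example next to its class; the mildly interesting part is that "up" lands in a different box depending on how it's used.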
The examples Littlefield gave me were the words in the sentence "He kicked the bucket." There's no bucket. There's no kicking (the guy is dead, after all). And you can't change any of the words ("He kicked a bucket" doesn't mean the same thing). "You have to take the phrase out of the 'word stock' and then plant it into the syntax and treat it as a unit. In that sense, the individual members of the phrase are not lexical and they're not functional."

Acquisition as analysis

So, this is the framework, or model, that Littlefield set up as a grad student and then set out to test. First she got down and dirty with prepositions. To determine whether the model was valid, she looked at the timeline in which kids acquire words from each of the quadrants. Turns out she was exactly right.

First, we start to learn the -F/+L words, traditionally called "lexical." In the case of prepositions, these are words like down and up: throw down the cup, pull up your socks, throw up (meaning to toss in the air).

Next we learn the idiomatic words, which have neither functional nor lexical qualities. In the following phrases, "up" doesn't really mean up: look up the number, blow up (meaning to explode), throw up (meaning to vomit).

After that we pick up the semi-lexical words, which have both functional and lexical qualities (run to the store, wait in line, the dog near/under/on/in the bed).

Finally we learn the functional words, which can't claim to have any kind of tangible meaning at all. Some functional prepositions? Translation of the book; piece of paper.

This pattern makes sense, if you think about it, Littlefield said. Words that have even a hint of spatial context (like up or under) are a lot easier to define, and thus understand, than words that are purely "glue."

The next steps

Okay... so, all of this work was done just with prepositions. Going forward, Littlefield is putting her undergraduate students to work on other types of words. Verbs are particularly exciting to one of her students, who actually found that there are a couple more features at play in that scenario. Another of her students is looking at how these words are acquired by second-language learners: when we're learning a foreign language, do we acquire words in the same categorical pattern as when we're kids learning our first language?

I hope to write about these projects another time, but this post is getting long (I think that's a semi-functional word), so I'm calling it quits.

Photo: Kheel Center, Cornell University, "Herbert Lehman giving a speech," October 1, 2010, via Flickr. Creative Commons attribution.