If you’re not a linguist, big deal! (We have cooties and are into weird stuff anyways)

Last week I wrote a post called “If you’re not a linguist, don’t do linguistics”. This got shared around Twitter quite a bit and made it to the front page of r/linguistics, so a lot of people saw it. Pretty much everyone had good insight on the topic and it generated some great discussion. I thought it would be good to write a follow-up to flesh out my main concerns in a more serious manner (this time sans emoticons!) and to address the concerns some people had with my reasoning.

The paper in question is by Dodds et al. (2015) and it is called “Human language reveals a universal positivity bias”. The certainty of that title is important since I’m going to try to show in this post that the authors make too many assumptions to reliably make any claims about all human language. I’m going to focus on the English data because that is what I am familiar with. But if anyone who is familiar with the data in other languages would like to weigh in, please do so in the comments.

The first assumption made by the authors is that it is possible to make universal claims about language using only written data. This is not a minor issue. The differences between spoken and written language are many and major (Linell 2005), but dealing with spoken data is difficult – it takes much more time and effort to collect and analyze than written data. Even in highly literate societies, however, the majority of language use is spoken, and spoken language does not work like written language. Assuming that it does is something no scholar should ever do. Any research which makes claims about all human language will therefore have to include some form of spoken data. But the data set that the authors draw from (called their corpus) is made from tweets, song lyrics, New York Times articles and the Google Books project. Tweets and song lyrics, let alone news articles or books, do not mimic spoken language in an accurate way. For example, these registers may include the same words as human speech, but certainly not in the same proportions. Written language does not include false starts, nor does it include repetition or elision in anywhere near the same way that spoken language does. Anyone who has done any transcription work will tell you this.

The next assumption made by the authors is that their data is representative of all human language. Representativeness is a major issue in corpus linguistics. When linguists want to investigate a register or variety of language, they build a corpus which is representative of that register or variety by taking a large enough and balanced sample of texts from it. What is important here, however, is that most linguists do not have a problem with a set of data representing a larger register – so long as that larger register isn’t all human language. For example, if we wanted to research modern English journalism (quite a large register), we would build a corpus of journalism texts from English-speaking countries and we would be careful to include various kinds of journalism – op-eds, sports reporting, financial news, etc. We would not build a corpus of articles from the Podunk Free Press and make claims about all English journalism. But representativeness is a tricky issue. The larger the language variety you are trying to investigate, the more data from that variety you will need in your corpus. Baker (2010: 7) notes that a corpus analysis of one novel is “unlikely to be representative of all language use, or all novels, or even the general writing style of that author”. The English sub-corpora in Dodds et al. exist somewhere in between a fully non-representative corpus of English (one novel) and a fully representative corpus of English (all human speech and writing in English). In fact, in another paper (Dodds et al. 2011), the representativeness of the Twitter corpus is explained as follows: “First, in terms of basic sampling, tweets allocated to data feeds by Twitter were effectively chosen at random from all tweets. Our observation of this apparent absence of bias in no way dismisses the far stronger issue that the full collection of tweets is a non-uniform subsampling of all utterances made by a non-representative subpopulation of all people. While the demographic profile of individual Twitter users does not match that of, say, the United States, where the majority of users currently reside, our interest is in finding suggestions of universal patterns.” What I think that doozy of a sentence in the middle is saying is that the tweets come from an unrepresentative sample of the population but that the language in them may be suggestive of universal English usage. Does that mean we can assume that the English sub-corpora (specifically the Twitter data) in Dodds et al. are representative of all human communication in English?

Another assumption the authors make is that they have sampled their data correctly. The decisions on what texts will be sampled, as Tognini-Bonelli (2001: 59) points out, “will have a direct effect on the insights yielded by the corpus”. Following Biber (see Tognini-Bonelli 2001: 59), linguists can classify texts into various channels in order to ensure that their sample texts will be representative of a certain population of people and/or variety of language. They can start with general “channels” of the language (written texts, spoken data, scripted data, electronic communication) and move on to whether the language is private or published. Linguists can then sample language based on what type of person created it (their age, sex, gender, socio-economic situation, etc.). For example, if we made a corpus of the English articles on Wikipedia, we would have a massive amount of linguistic data. Literally billions of words. But 87% of it will have been written by men and 59% of it will have been written by people under the age of 40. Would you feel comfortable making claims about all human language based on that data? How about just all English language encyclopedias?
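
To make that skew concrete, here is a toy sketch. The 87% authorship figure is the Wikipedia statistic quoted above; the speaker-population split is my own rough assumed number, purely for illustration:

```python
# Compare the demographic make-up of a corpus's authors against the
# demographic make-up of the speaker population. The corpus figure (87% men)
# is the Wikipedia statistic quoted above; the speaker split is an assumed
# round number for illustration.
corpus_authors = {"men": 0.87, "women": 0.13}
speakers = {"men": 0.49, "women": 0.51}

# Positive skew = over-represented in the corpus relative to the population.
skew = {group: corpus_authors[group] - speakers[group] for group in speakers}
print(skew)  # men over-represented by ~0.38, women under-represented by ~0.38
```

A corpus whose authorship departs this far from the population it is meant to represent cannot support population-wide claims, no matter how many billions of words it contains.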

The next assumption made by the authors is that the relative positive or negative nature of the words in a text is indicative of how positive that text is. But words can have various and sometimes even opposing meanings. Texts are also likely to contain words that are written the same but have different meanings. For example, the word fine in the Dodds et al. corpus, like the rest of the words in the corpus, is just a four-letter word – free of context and naked as a jaybird. Is it an adjective that means “good, acceptable, or satisfactory”, which Merriam-Webster says is sometimes “used in an ironic way to refer to things that are not good or acceptable”? Or does it refer to that little piece of paper that the Philadelphia Parking Authority is so (in)famous for? We don’t know. All we know is that it has been rated 6.74 on the positivity scale by the respondents in Dodds et al. Can we assume that all the uses of fine in the New York Times are that positive? Can we assume that the use of fine on Twitter is always or even mostly non-ironic? On top of that, some of the most common words in English also tend to have the most meanings. There are 15 entries for get in the Macmillan Dictionary, including “kill/attack/punish” and “annoy”. Get in Dodds et al. is ranked on the positive side of things at 5.92. Can we assume that this rating carries across all the uses of get in the corpus? The authors found approximately 230 million unique “words” in their Twitter corpus (they counted all forms of a word separately, so banana, bananas, and b-a-n-a-n-a-s! would be separate “words”; and they counted URLs as words), so they used the 50,000 most frequent ones to estimate the information content of texts. Can we assume that it is possible to make an accurate claim about how positive or negative a text is based on nothing but the words taken out of context?
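
Here is a minimal sketch of the kind of context-free scoring the paper relies on. The ratings are the ones reported above for the Dodds et al. English list; the sentences and the scoring function are my own illustration:

```python
# Look up each word's happiness rating with no regard for context or sense.
# Ratings for "fine" and "get" are the Dodds et al. figures quoted above.
ratings = {"fine": 6.74, "get": 5.92}

def score(text, ratings):
    """Average the happiness ratings of the rated words in a text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    rated = [ratings[w] for w in words if w in ratings]
    return sum(rated) / len(rated) if rated else None

# Two very different uses of "fine" receive the identical score:
print(score("The weather is fine.", ratings))   # 6.74
print(score("I got a parking fine.", ratings))  # 6.74
```

The adjective and the parking ticket get the same 6.74, because the method sees only the spelling, never the sense.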

Another assumption that the authors make is that the respondents in their survey can speak for the entire population. The authors used Amazon’s Mechanical Turk to crowdsource evaluations for the words in their sub-corpus. 60% of the American people on Mechanical Turk are women and 83.5% of them are white. The authors used respondents located in the United States and India. Can we assume that these respondents have opinions about the words in the corpus that are representative of the entire population of English speakers? Here are the ratings for the various ways of writing laughter in the authors’ corpus:

Laughter token   Rating
ha               6.00
hah              5.92
haha             7.64
hahah            7.30
hahaha           7.94
hahahah          7.24
hahahaha         7.86
hahahahaha       7.70
hee              5.40
heh              5.98
hehe             6.48
hehehe           7.06

And here is a picture of a character expressing laughter:

Pictured: Good times. Credit: Batman #36, DC Comics, Scott Snyder (wr), Greg Capullo (p), Danny Miki (i), Fco Plascenia (c), Steve Wands (l).

Can we assume that the textual representation of laughter is always as positive as the respondents rated it? Can we assume that everyone or most people on Twitter use the various textual representations of laughter in a positive way – that they are laughing with someone and not at someone?

Finally, let’s compare some data. The good people at the Corpus of Contemporary American English (COCA) have created a word list based on their 450 million word corpus. The COCA corpus is specifically designed to be large and balanced (although the problem of dealing with spoken language might still remain). In addition, each word in their corpus is annotated for its part of speech, so they can recognize when a word like state is either a verb or a noun. This last point is something that Dodds et al. did not do – all forms of words that are spelled the same are collapsed into being one word. The compilers of the COCA list note that “there are more than 140 words that occur both as a noun and as a verb at least 10,000 times in COCA”. This is the type/token issue that came up in my previous post. A corpus that tags each word for its part of speech can tell the difference between different types of the “same” word (state as a verb vs. state as a noun), while an untagged corpus treats all occurrences of state as the same token. If we compare the 10,000 most common words in Dodds et al. to a sample of the 10,000 most common words in COCA, we see that there are 121 words on the COCA list but not the Dodds et al. list (Here is the spreadsheet from the Dodds et al. paper with the COCA data – pnas.1411678112.sd01 – Dodds et al corpus with COCA). And that’s just a sample of the COCA list. How many more differences would there be if we compared the Dodds et al. list to the whole COCA list?
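
The two comparisons just described – set-differencing two frequency lists, and the type/token split that part-of-speech tagging makes possible – can be sketched like this. The four-word lists are placeholders standing in for the real 10,000-word lists:

```python
# Placeholder frequency lists standing in for the real 10,000-word
# COCA and Dodds et al. lists.
coca_top = {"state", "run", "fine", "point"}
dodds_top = {"state", "run", "banana"}

# Words on the COCA list but missing from the Dodds et al. list.
missing = coca_top - dodds_top
print(sorted(missing))  # ['fine', 'point']

# An untagged corpus collapses state (verb) and state (noun) into one type...
untagged_types = {"state", "state"}
# ...while a POS-tagged corpus keeps them distinct.
tagged_types = {("state", "VERB"), ("state", "NOUN")}
print(len(untagged_types), len(tagged_types))  # 1 2
```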

To sum up, the authors use their corpus of tweets, New York Times articles, song lyrics and books and ask us to assume (1) that they can make universal claims about language despite using only written data; (2) that their data is representative of all human language despite including only four registers; (3) that they have sampled their data correctly despite not knowing what types of people created the linguistic data and only including certain channels of published language; (4) that the relative positive or negative nature of the words in a text is indicative of how positive that text is despite the obvious fact that words can be spelled the same and still have wildly different meanings; (5) that the respondents in their survey can speak for the entire population despite the English-speaking respondents being from only two subsets of two English-speaking populations (USA and India); and (6) that their list of the 10,000 most common words in their corpus (which they used to rate all human language) is representative despite being uncomfortably dissimilar to a well-balanced list that can differentiate between different types of words.

I don’t mean to sound like a Negative Nancy and I don’t want to trivialize the work of the authors in this paper. The corpus that they have built is nothing short of amazing. The amount of feedback they got from human respondents on language is also impressive (to say the least). I am merely trying to point out what we can and cannot say based on the data. It would be nice to make universal claims about all human language, but the fact is that even with millions and billions of data points, we still are not able to do so unless the data is representative and sampled correctly. That means it has to include spoken data (preferably a lot of it) and it has to be sampled from all socio-economic human backgrounds.

Hat tip to the commenters on the last post and the redditors over at r/linguistics.

References

Dodds, Peter Sheridan, Eric M. Clark, Suma Desu, Morgan R. Frank, Andrew J. Reagan, Jake Ryland Williams, Lewis Mitchell, Kameron Decker Harris, Isabel M. Kloumann, James P. Bagrow, Karine Megerdoomian, Matthew T. McMahon, Brian F. Tivnan, and Christopher M. Danforth. 2015. “Human language reveals a universal positivity bias”. PNAS 112:8. http://www.pnas.org/content/112/8/2389

Dodds, Peter Sheridan, Kameron Decker Harris, Isabel M. Kloumann, Catherine A. Bliss, and Christopher M. Danforth. 2011. “Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter”. PLOS ONE. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026752#abstract0

Baker, Paul. 2010. Sociolinguistics and Corpus Linguistics. Edinburgh: Edinburgh University Press. http://www.ling.lancs.ac.uk/staff/paulb/socioling.htm

Linell, Per. 2005. The Written Language Bias in Linguistics. Oxon: Routledge.

Mair, Christian. 2015. “Responses to Davies and Fuchs”. English World-Wide 36:1, 29–33. doi: 10.1075/eww.36.1.02mai

Tognini-Bonelli, Elena. 2001. Studies in Corpus Linguistics, Volume 6: Corpus Linguistics as Work. John Benjamins. https://benjamins.com/#catalog/books/scl.6/main


If you’re not a linguist, don’t do linguistic research

A paper recently published in PNAS claims that human language tends to be positive. This was news enough to make the New York Times. But there are a few fundamental problems with the paper.

Linguistics – Now with less linguists!

The first thing you might notice about the paper is that it was written by mathematicians and computer scientists. I can understand the temptation to research and report on language. We all use it and we feel like masters of it. But that’s what makes language a tricky thing. You never hear people complain about math when they only have a high-school-level education in the subject. The “authorities” on language, however, are legion. My body has, like, a bunch of cells in it, but you don’t see me writing papers on biology. So it’s not surprising that the authors of this paper make some pretty basic errors in doing linguistic research. They should have been caught by the reviewers, but they weren’t. And the editor is a professor of demography and statistics, so that doesn’t help.

Too many claims and not enough data

The article is titled “Human language reveals a universal positivity bias” but what the authors really mean is “10 varieties of languages might reveal something about the human condition if we had more data”. That’s because the authors studied data in 10 different languages and they are making claims about ALL human languages. You can’t do that. There are some 6,000 languages in the world. If you’re going to make a claim about how every language works, you’re going to have to do a lot more than look at only 10 of them. Linguists know this, mathematicians apparently do not.

On top of that, the authors don’t even look at that much linguistic data. They extracted 5,000–10,000 of the most common words from larger corpora, for a combined total of roughly 100,000 words across all of their sub-corpora. That is woefully inadequate. The Brown corpus contains 1 million words and it was made in the 1960s. In this paper, the authors claim that 20,000 words are representative of English. That is, not 20,000 different words, but the 5,000 most common words in each of their English sub-corpora. So 5,000 words each from Twitter, the New York Times, music lyrics, and the Google Books Project are supposed to represent the entire English language. This is shocking… to a linguist. Not so much to mathematicians, who don’t do linguistic research. It’s pretty frustrating, but this paper is a whole lotta ¯\_(ツ)_/¯.

To complete the trifecta of missing linguistic data, take a look at the sources for the English corpora:

Corpus                         Word count
English: Twitter               5,000
English: Google Books Project  5,000
English: The New York Times    5,000
English: Music lyrics          5,000

If you want to make a general claim about a language, you need to have data that is representative of that language. 5,000 words from Twitter, the New York Times, some books and music lyrics does not cut it. There are hundreds of other ways that language is used, such as recipes, academic writing, blogging, magazines, advertising, student essays, and stereo instructions. Linguists use the terms register and genre to refer to these and they know that you need more than four if you want your data to be representative of the language as a whole. I’m not even going to ask why the authors didn’t make use of publicly available corpora (such as COCA for English). Maybe they didn’t know about them. ¯\_(ツ)_/¯

Say what?

Speaking of registers, the overwhelmingly most common way that language is used is speech. Humans talking to other humans. No matter how many written texts you have, your analysis of ALL HUMAN LANGUAGE is not going to be complete until you address spoken language. But studying speech is difficult, especially if you’re not a linguist, so… ¯\_(ツ)_/¯

The fact of the matter is that you simply cannot make a sweeping claim about human language without studying human speech. It’s like doing math without the numeral 0. It doesn’t work. There are various ways to go about analyzing human speech, and there are ways of including spoken data into your materials in order to make claims about a language. But to not perform any kind of analysis of spoken data in an article about Language is incredibly disingenuous.

Same same but different

The authors claim their data set includes “global coverage of linguistically and culturally diverse languages” but that isn’t really true. Of the 10 languages that they analyze, 6 are Indo-European (English, Portuguese, Russian, German, Spanish, and French). Besides, what does “diverse” mean? We’re not told. And how are the cultures diverse? Because they speak different languages and/or because they live in different parts of the world? ¯\_(ツ)_/¯

The authors also had native speakers judge how positive, negative or neutral each word in their data set was. A word like “happy” would presumably be given the most positive rating, while a word like “frown” would be on the negative end of the scale, and a word like “the” would be rated neutral (neither positive nor negative). The people ranking the words, however, were “restricted to certain regions or countries”. So, not only are 14,000 words supposed to represent the entire Portuguese language, but residents of Brazil are rating them and therefore supposed to be representative of all Portuguese speakers. Or, perhaps that should be residents of Brazil with internet access.

[Update 2, March 2: In the following paragraph, I made some mistakes. I should not have said that ALL linguists believe that rating language is a notoriously poor way of doing an analysis. Obviously I can’t speak for all the linguists everywhere. That would be overgeneralizing, which is kind of what I’m criticizing the original paper for. Oops! :O I also shouldn’t have taken the rating used in the paper and tied it to grammaticality judgments. Grammaticality judgments have been shown to be very, very consistent for English sentences. I am not aware of whether people tend to be as consistent when rating words for how positive, negative, or neutral they are (but if you are, feel free to post in the comments). So I think the criticism still stands. Some say that 384 English-speaking participants are more than enough to rate a word’s positivity. If people rate words as consistently as they do sentences, then this is true. I’m not as convinced that people do that (until I see some research on it), but I’ll revoke my claim anyway. Either way, the point still stands – the positivity of language does not lie in the relative positive or negative nature of the words in a text (the next point I make below). Thanks to u/rusoved, u/EvM and u/noahpoah on reddit for pointing this out to me.] There are a couple of problems with this, but the main one is that having people rate language is a notoriously poor way of analyzing language (notorious to linguists, that is). If you ask ten people to rate the grammaticality of a sentence on a scale from 1 to 10, you will get ten different answers. I understand that the authors are taking averages of the answers their participants gave, but they only had 384 participants rating the English words. I wouldn’t call that representative of the language. The number of participants for the other languages goes down from there.
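
One way to see why the averaged ratings worry me: two rating distributions can share a mean while reflecting completely different levels of agreement, and a single per-word score hides that difference. The numbers below are invented for illustration:

```python
import statistics as st

# Two invented sets of ratings for one word on a 1-9 scale.
agreed = [5, 5, 5, 5, 5, 5]  # every rater says "neutral"
split = [1, 9, 1, 9, 1, 9]   # raters violently disagree

# Identical means...
print(st.mean(agreed), st.mean(split))    # 5 5
# ...but very different spreads.
print(st.stdev(agreed), st.stdev(split))  # 0.0 vs ~4.38
```

A word everyone finds mildly pleasant and a word that half the raters love and half hate end up with the same published score.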

A loss for words

A further complication with this article is in how it rates the relative positive nature of words rather than sentences. Obviously words have meaning, but they are not really how humans communicate. Consider the sentence Happiness is a warm gun. Two of the words in that sentence are positive (happiness and warm), while only one is negative (gun). This does not mean it’s a positive sentence. That depends on your view of guns (and possibly Beatles songs). So it is potentially problematic to look at how positive or negative the words in a text are and then say that the text as a whole (or the corpus) presents a positive view of things.
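
That word-counting logic can be sketched on the Beatles line itself. The positive/negative labels here are my own illustrative assignments, not ratings from the paper:

```python
# Label words by polarity and tally them, ignoring unlabeled words --
# the kind of bag-of-words tally the paper's method amounts to.
polarity = {"happiness": "positive", "warm": "positive", "gun": "negative"}

sentence = "Happiness is a warm gun"
labels = [polarity.get(word.lower()) for word in sentence.split()]
positives = labels.count("positive")
negatives = labels.count("negative")

# Two positive words to one negative word -- yet that hardly makes the
# sentence itself positive.
print(positives, negatives)  # 2 1
```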

Lost in Google’s Translation

The last problem I’ll mention concerns the authors’ use of Google Translate. They write

We now examine how individual words themselves vary in their average happiness score between languages. Owing to the scale of our corpora, we were compelled to use an online service, choosing Google Translate. For each of the 45 language pairs, we translated isolated words from one language to the other and then back. We then found all word pairs that (i) were translationally stable, meaning the forward and back translation returns the original word, and (ii) appeared in our corpora in each language.

This is ridiculous. As good as Google Translate may be at helping you understand a menu in another country, it is not a good translator. Asya Pereltsvaig writes that “Google Translate/Conversation do not translate. They match. More specifically, they match (bits of) the original text with best translations, where ‘best’ means most frequently found in a large corpus such as the World Wide Web.” And she has caught Google Translate using English as an intermediate language when translating from one language to another. That means that when going between two languages that are not English (say French and Russian), Google Translate will first translate the word into English and then into the target language. This is a methodological problem for the article: relying on the online Google Translate makes their analysis untrustworthy.
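
The “translationally stable” filter the authors describe can be sketched with a stand-in translate() function. Google Translate offers no such simple call, and the lookup table and word pairs below are invented; this only shows the shape of the round-trip check:

```python
# Toy bidirectional translation table (invented data).
table = {
    ("chat", "fr", "en"): "cat",
    ("cat", "en", "fr"): "chat",       # round-trips cleanly: stable
    ("temps", "fr", "en"): "weather",
    ("weather", "en", "fr"): "météo",  # comes back as a different word: unstable
}

def translate(word, src, dst):
    """Stand-in for a translation-service lookup."""
    return table.get((word, src, dst), word)

def is_stable(word, src, dst):
    """A word is 'translationally stable' if forward plus back
    translation returns the original word."""
    return translate(translate(word, src, dst), dst, src) == word

print(is_stable("chat", "fr", "en"))   # True
print(is_stable("temps", "fr", "en"))  # False
```

Note what the filter throws away: temps/weather is a perfectly good translation pair, but it fails the round trip, so polysemous words – often the most common ones – are systematically excluded.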


It’s unfortunate that this paper made it through to publication and it’s a shame that it was (positively) reported on by the New York Times. The paper should either be heavily edited or withdrawn. I’m doubtful that will happen.


Update: In the fourth paragraph of this post (the one which starts “On top of that…”), there was some type/token confusion concerning the corpora analyzed. I’ve made some minor edits to it to clear things up. Hat tip to Ben Zimmer on Twitter for pointing this out to me.

Update (March 17, 2015): I wrote a more detailed post (more references, less emoticons) on my problems with the article in question. You can find that here.

Book review: Cross-cultural Pragmatics by Anna Wierzbicka

If you study linguistics, you will probably come across Anna Wierzbicka’s Cross-Cultural Pragmatics, perhaps as an undergrad, but definitely if you go into the fields of pragmatics or semantics. It’s a seminal work for reasons I will get into soon. The problem is that much of the data used to draw its conclusions is oversimplified. This review is written for people who encounter this book in their early, impressionable semesters.

What’s it all about?

With Cross-cultural pragmatics, Wierzbicka was able to change the field of pragmatics for the better. Her basic argument runs like this: the previous “universal” rules of politeness that govern speech acts are wrong. The rules behind speech acts should instead be formulated in terms of culture-specific conversational strategies. Also, the mechanisms of speech acts are culture-specific, meaning that they reflect the norms and assumptions of a culture. Wierzbicka argues that language-specific norms of interaction should be linked to specific cultural values.

At the time Cross-cultural pragmatics was written, this needed to be said. There was more involved in speech acts than scholars were acknowledging. And the explanations used for speech acts in English were not entirely appropriate to explain speech acts in other languages or even other English-speaking cultures, although they were being used to. So Wierzbicka gets credit for helping to advance the field of linguistics.

So what’s wrong with that?

The problem I have with this book is that Wierzbicka lays out a research method designed to avoid oversimplifications, but then oversimplifies her data to reach conclusions. Wierzbicka’s method in Cross-cultural pragmatics can be seen as a step in the development of semantic primes, an approach which aims to explain all of the words in a language using a set of terms or concepts (do, say, want, etc.) that cannot be simplified further, their meanings being innately understood and their existence being cross-cultural.

For example, Wierzbicka analyzes self-assertion in Japanese and English. She says that Japanese speakers DO NOT say “I want/think/like X”, while English speakers DO. She then translates the Japanese term enryo (restraint) like this:

X thinks: I can’t say “I want/think/like this” or “I don’t want/think/like this”
   Someone could feel bad because of this
X doesn’t say it because of this
X doesn’t do some things because of this

This is all fine and good, but you can probably see how such an analysis has the potential to unravel. Just taking polysemy and context into account means that each and every term must be thoroughly explained using the above system.

But whatever. Let’s just say that it’s possible to do so. Semantic primes are still discussed in academia and I’m not here to debate their usefulness. What I want to talk about is how Wierzbicka oversimplifies the language and cultures that she compares. Although there are many examples to choose from, I’ll only list a few that come in quick succession.


Those manly Aussies

In describing Australian culture, Wierzbicka says that “Shouting is a specifically Australian concept” (173). And yet she doesn’t explain how it is any different from buying a round or why this concept is “specifically Australian”. She then describes the Australian term dob in but does not tell us how it differs from snitch. Finally, she notes that Australians use the term whinge an awful lot. Whinge is used to bolster Wierzbicka’s claim that Australians value “tough masculinity, gameness, and resilience” and that they refer to British people as whingers.

First of all, how Wierzbicka misses the obvious similarities between whinging and whining is beyond me. She instead compares whinge to complain. Second, British people refer to other British people as “whingers”, so how exactly is whinge “marginal” in “other parts of the English-speaking world”? (180) Finally, wouldn’t using a negative term like whinge say more about the strained relations between the Australians and the British than it would about any sort of heightened “masculine” Australian identity? Does stunad prove that Italian-Americans have a particular or peculiar dislike of morons compared to other cultures?

We should have used a corpus

In other parts of Cross-cultural pragmatics, Wierzbicka seems to be cherry-picking the speech acts that she uses to evaluate the norms and values of the cultures she compares. This can be seen from the following passage on the differences between (white) Anglo-American culture and Jewish or black American culture:

The expansion of such expressions [Nice to have met you, Lovely to see you, etc.] fits in logically with the modern Anglo-American constraints on direct confrontation, direct clashes, direct criticisms, direct ‘personal remarks’ – features which are allowed and promoted in other cultures, for example, in Jewish culture or in Black American culture, in the interest of cultural values such as ‘closeness’, ‘spontaneity’, ‘animation’, or ‘emotional intensity’, which are given in these cultures priority over ‘social harmony’.
This is why, for example, one doesn’t say freely in (white) English, ‘You are wrong’, as one does in Hebrew or ‘You’re crazy’, as one does in Black English. Of course some ‘Anglos’ do say fairly freely things like Rubbish! or even Bullshit!. In particular, Bullshit! (as well as You bastard!) is widely used in conversational Australian English. Phrases of this kind, however, derive their social force and their popularity partly from the sense that one is violating a social constraint. In using phrases of this kind, the speaker defies a social constraint, and exploits it for an expressive purpose, indirectly, therefore, he (sometimes, she) acknowledges the existence of this constraint in the society at large. (pp. 118–9)

Do we know that white Anglo-Americans don’t say “You are wrong” or that they say it less than Jewish people? I heard a white person say it today, but that is just anecdotal evidence. Obviously, large representative corpora were not around to consult when Wierzbicka wrote Cross-cultural pragmatics, but it would be nice to see at least some empirical data points. Instead we’re left with just the assertion that black Americans’ “You’re crazy” and Anglo-Americans’ “Bullshit!” are not equal, which to me is confusing and misguided. Also, aren’t black people violating a social norm by saying “you’re crazy”?

Wierzbicka’s inability to consult a corpus (because there wasn’t one available at the time, granted) is why I am not consulting one right now, but just off the top of my head, I can think of other (common) expressions from both cultures that would say the exact opposite of what Wierzbicka claims. For example, as Pryor (1979) pointed out, whites have been known to say things like “Cut the shit!” How is this different from Black English’s “You’re crazy!”?

This leads me to the final major problem I have with Cross-cultural pragmatics: while classifications of speech acts based on “directness,” etc. were insufficient for the reasons that Wierzbicka points out, her classifications suffer from not being able to group similar constructions together, which is one of the goals of describing a large system such as language. They are too simplistic and specific to each construction. There are always certain constructions that don’t fit the mold that Wierzbicka lays out, which seems to me a similar problem to the one she’s trying to solve. So the problem gets shifted instead of solved.

Still, I think Wierzbicka was justified in changing the ways that researchers talked about speech acts. I also think she was right in shattering the Anglo-American and English language bias which was prevalent at the time. It’s those points that make Cross-cultural pragmatics an important work. The lack of empirical data and the over-generalizations are unfortunate, but so are lots of other things. Welcome to academia, folks.

Up next: Superman: The High-Flying History of America’s Most Enduring Hero by Larry Tye

Meta Book Review: Reviews of Sampson’s The Language Instinct Debate

When I last left you*, we had just talked about how Geoffrey Sampson’s The Language Instinct Debate is a remarkable take-down of Steven Pinker’s The Language Instinct and the nativist argument, or the idea that language is genetic. I came down pretty hard on the nativists, who I termed “Chomskers” (CHOMsky + PinKER + otherS) and rightly so since their theory amounts to a bunch of smoke and mirrors. For this post, I’m going to review the reviews of Sampson’s book. It’ll be like what scholars call a meta-analysis, except nowhere near as lengthy or peer-reviewed. For the absence of those, I promise more swear words. For those just joining us, here are my reviews of Pinker’s The Language Instinct and Sampson’s The Language Instinct Debate, the first two parts of this three-part series of posts. If you’re new to the subject matter (linguistic nativism), they’ll help you understand what this post is all about. If you already know all about Universal Grammar (and have read my totally bitchin’ reviews of the aforementioned books), then let’s get on with the show.

I know you are, but what am I?

Victor M. Longa’s review of The Language Instinct Debate

Longa’s review would be impressive if it weren’t written in classic Chomskers style. He seems to address Sampson’s book in a thoughtful, step-by-step manner, but his arguments boil down to nothing but “Sampson’s wrong because language is innate.” I know this sounds bad, but it’s the truth. A good example of Longa’s typical nativist style can be found here:

To sum up, S[ampson] tries, with difficulty, to explain the convergence between different languages by resorting only to the cultural nature of language. (Longa 1999: 338)

The disregard for other explanations is something to expect from the linguistic nativists. “You’re not considering that language is innate!” they protest. But innateness is all they consider. We must remember that linguistic nativism (or UG) is the unfalsifiable hypothesis. Any attempts to engage the theory in a logical way, such as Sampson has done, should be praised because of how much harm the proponents of the Universal Grammar Hypothesis (UGH) have done to the field of linguistics.

The belief that language is innate has become something more than an assumption to the nativists. This can be seen from Longa’s conclusion:

What is more, as I pointed out at the beginning of the paper, from the common-sense point of view, it is perfectly possible to conceive of a capacity such as language having been fixed in our species as a genetic endowment… (Longa 1999: 340)

It’s common-sense, goddammit! What’s wrong with you people?! Why can’t everyone just see that something we have no evidence for is real? How many times do we have to say it? Language is innate. Never mind that it’s perfectly possible to conceive of just about anything (it’s called, you know, imagination), or that the arguments for linguistic nativism fall down easier than an elephant on ice skates, just trust us when we say that language is innate. OK?

Longa goes on about the innateness of language:

To deny this possibility a priori, claiming that it sounds almost mad, suggests a biased perspective that has little to offer to the scientific study of language.

Know what else has little to offer the scientific study of language (or the scientific study of anything, for that matter)? Unfalsifiable theories. That’s why linguistic nativism has been denied. Scientific hypotheses are accepted only so long as they stand up to the tests meant to falsify them. But first (and I can’t stress this enough) they have to be falsifiable or they’re not scientific theories. Linguistic nativism has been considered for so long only because Chomskers won’t stop writing bullshit books about it and forcing it down students’ throats. My fellow budding scholars who had to write about UGH, I feel for you.

Longa’s review is followed by a reply from Sampson, which offers a simple way to see how unfalsifiable nativism is. Sampson quite rightly points out that the speed-of-acquisition argument made by Chomskers – which says that language must be innate because children learn it remarkably fast – is ridiculous because Chomskers have never specified how long it should take children to learn language in the absence of an innate Universal Grammar. They just say it’s innate and that kids learn language, like, really fast bro, and we’re supposed to take these claims as common-sense truth. This is par for the nativist course.

What he said

Stephen J. Cowley’s review of both books

Cowley’s review of both Pinker’s The Language Instinct and Sampson’s The Language Instinct Debate is a wonderful read and I want to quote the whole damn thing. While Cowley agrees that Sampson successfully refutes linguistic nativism, and that Pinker’s argument is akin to “saying that, because angels exist, miracles happen” (75), he rejects Sampson’s alternative account of the origin of language, a topic I have not addressed in these reviews. Fortunately, I don’t have to quote the whole paper because it’s available online. You should go read it here:
http://www.psy.herts.ac.uk/pub/sjcowley/docs/baby%26bathwater.pdf (PDF).

John H. McWhorter’s review in Language

Like Cowley, McWhorter writes that Sampson successfully refutes the Chomskers’ theory, saying that he “makes a powerful case that linguistic nativism […] has been grievously underargued, and risks looking to scientists in a hundred years like the search for phlogiston does to us now” (434). That’s putting it nicely, I think.

McWhorter raises concerns with some of Sampson’s methods, such as his discussion of hypotaxis and complexity, his refutation of Berlin and Kay’s classic color-term study, and WH-movement. McWhorter also worries that since Sampson only covers Chomsky’s writings up to 1980, his take-down of linguistic nativism may not be as strong as could be hoped because of the post-1980 development of the Principles and Parameters theory and Minimalism (two theories which are meant to deal with, you guessed it, problems with linguistic nativism. Surprise!). While I agree that it would have been nice to see Sampson discuss these theories (since they have their own typical nativism problems), I don’t believe their absence is as critical as McWhorter claims. McWhorter questions Sampson’s decision to stop at 1980 on the grounds that there’s nothing “solider to be pulled out of the bag” (Sampson 2005: 165), presuming that “certainly we would question a refutation of physics that used that justification to stop before string theory” (436). While I can see where he’s coming from, I think the bad analogy (which is something I’m pretty good at too) is particularly problematic here. Physics is founded on testable and falsifiable theories. Thanks to the contagious nature of nativism, linguistics these days is not.

What I especially like about McWhorter’s review is his acknowledgment that nativism has become something of a religion in linguistics. Commenting on the suspicious lack of response to Sampson’s book by nativists, McWhorter writes:

It may well be that Chomskyans harbor an argumentational firepower that would leave S[ampson] conclusively out-debated just as Chomsky’s detractors were in the 1960s and 1970s. But if such engagement is not even ventured, then claims that linguistic nativism is less a theory than a cult start looking plausible. (McWhorter 2008: 437)

Further Reading

This series of posts is by no means a review of all that has been said about UG or linguistic nativism. For those who wish to learn more, I suggest the following books.

The cultural origins of human cognition by Michael Tomasello

Tomasello’s book is a wonderful explanation of how children learn to speak and how human cognition does not need any innate language faculty. The theory he lays out has been called the Theory of Mind, which is an awful name, but it makes much more sense than anything I have ever read by nativists. Tomasello even has a few words for the nativists:

It is very telling that there are essentially no people who call themselves biologists who also call themselves nativists. When developmental biologists look at the developing embryo, they have no use for the concept of innateness. This is not because they underestimate the influence of genes – the essential role of the genome is assumed as a matter of course – but rather because the categorical judgment that a characteristic is innate simply does not help in understanding the process. (Tomasello 2000: 49)

If Chomskers’ theory left you shaking your head, and Sampson’s didn’t quite measure up, I highly recommend checking out Tomasello. As a bonus, this book is very much aimed at a wide audience, so three years of linguistics courses are not required.

What counts as evidence in linguistics, ed. by Martina Penke and Anette Rosenbach

This book is a collection of essays which address how the opposing fields in linguistics, formalism (or UG proponents) and functionalism, treat evidence in their research. The papers are excellent, not only because the authors are preeminent scholars in their fields, but also because each paper is followed by a response from an author of the opposing field. Even better, the responses are followed by replies from the author(s). It’s definitely on the hard-core linguistics side, so dabblers in this debate beware. As an example of what it contains, however, here is a link to a response to one of the articles by Michael Tomasello: http://www.eva.mpg.de/psycho/pdf/Publications_2004_PDF/what_kind_of_evidence_04.pdf (PDF). Not to toot Tomasello’s horn for him, but it really lays bare what scholars are up against when they attempt to engage nativists.

References

Cowley, Stephen J. 2001. “The baby, the bathwater and the ‘language instinct’ debate”. Language Sciences 23: 69–91. http://www.psy.herts.ac.uk/pub/sjcowley/docs/baby%26bathwater.pdf

Longa, Victor M. 1999. “Review article”. Linguistics 37(2): 325–343. http://dx.doi.org/10.1515/ling.37.2.325 (requires access to Linguistics).

McWhorter, John H. 2008. “The ‘language instinct’ debate (review)”. Language 84(2): 434–437. http://www.jstor.org/stable/40071054 http://dx.doi.org/10.1353/lan.0.0008 (requires access to either JSTOR or Project MUSE).

Penke, Martina and Anette Rosenbach (eds.). 2007. What counts as evidence in linguistics: The case of innateness. Amsterdam & Philadelphia: John Benjamins. http://benjamins.com/#catalog/books/bct.7/main

Sampson, Geoffrey. 1999. “Reply to Longa”. Linguistics 37(2): 345–350. http://dx.doi.org/10.1515/ling.37.2.345 (requires access to Linguistics, but a “submitted” online version can be found on Sampson’s site here: http://www.grsampson.net/ARtl.html)

Tomasello, Michael. 2000. The cultural origins of human cognition. Cambridge: Harvard University Press.

Up next: Punctuation..? by User design.

* A long, long time ago, I know. But I decided to focus all my powers on writing my Master’s thesis, which meant this blog got the shaft. Now that’s done and we’re back in business, baby. Go back up for the sweet, sweet linguistic goodness.

Book Review: The Language Instinct Debate by Geoffrey Sampson

The following is a book review and the second post in a series. The first post discussed Steven Pinker’s The Language Instinct. This post discusses Geoffrey Sampson’s The Language Instinct Debate, which is a critique of Pinker’s book. The third post will discuss some of the critics and reviews of Sampson’s book.

In a comment on the first post in this series, linguischtick (who has an awesome gravatar, by the way) pointed out that I didn’t mention two key points of the Chomskers (Chomsky + Pinker + their followers. Nom.) theory. As this post is about a book which is a direct “response to Steven Pinker’s The Language Instinct and Noam Chomsky’s nativism,” it would be good to remind ourselves of the claims that nativists make. Below are the claims along with some comments on them.

1. Speed of acquisition

Chomskyan linguists claim that kids learn language remarkably fast, so fast that it must be innate. But fast compared to what? How do we know kids don’t learn language very slowly? Chomskers have no answer. Sampson says this and then very cleverly points out that Chomsky has never supplied an amount of time it should take kids to learn language because “he argues that the data available to a language learner are so poor that accurate language learning would be impossible without innate knowledge – that is, no amount of time would suffice” (37, emphasis his).

2. Age dependence

Chomskers claim that the language instinct theory is supported by how our ability to learn a language diminishes greatly around puberty. Sampson quickly refutes this claim by showing how the evidence on which Chomskers based their claim fails “to distinguish language learning from any other case of learning” and that it is “perfectly compatible with the view that learning as a general process is for biological reasons far more rapid before puberty than later.” (41, emphasis his) So we see that leap of faith again. The evidence doesn’t suggest a language instinct, but that doesn’t stop Chomskers from jumping to that conclusion.

3. Poverty of the Stimulus

This is a major part of the Chomskers argument (and the only one that can be shortened into a perfectly applicable acronym – POS). Put simply, it goes like this: kids are not supplied with enough language info by their community to enable them to learn to speak. This is what Pinker was talking about when he snidely called Motherese – the style adults use when speaking to children – “folklore”. The poverty of the stimulus is a crazy idea, but don’t worry, it’s completely wrong. First, once linguists started researching Motherese, they found that it was much more “proper” than anyone had assumed. Sampson references one study that found “only one utterance out of 1500 spoken to the children was a disfluency.” (43) Chomskers also claim that some linguistic features never occur in spoken language and yet children learn the rules for them anyway. But wait a minute, have Chomskers ever looked for these mysterious linguistic features that never occur? Of course not. That’s not how they roll.

Sampson gives them a taste of their own medicine by writing

‘Hang on a minute,’ I hear the reader say. ‘You seem to be telling us that this man [Chomsky] who is by common consent the world’s leading living intellectual, according to Cambridge University a second Plato, is basing his radical reassessment of human nature largely on the claim that a certain thing never happens; he tells us that it strains his credulity to think that this might happen, but he has never looked, and people who have looked find that it happens a lot.’
Yes, that’s about the size of it. Funny old world, isn’t it! (47)

Another aspect of this piece of shit poverty of the stimulus argument is the so-called lack of negative evidence. This idea claims that kids aren’t given evidence of which types of constructions are not possible in language. It leads one to wonder how children could possibly learn which sentences to exclude as non-language. Sounds pretty interesting, huh? There must be a language instinct then, right? Sampson bursts Chomskers’ bubble:

The trouble with this argument is that, if it worked, it would not just show that language learning without innate knowledge is impossible: it would show that scientific discovery is impossible. We can argue about whether or not children get negative evidence from their elders’ language; but a scientist certainly gets no negative evidence from the natural world. When a heavy body is released near the surface of the Earth, it never remains stationary or floats upwards, displaying an asterisk or broadcasting a message ‘This is not how Nature works – devise a theory which excludes this possibility!’ (90)

4. Convergence of grammars

This claim asks how both smart and dumb people grow up speaking essentially the same language. Except they don’t, so forget it. Other linguists – the kind who like evidence and observable data – have shown that people don’t all speak the same.

5. Language universals

This is the idea that there are some structural properties which are found across every language in the world, even though there is no reason why they should be (since they’re not necessary to language). This is where Universal Grammar comes in. Sampson devotes a chapter to this broad argument and in one of the many parts that make this book an excellent read, he very cleverly takes the argument down by pointing out that universals are better evidence of the cultural development of language than they are of the biological innate theory of language. Using a theory developed by Herbert Simon, Sampson shows that, basically, the structural dependencies that Chomskers are so fond of arose out of normal evolutionary development because evolution favors hierarchical structure. Complex evolutionary systems – something Sampson argues language is – are hierarchically structured for a reason; they do not have to be innate.

If this is the crux of the language instinct argument, it’s almost laughable how easily it falls. As Sampson notes, even Chomskers don’t think it carries weight.

Steven Pinker himself has suggested that nativist arguments do not amount to much. In a posting on the electronic LINGUIST List (posting 9.1209, 1 September 1998), he wrote: ‘I agree that U[niversal] G[rammar] has been poorly defended and documented in the linguistics literature.’ Yet that literature comprises the only grounds we are given for believing in the language universals theory. If the theory is more a matter of faith than evidence and reasoned argument even for its best-known advocate, why should anyone take it seriously? If it were not that students have to deal with this stuff in order to get their degrees, how many takers would there be for it? (166)

Even a blind squirrel finds a nut sometimes

The really sad thing is that Universal Grammar is the crux of the Chomskers argument. Sampson writes that “at heart linguistic nativism is a theory about grammatical structure.” (71) More importantly, it’s a theory that gathers all the “evidence” it thinks supports its beliefs and dismisses any that doesn’t. It is Confirmation Bias 101.

But don’t take my word for it. Just before he knocks down the innatist belief that tree structures prove there’s a language instinct, Sampson points out that Chomskers don’t even know how to follow through with their own thoughts. He writes

Ironically, though, having been the first to realize that tree structure in human grammar is a universal feature that is telling us something about how human beings universally function, Chomsky failed to grasp what it is telling us. The universality of tree structuring tells us that languages are systems which human beings develop in the gradual, guess-and-test style by which, according to Karl Popper, all knowledge is brought into being. Tree structuring is the hallmark of gradual evolution. (141)

Hey-o!

So don’t violate or you’ll get violated

OK, right now the reader might think I’ve been too hard on Chomskers. Let me assuage your concerns. I’m a firm believer in treating people with the respect they deserve. So when I say that Chomskers have their heads stuck firmly up their own asses, it’s because saying “the facts don’t support their claims” is not what they deserve. A group of scientists that hates facts deserves derision. Researchers in every field use observable data to come to conclusions. Their publications are part of an ongoing debate among other researchers, who can support or refute their claims based on more data. Everyone plays by these rules because they are in everyone’s best interest. All infamous academic quarrels aside, Chomskers would prefer not to back up their claims with observable data or engage in any kind of debate with scientists. The bum on the street shouting that the world is going to end has the advantage of being bat-shit crazy. What’s Chomskers’ excuse?

I suppose they could say that they are well-established. But to my mind that just explains their unscientific behavior. What’s going to happen to all those grants and faculty positions if people stop believing in Chomskers’ witchcraft? Sampson writes

“Nativist linguistics is now the basis of so many careers and so many university departments that it feels itself entitled to a degree of reverence. Someone who disagrees is expected to pull his punches, to couch his dissent in circumspect and opaquely academic terms – and of course, provided he does that, the nativist community is adept at verbally glossing over the critique in such a way that, for the general reader, not a ripple is left disturbing the public face of nativism. But reverence is out of place in science. The more widespread and influential a false theory has become, the more urgent it is to puncture its pretensions. Taxpayers who maintain the expensive establishment of nativist linguistics do not understand themselves to be paying for shrines of a cult: they suppose that they are supporting research based on objective data and logical argument.” (129)

Chomskers have been selling you snake oil for 60 years, they can’t give it up now. They have to double-down. Now’s the time to really push the limits of decency in academia. Take a look:

“Paul Postal discusses in his Foreword the fact that my critique of linguistic nativism has been left unanswered by advocates of the theory. I am not alone there: various stories go the rounds about refusals by leading figures of the movement to engage with their intellectual opponents in the normal academic fashion, for fear that giving the oxygen of publicity to people who reject nativist theory might encourage the public to read those people and find themselves agreeing. […] The interesting point here is a different one. Nowhere in Words and Rules does Pinker say that he is responding to my objection. My book introduced the particular examples of Blackfoot and pinkfoot into this debate, and they are such unusual words that Pinker’s use of the same examples cannot be coincidence. He is replying to my book; but he does not mention me.” (127-8)

I don’t think I need to point out the shamefulness of such actions.

I read Steven Pinker and all I got was this lousy blog post

Reading Sampson after reading Pinker is a lesson in frustration, but not because of any problems with Sampson’s book. On the contrary, The Language Instinct Debate is very well written. Sampson not only clearly points out why Chomsky and Pinker’s theories are wrong, but he does so in a seemingly effortless way. Sometimes this is obvious because Chomskers didn’t even look at the evidence, they just made something up and held out their hands. Sometimes this is frustrating because I wasted time reading Pinker’s 450-page sand castle that Sampson crumbled in less than half of that. The Language Instinct Debate may leave you wondering how you ever thought Chomskers were on to something when Sampson makes the counter-evidence seem so blatantly obvious.

In the next and final post of this series, I’ll talk about some of the reviews and critics of Sampson’s book. For now, I’ll leave you with how Chomskers’ refusal to check the evidence or believe anyone who has, along with their outstretched hand and their demand that you believe them, has inspired me to write a book of my own. It’s called Paris is the Capital of Germany, China is in South America, and Other Reasons Why I Hate Maps.

It’s due out at the end of never because ugh.

References

Sampson, Geoffrey. 2005. The Language Instinct Debate. London & New York: Continuum.

Book Review: The Language Instinct by Steven Pinker

The following is a book review and the first post in a series. This post discusses Steven Pinker’s The Language Instinct. The second post discusses Geoffrey Sampson’s The Language Instinct Debate, which is a critique of Pinker’s book. The third post will discuss some of the critics and reviews of Sampson’s book.

In order to talk about Steven Pinker and linguistics, I first have to explain a bit about Noam Chomsky and linguistics. Chomsky started writing about linguistics in the 1950s and through sheer force became a major player in the field. This did not, however, mean that any of Chomsky’s theories carried weight. On the contrary, they were highly speculative and devoid of empirical evidence. Chomsky is the armchair linguist extraordinaire. The audacity of his theory, however, was that it proposed humans are born with something called Universal Grammar, an innate genetic trait that interprets the common underlying structure of all languages and allows us to effortlessly learn our first language. Extraordinary claims require extraordinary evidence, but it’s been over 50 years and the evidence has never come. On top of that, the linguist John McWhorter (who partly inspired this series of posts) has said that “There is an extent to which any scientific movement is partly a religion and that is definitely true of the Chomskyans.” As we’ll see, the analogy runs much deeper than that.

What you need to know for this review is that Steven Pinker is a Chomskyan. Therefore, this post will discuss not only The Language Instinct, but also the general theories behind it, since Pinker’s book is at the forefront of carrying on the (misguided) notions of Chomskyan linguistics. It’s not going to be pretty, but trust me, I know what I’m doing. To make things a bit easier on us all, instead of referring to Chomsky and Pinker and their cult followers separately, I’m going to call them Chomskers. (LOLcat says “meow”?).

Steven Pinker has got a bridge to sell you

On page 18, Pinker contrasts an innate origin of language with a cultural origin to define what he means by a language “instinct”:

Language is not a cultural artifact that we learn the way we learn to tell time or how the federal government works. Instead, it is a distinct piece of biological makeup of our brains. Language is a complex, specialized skill, which develops in the child spontaneously without conscious effort or formal instruction, is deployed without awareness of its underlying logic, is qualitatively the same in every individual, and is distinct from more general abilities to process information or behave intelligently. For these reasons some cognitive scientists have described language as a psychological faculty, a mental organ, a neural system, and a computational module. But I prefer the admittedly quaint term ‘instinct.’ It conveys the idea that people know how to talk in more or less the sense that spiders know how to spin webs.

It’s possible to deconstruct the incongruities of that passage, but that’s a job for another post (specifically, the one right after this, Sampson’s critique of Pinker). For now, just replace “language” in that passage with “making a sandwich” because to most linguists, the idea that our ability to make a sandwich is a “distinct piece of biological makeup of our brains” makes just as much sense as Pinker’s notion about language. So… Great argument, let’s eat!

Instead of focusing on the logical arguments that refute Pinker’s theory, what I want to discuss here is the frustration that comes from reading The Language Instinct and Chomskers literature when you know there are other more tenable theories out there.

Don’t drink the Kool-Aid

The first problem has to do with what I’ll call the Chomskers’ Leap of Faith. This involves the theory that there is an underlying structure common to all languages and that its form and reasoning is innate to the human brain. It is called Universal Grammar. In a sense, our brains give us a basic language structure that we can then extrapolate to our mother tongue, whatever that may be. To Chomskers, that is how people learn how to speak so quickly – they already have the fundamental tool, or language instinct, needed to develop language.

How did Chomskers arrive at such a theory, you ask? Simple, they made it up. Universal Grammar was conjured out of thin air (i.e. Chomsky’s mind) and after five decades there is still no solid evidence of its existence. This is the leap of faith I’m talking about. A good example of it comes from two bullet points on page 409:

  • Under the microscope, the babel of languages no longer appear to vary in arbitrary ways and without limit. One now sees a common design to the machinery underlying the world’s languages, a Universal Grammar.
  • Unless this basic design is built in to the mechanism that learns a particular grammar, learning would be impossible. There are many possible ways of generalizing from parents’ speech to the language as a whole, and children home in on the right ones, fast.

These ideas are completely speculative (also known as “pure bullshit”), but they illustrate Pinker’s leap of faith and circular logic. He thinks that because kids speak, they must have Universal Grammar, and because they have Universal Grammar, they must speak. Chomskers love circular logic. It’s what their temple is built on. Pinker’s The Language Instinct is 450 pages of that kind of reasoning. Nothing in the 400 pages leading up to those bullets requires a belief in Universal Grammar. It’s just cherry-picked, misleading, or outright refuted studies.

And the Lord said unto Chomskers…

Another infuriating aspect of reading Chomskers is the pretentiousness of their prose. One gets the feeling that they are reading the Word of God (Noam Chomsky, to the Chomskers) sent down from on high. Instead of taking other theories into account, or even trying to prove why other theories are wrong, they simply dismiss them presumptuously. And they lead unsuspecting readers to do the same. Take this quote from page 39:

First, let’s do away with the folklore that parents teach their children language. No one supposes that parents provide explicit grammar lessons, of course, but many parents (and some child psychologists who should know better) think that mothers provide children with implicit lessons […] called Motherese.

Calling “Motherese” – which is a seriously studied and empirically proven phenomenon – “folklore” doesn’t make it so. Why Pinker would do such a thing seems strange at first, but you have to realize that that’s what Chomskers do. That is how they deal with other solid linguistic studies that have the possibility of refuting their claims (which, remember, have no empirical evidence). The attitude of contempt didn’t work for Noam Chomsky and it’s not going to work for Steven Pinker.

So why does he do it? As the linguist Pieter A. Seuren wrote in Western Linguistics: An Historical Introduction:

Frequently one finds [Chomsky] use the term ‘exotic’ when referring to proposals or theories that he wishes to reject, whereas anything proposed by himself or his followers is ‘natural’ or ‘standard’. […]
One further, particularly striking feature of the Chomsky school must be mentioned in this context, the curious habit of referring to and quoting only members of the same school, ignoring all other linguists except when they have been long dead. The fact that the Chomsky school forms a close and entirely inward looking citation community has made some authors compare it to a religious sect or, less damningly, a village parish. No doubt there is a point to this kind of comparison, but one should realize that political considerations probably play a larger part in Chomskyan linguistics than is customary in either sects or village parishes. (525)

The problem again lies in Chomskers’ impression that only their theory exists. The bored, novice, or uncritical reader – and, you know, anyone being tested on this book – is liable to take Pinker at face value. In Chapter 8, aptly titled “The Tower of Babel,” Pinker really lays on the God-given truth of Universal Grammar. He writes

What is most striking of all is that we can look at a randomly picked language and find things that can sensibly be called subjects, objects, and verbs to begin with. After all, if we were asked to look for the order of subject, object, and verb in musical notation, or in the computer programming language FORTRAN, or in Morse code, or in arithmetic, we would protest that the very idea is nonsensical. It would be like assembling a representative collection of the world’s cultures from the six continents and trying to survey the colors of their hockey team jerseys or the form of their harakiri rituals. We should be impressed, first and foremost, that research on universals of grammar is even possible!

Except we shouldn’t. Chomskers have been pulling their “theories” out of their collective asses for decades now. Why would anyone be impressed that “research” on something they made up is “possible”? Are you impressed with people in tin foil hats researching UFO landings? That’s not to mention the fact that we invented the concepts of “subject” and “verb” to apply to language, just like we invented “base 10” and “base 60” to apply to arithmetic. Looking for those in language would be nonsensical. But looking for something that could sensibly be called a base in any randomly picked counting system would be – shock! awe! – possible and completely unimpressive. Pinker does a disservice to the reader by equating the existence of something like nouns in all of the world’s languages to the “existence” of Universal Grammar. There is evidence for one, not the other. The Bible tells us that the world was created. That is a fact. The Bible also tells us that God created the world. That is a statement of belief.

In a footnote, Seuren quotes Pinker’s admiration for Chomsky and then says “It seems that Pinker forgot to take into account the possibility that there may also be valid professional reasons for uttering severe criticisms vis-à-vis Chomsky.” (526) In the same way that a Catholic priest is unlikely to quote from the Koran in his sermon, Chomskers will not address any other theories in their writing. That’s alright for a parish; it’s not alright for academia.

At this point you may be wondering how the Chomskers’ theories have survived for so long. It has to do with their outlandishness and their unwillingness to engage with critics. As Seuren notes, “And since no other school of linguistics would be prepared to venture into areas of theorizing so far removed from verifiable facts and possible falsification, the Chomskyan proposals could be made to appear unchallenged.” (284) By the time other linguists took note of what the Chomskers were up to, it was too late. They had already established their old boys’ club. What’s interesting is that linguists need not bother trying to tear down the Chomskers, since books like The Language Instinct demonstrate that the closer Chomskers try to bring their theory to verifiable facts, the more they falsify it. I don’t know if Pinker realized this, but writing about shit as if it were Shinola has never been a problem for Chomskers. In a subsection titled No arguments were produced, just rhetoric Seuren writes,

Despite twenty-odd years of disparagement from the side of Chomsky and his followers, one has to face the astonishing fact that not a single actual argument was produced during that period to support the attitude of dismissal and even contempt that one finds expressed, as a matter of routine, in the relevant Chomsky-inspired literature. Quasi-arguments, on the contrary, abounded. (514)

Linguistics does not work that way. Good night!

I told you the religion analogy was going to be more appropriate than it seemed at first. Belief in Universal Grammar is very much like belief in a god – you can’t see it, but it’s there. But that’s not science! To some people, the sunrise is proof that god exists. To astronomers, the sun does not actually “rise”. To Chomskers, speech is proof that Universal Grammar exists. To linguists, speech does not require such a leap of faith.

With his hawkish proclamations of the existence of Universal Grammar and his complete dismissal of any criticism, Noam Chomsky has done more harm than good to linguistics. Seuren says that “this behavior on Chomsky’s part has caused great harm to linguistics. Largely as a result of Chomsky’s actions, linguistics is now sociologically in a very unhealthy state. It has, moreover, lost most of the prestige and appeal it commanded forty years ago.” (526)

In an ironic turn of events considering his liberal political leanings, Chomsky and his ilk have become the Fox News of linguistics – they pull their theories out of thin air, shout them at the top of their lungs, and ridicule any who say otherwise. And just like the scare tactics of Fox News, the idea of a language instinct sells. McWhorter quite politely explains the Chomskers’ zealotry by saying “they want to find [a language instinct], they’re stimulated by this idea – as far as the counter evidence, most of them are too busy writing grants to pay much attention.” But that’s being too kind. If you ask me, bullshitting is their business… and business is good.

All this is unfortunate

To sum up, is there a language instinct? Maybe. Does Steven Pinker present a valid case for a language instinct? No.

To return to our religious analogy, you can believe in the Christian god, or in Buddha, or in the Flying Spaghetti Monster and there’s nothing wrong with that. But you can’t prove any of these gods exist (apologies to the Pastafarians, who have presented some very compelling evidence). Neither can Chomskers prove that a language instinct exists. I suppose there’s nothing wrong with believing it does, but you’d better have some facts to back up your theory if you want others to follow. Smoke and mirrors are interesting when used in magic shows, but infuriating when used in academic prose.

With a sly patronizing of those who cannot put up with Chomsky’s dense prose and a crafty acknowledgement of Chomsky’s intellectual superiority, Pinker writes

And who can blame the grammarphobe, when a typical passage from one of Chomsky’s technical works reads as follows? […quotes some mumbo jumbo from Chomsky…] All this is unfortunate […] Chomsky’s theory […] is a set of discoveries about the design of language that can be appreciated intuitively if one first understands the problems to which the theory provides solutions. (104)

Pinker complains about others who seem to have not read Chomsky, but I get the sense that Chomsky is the only linguist Pinker has ever read. Either Pinker knows of other linguistic theories and he’s not telling (i.e., he’s being deceptive) or he doesn’t know of them at all (i.e., he hasn’t done his research). Either way, it’s poor scholarship. As we’ll see in the next post, Pinker knows of Sampson’s theory and he uses examples from Sampson’s book without acknowledgment. That’s also poor scholarship, but of the kind that is common to Chomskers.

References

McWhorter, John. 2004. “When Language Began”. The Story of Human Language. The Great Courses: The Teaching Company. Course No. 1600.

Pinker, Steven. 1994. The Language Instinct: The New Science of Language and Mind. Penguin Group: London.

Seuren, Pieter A. M. 2004 (1998). Western Linguistics: An Historical Introduction. Oxford; Malden (MA): Blackwell.

Up next: A review of The Language Instinct Debate by Geoffrey Sampson.

[Update – This post originally had Noam Chomsky’s name written as “Chompsky”. Oops. Hehe. A word to the wise: Before adding words to your word processor’s dictionary, make sure they’re spelled correctly. Hat tip to Angela for pointing out the mistake.]

Whatever Happened to Innovation in the USA?

Or, why the difference between innovation from above and innovation from below matters

In a recent interview on NPR’s Science Friday program, everyone’s favorite astrophysicist* Neil deGrasse Tyson talked about innovation. Tyson has written a new book which discusses, among other things, the way that society benefited from innovation in the space race of the 1960s. With this post I want to tell you about my own experience with innovation in the USA and the two different types of innovation there are, an idea that should be raised more often.

Tyson argues that the innovation needed in space travel – the innovation needed to go farther and farther every single day – brought untold benefits to society through the engineers it needed, the products it created (which were applied elsewhere), through the economy it stimulated, etc. He has a point, but I’m interested to see if he mentions that the reason society benefits from such innovation is that it is the type of innovation that is writ large over society. The space race was innovation on a large scale. It was the driving force (and in some ways the weapon) of the Cold War. Hence everyone in society was invested in this large-scale innovation. When everyone has a stake in the innovation of a country, as they did in the space race, there is a collective agreement on the benefit of innovation. It doesn’t matter what it is, so long as it is innovative.

Innovation on a small scale, however, is a different story. Innovation from below, as it could be called, takes an entirely different mindset. In business, it sometimes comes out of necessity – innovate or go bust. This is similar to the innovation of the space race. But micro-innovation (let’s settle on this term, shall we?) also comes about unforced. Sometimes a clever person, whose business is more or less fine the way it is, recognizes the benefits of an innovative idea and, to use the official business-speak term, capitalizes on it. More often than not, however, micro-innovation is passed over. Allow me to offer an example.

I used to work for a company in the US. This company had a program to welcome new employees into the fold. The program was called something like Welcome, New Employees, Into The Fold™. In this program, new employees were asked to read a chapter from a best-selling business how-to manifesto. The chapter talked about the real-life innovative leader who built an innovative tech company on innovation. Naturally, I assumed the take-away message was supposed to be “Innovation. We like. So should you.” I was informed later that it was “Do as we say, not as our favorite manifesto chapter tells you.” Makes you wonder why they bothered to waste the paper, but then again, that’s what manifestos are all about.

Shortly after I left this company (on good terms), I offered them an innovative way to increase their sales. I would use my training in linguistics to study their marketing campaigns and I would do it for free. The benefit for me was that my research would allow me to write my master’s thesis. It was a win-win. (I’m intentionally being vague about my master’s thesis since, so far as I can tell, it really is innovative. It’s at least the kind of research that could launch a career in either business or academia, depending on the results. Interested parties can feel free to contact me.)

And yet, like most micro-innovation cases, my idea was denied. It’s hard to believe, I know, but there are some obvious answers as to why. First, business professionals are a cautious, even cowardly, bunch. If you told them the chances that they would be killed in a car accident on the way to work, they would find a reason to work from home. So when faced with the opportunity to increase their sales by doing nothing but allowing a postgrad student to analyze their marketing texts, they find ways to say no, to brush it off, or to disregard it. Creating one’s own misfortune is not unheard of, even in the business world.

A second reason why my idea was turned down has to do with the “business as usual” mindset. My former employer makes millions each year. They fear change because they assume it’s going to be change for the worse. More importantly, while there’s no telling what kind of profit my innovation could have brought them, it’s safe to assume it would have been in the thousands of dollars. That’s chump change for mid-sized American companies. Why should they take on my idea when business as usual is already bringing in millions?

Finally, and most relevant to the macro-innovation that Mr. Tyson talked about, is the fact that micro-innovation is not established in the US. There is no culture of post graduate students doing research for companies to complete their degree. There is a culture of small innovations making big waves, but these are all either start-ups or internal happenings at large companies (like 3M). There is no zeitgeist of micro-innovation, no pressure from society to create it day after day, and no agreement that it brings untold benefits to those who seize it. Yet it comes up all the time.

This last notion is related to the type of micro-innovation that comes out of necessity, because companies can live or die on it. Most people with an innovative idea have a very good reason for why it will be successful. If they are declined by one company, they are not likely to give up on the idea. They are simply going to move on to the next company. And that spells danger for the companies who passed on the innovation. So alongside the companies living by the “be innovative or die” motto, there are also those remembered by the “we died because we were not innovative” epitaph.

And nobody writes chapters in business books about those companies.


* Except maybe this guy.