
As a dictionary of English vocabulary and phrases, the American English Compendium by Marv Rubinstein is satisfactory: at 500 pages, it covers a lot of ground. As a book of American English or Americanisms, however, it is not what it seems. A brief glance at almost any page will make you question whether the entries really are words or phrases exclusive to American English, and a comparison with another source will most likely show that they are not. As a commentary on language, the book is terrible.

American English Compendium

Cover of American English Compendium by Marv Rubinstein. Published by Rowman & Littlefield. Cover design by Neil Cotterill.


The problems start on the first page of Chapter 1. The author defends the use of the term American English by proclaiming it is better than British English:

Dynamic. Versatile. Imaginative. Capable of capturing fine nuances. All these terms can truthfully be used to describe the American language. “Don’t you mean the ‘English language’?” some readers may ask. No, I mean the American language. Over many years, American English has vastly expanded and changed, a transmutation that has left it only loosely connected to its mother tongue, British English. (p. 3)

Although no one would (or should) argue that American English is a term that needs to be defended, the imaginary readers in this passage come off as more knowledgeable about language than the author. Are we really to believe American English is the only variation of English that is “dynamic” or “imaginative” or “capable of capturing fine nuances”? The problem gets compounded when the author recognizes the influence of American English in England, but seems to suggest that the reverse is not happening:

[W]hile there are numerous localisms [in countries where English is the primary language], more and more the terminology, idioms, slang, and colloquialisms smack of American English. Even in England this is slowly but surely happening. (p. 3)

And it only gets stranger from there. On the next page we are told:

Things have changed so much, and the use of American English in international communications has grown so much, one can now safely say that most English speakers use (to a greater or lesser degree) Americanized English – that is, the American language. And rightly so. The American language is so much richer and more adventurous. British English never stood a chance. (p. 4, emphasis mine)

Excuse me, Mr. Rubinstein, but H. G. Wells, J. K. Rowling, Grant Morrison, Agatha Christie and a thousand other British writers would like a word.

After this “proof” that ‘Murican English is better than British English, readers are given a “microcosm of what is happening” (p. 4) in the world. Rubinstein relates a story from an article by New York Times columnist and economist Thomas Friedman about how a senior Moroccan official is sending his kids to an American school even though he was educated in a French school. Rubinstein uses this story to claim that

There are now several American schools in Casablanca, each with a long waiting list. In addition, English (primarily American English) courses are springing up all over that country. If this is happening in Morocco, a country with long-lasting French connections and traditions, it is undoubtedly happening everywhere. The American language is becoming ubiquitous. (p. 5)

But it needs to be noted that Friedman does not claim that these English-language schools supposedly popping up all over Casablanca are teaching American English. Nor are readers given any proof that Casablanca is an example of what is happening around the world. I am very hesitant to believe it is. While it’s a cute story, this kind of claim needs to be backed up with evidence. How do we know whether the English being taught in these schools is strictly British or American or some variation of English as an international language? We have to take Rubinstein’s word for it, but as we have seen with his dismissal of British English, he is not to be trusted when it comes to linguistic commentary.

Further down the page, in a section titled The Richness of the American Language, Rubinstein claims that “much of the richness of the American language lies in the fact that it has absorbed words and expressions from at least fifty other languages.” (p. 5) He lists some examples, but completely fails to acknowledge the fact that many of them, such as brogue and orangutan and typhoon, were originally borrowed into British English and then used by Americans.

Rubinstein then presumes readers will ask how the American language differs from other languages, which obviously also use foreign words and phrases. But the answer given is just as confused as the question. The author states that “there is no question that American English has been like a sponge absorbing and modifying words from many other languages” (p. 7) without realizing (or reporting) that this is true of English in general, not American English in particular. Indeed, it is true of languages in general, although English does appear to be particularly greedy when it comes to borrowing words from other languages.

Later, there is a fairly reasonable, but short and inconclusive, discussion of “Black English” (African American Vernacular English). The section unfortunately ends with this quote: “Educated African Americans, of course, use standard American English” (pp. 11–12). Well, good for them.
Things get really bonkers in the section on compounding, which includes this howler:

Compound words exist in almost all languages, but never anywhere near the extent that they do in American English. […] during the last few decades, compounding has reached epidemic proportions. The vast majority of compound words are of relatively recent origin languagewise (p. 15)

This is nonsense. Does the author know how any other languages work? Finnish compounds words far more than English does; in fact, Finnish syntax demands it, whereas in English compounding is very often a matter of style. And how do we know that the “vast majority” of compound words are not old? Let’s say “the last few decades” goes back to 1960. Do you really think words such as outcast, outdoors, outlook, output, overcome, overdose, overdue, oversee, oddball, goofball, downfall, and downhill (all examples supplied by the author) were compounded after 1960?

Here are some other WTFs in this book along with the thoughts I had after reading them:

In general [the English speakers of Australia, Canada, Guyana, India, Ireland, New Zealand, and South Africa] all understand each other, but, as you have seen in the previous chapter on American and British English, there are substantial differences. The same can be said of the English used in the other countries listed above. With a few exceptions, Canadian English consists of a blending of American and British English, but the other English-speaking countries have all developed their own unique and distinctive expressions (including slang and colloquialisms). (p. 267)

Hahahahaha! Fuck you, Canada! Get your own expressions, eh!

 

English is an Anglo-Saxon language with roots in Latin, the Romance Languages, and German. [No.] This means that most, if not all, English words are variations of foreign words, and such words have legitimately entered the language. (p. 281)

WHAT THE FUCK DOES THIS MEAN?!

 

The Oxford English Dictionary prides itself on keeping up to date, and it does pretty well (but not perfect) with including new words in its latest editions. Unfortunately, libraries with limited budgets these days do not always have the most recent revisions. Your best bet for researching neologisms is probably the Internet – for example, Google. (p. 403)

Because the OED is the only dictionary in the world. I’ve said it before and I’ll say it again: In linguistics research there is only the OED and Google. It’s a wonder we get anything done.

 

Chairman has become chairperson and has been further reduced to chair. But many gender-based terms remain unresolved. While, for example, policeman easily becomes police officer, other words and phrases resist change. One almost invariably hears expressions such as “Everyone to their own taste. [What? Who invariably hears this?] Grammatically incorrect [Nope!] but why risk offending potential female customers of advertised products? [Bitches be trippin’, amiright?] However, when a woman mans the controls of an aircraft, should the term be changed even though it denotes action, not identity? What should we now call a “manhole cover”? [Serious questions, you guys.] Note that we no longer have actresses; they all insist on being called actors. [How dare they?!] (p. 13)

Based on the claims about language alone, I would not recommend this book. I don’t know how someone writes a book about language and gets so much wrong. The word and phrase entries may be useful, but any online dictionary will have most if not all of them. Go there instead, or get a proper reference work from a respected dictionary publisher.

As the authors state in their foreword (pp. xii-xiii):

This book represents an attempt to defang the slang and crack the code. In writing this, we tried to think back to when we were new to Washington and wishing, like wandering tourists lost in a foreign city, that we had a handy all-in-one-place phrasebook.

I would say they have largely accomplished this. Dog Whistles, Walk-backs & Washington Handshakes is an up-to-date glossary of American political terms. I think that people interested in language and politics would find this book enjoyable for a few reasons. First, the book is well referenced (always a plus). The authors are not trying to discover the first known use of some political code word, but rather to show that politicians from all sides use this type of language and that you are likely to come across it in tomorrow’s newspaper or news broadcast. So their references mostly come from very recent sources, which is refreshing. The foreword and introduction make nuanced points about language and slang, and the authors back up these points with references to reputable sources.

Dog Whistles has appeal for people who follow American politics, since although they are likely to already know some of the terms in here, they will probably find some they don’t know or haven’t thought about. That’s because the book isn’t just made up of eye-catching terms such as Overton window and San Francisco values. Readers will appreciate the care that the authors have taken to explain each term. For example, here is the entry for the seemingly innocent term bold (p. 40):

Bold: A politician’s most common description of their own or their party’s proposals. It manages to be a punchy, optimistic-sounding break with conventional thinking and deliberately vague all at once.

Image copyright ForeEdge and University Press of New England


But the book is not just for language and politics heads. In the introduction (p. ix), the authors recognize the problem that people who do not closely follow politics might have when reading about or listening to their representatives:

For most of the population – let’s call them “regular, normal people” – time spent listening to legislators, operatives, and journalists thrash over public policy on cable or a website can often result in something close to a fugue state, induced by the repeated use of words and phrases that have little if any connection to life as it is lived on planet Earth.

Later (p. 129), the authors explain the importance of their glossary by saying that:

Knowing the meanings of such specialized political terms can help cut through spin meant to obscure what’s really going on in a campaign. When politicians use the cliché, “The only poll that counts is the one on Election Day,” they really mean, “I wouldn’t win if the election were held today.”

I am all for educating people about the intricacies of language, especially when that means explaining the ways that politicians use words and phrases to trick people.
I am, however, not sure that all of the terms deserve a place in this book. A glossary should include words that are at least nominally used by a group of people. But in their attempt to be current, the authors have included phrases such as hardship porn. This is a phrase coined by Frank Bruni of the New York Times, and it returns only two hits on Google News – the July 2015 article in which Bruni coined it and an October 2015 book review in the Missoula Independent. However influential Frank Bruni is, this term has not caught on yet.

This is really nitpicking though (something us academics excel at, thankyouverymuch). I found this book really enjoyable. If you like politics, language, or both, you will probably enjoy it too. You can check out the interactive website here: http://dogwhistlebook.com/ and even suggest your own term.

 

 

References

McCutcheon, Chuck and David Mark. 2014. Dog Whistles, Walk-backs & Washington Handshakes: Decoding the Jargon, Slang, and Bluster of American Political Speech. New Hampshire: ForeEdge.

I assumed that the term dynamic duo must come out of comics. Comic book creators have long been coming up with alliterative epithets for their characters. Superman is the “Big Blue Boy Scout”, Supergirl is the “Maid of Might”, Batman is the “Caped Crusader”, Silver Surfer is the “Sentinel of the Spaceways”, Flash is the “Scarlet Speedster”, Wonder Woman is the “Amazing Amazon”, Spider-man is spectacular, the Avengers are (also) amazing, and the Four are fantastic. You get the point.

But having been around the etymological block a few times, I know that everything in language is older than you think it is. So I thought the phrase dynamic duo might come out of some earlier work. Perhaps it was reappropriated by comic book authors to describe the Dark Knight and the Boy Wonder.

Various dictionaries, however, including NTC’s Dictionary of American Slang and Colloquial Expressions by Richard Spears and the Dictionary of American Slang by Kipfer and Chapman, claim that dynamic duo comes from the famous 1960s Batman TV series starring Adam West and Burt Ward. The Batman TV series premiered in 1966, 25 years after Robin was created. I found it hard to believe that it would have taken writers that long to come up with dynamic duo, so I decided to dig a little deeper.

Using Google Books, I found a volume of the Michigan Alumnus which includes the phrase. This was written in 1954:

[Image: excerpt from The Michigan Alumnus (1954), via Google Books, showing the phrase “dynamic duo”]

So dynamic duo predates the Batman TV show. But when was it first applied to Batman and Robin? For that we have to dive into the comics.

On October 31, 1940, DC Comics published a story called “The Case of the Joker’s Crime Circus” in BATMAN #4. The story was written by Bill Finger and featured Bob Kane, George Roussos and Jerry Robinson on art. On page 7, we see the first time Batman and Robin are referred to as the dynamic duo:

[Image: panel from BATMAN #4 (1940), page 7, in which Batman and Robin are first called the “dynamic duo”]

Of course, the term has moved out of the comics and can be applied to any “very special pair of people or things” (Spears 2000). And it may still be true that the Batman TV show is responsible for popularizing the term. But it warms my comic book loving heart to know that Bill Finger came up with dynamic duo.

Holy exciting etymology, Batman! It’s time to update our Bat-tionaries!

 

 

 

References

Spears, Richard. 2000. NTC’s Dictionary of American Slang and Colloquial Expressions, 3rd Edition. New York: NTC Publishing Group. http://www.goodreads.com/book/show/2477414.NTC_s_Dictionary_of_American_Slang_and_Colloquial_Expressions

Kipfer, Barbara Ann and Robert L. Chapman. 2007. Dictionary of American Slang, 4th Edition. HarperCollins Publishers. http://www.goodreads.com/book/show/2417989.Dictionary_of_American_Slang

In two recent papers, one by Kloumann et al. (2012) and the other by Dodds et al. (2015), a group of researchers created a corpus to study the positivity of the English language. I looked at some of the problems with those papers here and here. For this post, however, I want to focus on one of the registers in the authors’ corpus – song lyrics. There is a problem with taking language such as lyrics out of context and then judging them based on the positivity of the words in the songs. But first I need to briefly explain what the authors did.

In the two papers, the authors created a corpus based on books, New York Times articles, tweets and song lyrics. They then created a list of the 10,000 most common word types in their corpus and had voluntary respondents rate how positive or negative they felt the words were. They used this information to claim that human language overall (and English) is emotionally positive.
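The scoring procedure described above can be sketched in a few lines. To be clear, this is a minimal reimplementation of the general idea, not the authors’ actual code, and the `SCORES` dictionary here is a toy stand-in for their crowd-rated list of 10,000 word types:

```python
import re

# Toy stand-in for the crowd-sourced happiness ratings (1 = sad, 9 = happy).
# The real list covers the 10,000 most frequent word types in the corpus.
SCORES = {"love": 8.4, "happy": 8.3, "the": 4.98, "never": 3.3, "harm": 2.8}

def avg_positivity(text, scores):
    """Average the happiness score over all rated word tokens in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    rated = [scores[t] for t in tokens if t in scores]
    return sum(rated) / len(rated) if rated else None

print(avg_positivity("Never mean to do you harm", SCORES))  # → 3.05
```

Note that any token missing from the rated list simply drops out of the average, which is exactly why context and coverage matter so much for the conclusions drawn from it.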

That’s the idea anyway, but song lyrics exist as part of a multimodal genre. There are lyrics and there is music. These two modalities operate simultaneously to convey a message or feeling. This is important for a couple of reasons. First, the other registers in the corpus do not work like song lyrics. Books and news articles are black text on a white background with few or no pictures. And tweets are not always multimodal – it’s possible to include a short video or picture in a tweet, but it’s not necessary (Side note: I would like to know how many tweets in the corpus included pictures and/or videos, but the authors do not report that information).

So if we were to do a linguistic analysis of an artist or a genre of music, we would create a corpus of the lyrics of that artist or genre. We could then study the topics that are brought up in the lyrics, or even common words and expressions (lexical bundles or n-grams) that are used by the artist(s). We could perhaps even look at how the writing style of the artist(s) changed over time.

But if we wanted to perform an analysis of the positivity of the songs in our corpus, we would need to incorporate the music. The lyrics and music go hand in hand – without the music, you only have poetry. To see what I mean, take a look at the following word list. Do the words in this list look particularly positive or negative to you?

a, ain’t, all, and, as, away, back, bitch, body, breast, but, butterfly, can, can’t, caught, chasing, comin’, days, did, didn’t, do, dog, down, everytime, fairy, fantasy, for, ghost, guess, had, hand, harm, her, his, i, i’m, if, in, it, looked, lovely, jar, makes, mason, life, live, maybe, me, mean, momma’s, more, my, need, nest, never, no, of, on, outside, pet, pin, real, return, robin, scent, she, sighing, slips, smell, sorry, that, the, then, think, to, today, told, up, want, wash, went, what, when, with, withered, woke, would, yesterday, you, you’re, your

If we combine these words as Rivers Cuomo did in his song “Butterfly”, they average out to a positive score of 5.23. Here are the lyrics to that song.

Yesterday I went outside
With my momma’s mason jar
Caught a lovely Butterfly
When I woke up today
And looked in on my fairy pet
She had withered all away
No more sighing in her breast

I’m sorry for what I did
I did what my body told me to
I didn’t mean to do you harm
But everytime I pin down what I think I want
it slips away – the ghost slips away

I smell you on my hand for days
I can’t wash away your scent
If I’m a dog then you’re a bitch
I guess you’re as real as me
Maybe I can live with that
Maybe I need fantasy
A life of chasing Butterfly

I’m sorry for what I did
I did what my body told me to
I didn’t mean to do you harm
But everytime I pin down what I think I want
it slips away – the ghost slips away

I told you I would return
When the robin makes his nest
But I ain’t never comin’ back
I’m sorry, I’m sorry, I’m sorry

Does this look like a positive text to you? Does it look moderate, neither positive nor negative? I would say not. It seems negative to me – a sad song based on the opera Madame Butterfly, in which a man leaves his wife because he never really cared for her. When we take the music into consideration as well, the non-positivity of this song is clear.


Let’s take a look at another list. How does this one look?

above, absence, alive, an, animal, apart, are, away, become, brings, broke, can, closer, complicate, desecrate, down, drink, else, every, everything, existence, faith, feel, flawed, for, forest, from, fuck, get, god, got, hate, have, help, hive, honey, i, i’ve, inside, insides, is, isolation, it, it’s, knees, let, like, make, me, my, myself, no, of, off, only, penetrate, perfect, reason, scraped, sell, sex, smell, somebody, soul, stay, stomach, tear, that, the, thing, through, to, trees, violate, want, whole, within, works, you, your

Based on the ratings in the two papers, this list is slightly more positive, with an average happiness rating of 5.46. When the words were used by Trent Reznor, however, they expressed “a deeply personal meditation on self-hatred” (Huxley 1997: 179). Here are the lyrics for “Closer” by Nine Inch Nails:

You let me violate you
You let me desecrate you
You let me penetrate you
You let me complicate you

Help me
I broke apart my insides
Help me
I’ve got no soul to sell
Help me
The only thing that works for me
Help me get away from myself

I want to fuck you like an animal
I want to feel you from the inside
I want to fuck you like an animal
My whole existence is flawed
You get me closer to god

You can have my isolation
You can have the hate that it brings
You can have my absence of faith
You can have my everything

Help me
Tear down my reason
Help me
It’s your sex I can smell
Help me
You make me perfect
Help me become somebody else

I want to fuck you like an animal
I want to feel you from the inside
I want to fuck you like an animal
My whole existence is flawed
You get me closer to god

Through every forest above the trees
Within my stomach scraped off my knees
I drink the honey inside your hive
You are the reason I stay alive

As Reznor (the songwriter and lyricist) sees it, “Closer” is “supernegative and superhateful”, and the song’s message is “I am a piece of shit and I am declaring that” (Huxley 1997: 179). You can see what he means when you listen to the song (minor NSFW warning for the imagery in the video). [1]

Nine Inch Nails: Closer (Uncensored) (1994) from Nine Inch Nails on Vimeo.

Then again, meaning is relative. Tommy Lee has said that “Closer” is “the all-time fuck song. Those are pure fuck beats – Trent Reznor knew what he was doing. You can fuck to it, you can dance to it and you can break shit to it.” And Tommy Lee should know. He played in the studio for NIИ and he is arguably more famous for fucking than he is for playing drums.

Nevertheless, the problem with the positivity rating of songs keeps popping up. The song “Mad World” was a pop hit for Tears for Fears, then reinterpreted in a more somber tone by Gary Jules and Michael Andrews. But it is rated a positive 5.39. Gotye’s global hit about failed relationships, “Somebody That I Used To Know”, is rated a positive 5.33. The anti-war protest ballad “Eve of Destruction”, made famous by Barry McGuire, rates just barely on the negative side at 4.93. I guess there should have been more depressing references besides bodies floating, funeral processions, and race riots if the songwriter really wanted to drive home the point.

For the song “Milkshake”, Kelis has said that it “means whatever people want it to” and that the milkshake referred to in the song is “the thing that makes women special […] what gives us our confidence and what makes us exciting”. It is rated less positive than “Mad World” at 5.24. That makes me want to doubt the authors’ commitment to Sparkle Motion.

Another upbeat jam that the kids listen to is the Ramones’ “Blitzkrieg Bop”. This is the energetic and exciting anthem of punk rock. It’s rated a negative 4.82. I wonder if we should even look at “Pinhead”.

Then there’s the old American folk classic “Where did you sleep last night”, which Nirvana performed a haunting version of on their album MTV Unplugged in New York. The song (also known as “In the Pines” and “Black Girl”) was first made famous by Lead Belly and it includes such catchy lines as

My girl, my girl, don’t lie to me
Tell me where did you sleep last night
In the pines, in the pines
Where the sun don’t ever shine
I would shiver the whole night through

And

Her husband was a hard working man
Just about a mile from here
His head was found in a driving wheel
But his body never was found

This song is rated a positive 5.24. I don’t know about you, but neither the Lead Belly version nor the Nirvana cover gives me that impression.

Even Pharrell Williams’ hit song “Happy” rates only 5.70. That’s a song so goddamn positive that it’s called “Happy”. But it’s only 0.03 points more positive than Eric Clapton’s “Tears in Heaven”, which is a song about the death of Clapton’s four-year-old son. Harry Chapin’s “Cat’s in the Cradle” was voted the fourth saddest song of all time by readers of Rolling Stone but it’s rated 5.55, while Willie Nelson’s “Always on My Mind” rates 5.63. So they are both sadder than “Happy”, but not by much. How many lyrics must a man research, before his corpus is questioned?

Corpus linguistics is not just gathering a bunch of words and calling it a day. The fact that the same “word” can have several meanings (known as polysemy) is a major feature of language. So before you ask people to rate a word’s positivity, you will want to make sure they at least know which meaning is being referred to. On top of that, words do not work in isolation. Spacing is an arbitrary construct in written language (remember that song lyrics are mostly heard, not read). The back in the Ramones’ lines “Piling in the back seat” and “Pulsating to the back beat” is not about a body part. The Weezer song “Butterfly” uses the word mason, but as part of the compound noun mason jar, not as a reference to a bricklayer. Words are also conditioned by the words around them. A word like eve may normally be considered positive because it brings to mind Christmas Eve and New Year’s Eve, but when it is used in a phrase like “the eve of destruction” our judgment of it is likely to change. In the corpus under discussion here, eat is rated 7.04, but that rating does not consider what is being eaten and so cannot account for lines like “Eat your next door neighbor” (from “Eve of Destruction”).
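The problem is easy to demonstrate in code: a context-free score table sees only the orthographic form, so even a grim line can average out “positive”. In this sketch, only the rating for eat (7.04) comes from the corpus discussed here; the other scores are invented stand-ins on the same 1–9 scale:

```python
# Only `eat` (7.04) is taken from the ratings discussed in the text;
# the rest are invented stand-ins for the sake of the example.
SCORES = {"eat": 7.04, "your": 5.5, "next": 5.7, "door": 5.2, "neighbor": 6.0}

def bag_of_words_score(line):
    """Score a lyric line with no knowledge of context or word sense."""
    rated = [SCORES[t] for t in line.lower().split() if t in SCORES]
    return sum(rated) / len(rated)

# A line about cannibalism still lands on the "positive" side of 5,
# because each word form is rated in isolation.
print(round(bag_of_words_score("Eat your next door neighbor"), 3))  # → 5.888
```

Whatever the individual numbers, the structural point holds: no bag-of-words average can register that the sentence describes eating a neighbor.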

We could go on and on like this. The point is that the authors of both papers didn’t do enough work with their data before drawing conclusions. And they didn’t consider that some of the language in their corpus is part of a multimodal genre in which other things affect the meaning of the language used (though technically no language use is devoid of context). Whether or not the lyrics of a song are “positive” or “negative”, the style of singing and the music they are sung to will strongly affect a person’s interpretation of the lyrics’ meaning and emotion. That’s just the way music works.

This doesn’t mean that any of these songs are positive or negative based on their rating; it means that the system the authors of the two papers used to rate the positivity or negativity of language is flawed. I would have guessed that a rating system which takes words out of context is fundamentally flawed, and the ratings of the songs in this post are a good way to visualize that. The fact that the two papers were published in reputable journals and picked up by reputable publications, such as the Atlantic and the New York Times, only adds insult to injury for the field of linguistics.

You can see a table of the songs I looked at for this post below, and a spreadsheet with the ratings of the lyrics is here. I calculated the positivity ratings by averaging the scores for the word tokens in each song, rather than the types.
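For anyone replicating the table, the token/type distinction matters because choruses repeat words: a token average counts every occurrence, while a type average counts each distinct word once. A small sketch with invented scores (not the papers’ actual ratings):

```python
# Invented scores on the 1-9 happiness scale, for illustration only.
SCORES = {"sorry": 2.5, "butterfly": 7.0, "lovely": 8.0}

def avg(xs):
    return sum(xs) / len(xs)

lyrics = "sorry sorry sorry sorry butterfly lovely".split()

token_avg = avg([SCORES[w] for w in lyrics])        # every occurrence counts
type_avg = avg([SCORES[w] for w in set(lyrics)])    # each distinct word once

# The repeated "sorry" drags the token average well below the type average.
print(round(token_avg, 2), round(type_avg, 2))  # → 4.17 5.83
```

A chorus-heavy song can therefore score quite differently depending on which average you choose, which is why the method is worth stating explicitly.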

(By the way, Tupac is rated 4.76. It’s a good thing his attitude was fuck it ‘cause motherfuckers love it.)

Song – Positivity score (1–9)
“Happy” by Pharrell Williams 5.70
“Tears in Heaven” by Eric Clapton 5.67
“You Were Always on My Mind” by Willie Nelson 5.63
“Cat’s in the Cradle” by Harry Chapin 5.55
“Closer” by NIN 5.46
“Mad World” by Gary Jules and Michael Andrews 5.39
“Somebody that I Used to Know” by Gotye feat. Kimbra 5.33
“Waitin’ for a Superman” by The Flaming Lips 5.28
“Milkshake” by Kelis 5.24
“Where Did You Sleep Last Night” by Nirvana 5.24
“Butterfly” by Weezer 5.23
“Eve of Destruction” by Barry McGuire 4.93
“Blitzkrieg Bop” by The Ramones 4.82

 

Footnotes

[1] Also, be aware that listening to these songs while watching their music videos has an effect on the way you interpret them. (Click here to go back up.)

References

Kloumann, Isabel M., Christopher M. Danforth, Kameron Decker Harris, Catherine A. Bliss, and Peter Sheridan Dodds. 2012. “Positivity of the English Language”. PLoS ONE. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0029484

Dodds, Peter Sheridan, Eric M. Clark, Suma Desu, Morgan R. Frank, Andrew J. Reagan, Jake Ryland Williams, Lewis Mitchell, Kameron Decker Harris, Isabel M. Kloumann, James P. Bagrow, Karine Megerdoomian, Matthew T. McMahon, Brian F. Tivnan, and Christopher M. Danforth. 2015. “Human language reveals a universal positivity bias”. PNAS 112:8. http://www.pnas.org/content/112/8/2389

Huxley, Martin. 1997. Nine Inch Nails. New York: St. Martin’s Griffin.

Last week I wrote a post called “If you’re not a linguist, don’t do linguistics”. This got shared around Twitter quite a bit and made it to the front page of r/linguistics, so a lot of people saw it. Pretty much everyone had good insight on the topic and it generated some great discussion. I thought it would be good to write a follow-up to flesh out my main concerns in a more serious manner (this time sans emoticons!) and to address the concerns some people had with my reasoning.

The paper in question is by Dodds et al. (2015) and it is called “Human language reveals a universal positivity bias”. The certainty of that title is important since I’m going to try to show in this post that the authors make too many assumptions to reliably make any claims about all human language. I’m going to focus on the English data because that is what I am familiar with. But if anyone who is familiar with the data in other languages would like to weigh in, please do so in the comments.

The first assumption made by the authors is that it is possible to make universal claims about language using only written data. This is not a minor issue, and it is an assumption no scholar should make. The differences between spoken and written language are many and major (Linell 2005). Dealing with spoken data is admittedly difficult – it takes much more time and effort to collect and analyze than written data – but even in highly literate societies, the majority of language use is spoken, and spoken language does not work like written language. Any research that makes claims about all human language will therefore have to include some form of spoken data. But the data set that the authors draw from (called their corpus) is made from tweets, song lyrics, New York Times articles and the Google Books project. Tweets and song lyrics, let alone news articles or books, do not mimic spoken language in an accurate way. These registers may include the same words as human speech, but certainly not in the same proportions. Written language does not include false starts, nor does it include repetition or elision in anywhere near the way that spoken language does. Anyone who has done any transcription work will tell you this.

The next assumption made by the authors is that their data is representative of all human language. Representativeness is a major issue in corpus linguistics. When linguists want to investigate a register or variety of language, they build a corpus which is representative of that register or variety by taking a large enough and balanced sample of texts from it. What is important here, however, is that most linguists do not have a problem with a set of data representing a larger register – so long as that larger register isn’t all human language. For example, if we wanted to research modern English journalism (quite a large register), we would build a corpus of journalism texts from English-speaking countries and we would be careful to include various kinds of journalism – op-eds, sports reporting, financial news, etc. We would not build a corpus of articles from the Podunk Free Press and make claims about all English journalism. But representativeness is a tricky issue. The larger the language variety you are trying to investigate, the more data from that variety you will need in your corpus. Baker (2010: 7) notes that a corpus analysis of one novel is “unlikely to be representative of all language use, or all novels, or even the general writing style of that author”. The English sub-corpora in Dodds et al. exist somewhere in between a fully non-representative corpus of English (one novel) and a fully representative corpus of English (all human speech and writing in English). In fact, in another paper (Dodds et al. 2011), the representativeness of the Twitter corpus is explained as follows: “First, in terms of basic sampling, tweets allocated to data feeds by Twitter were effectively chosen at random from all tweets. Our observation of this apparent absence of bias in no way dismisses the far stronger issue that the full collection of tweets is a non-uniform subsampling of all utterances made by a non-representative subpopulation of all people. While the demographic profile of individual Twitter users does not match that of, say, the United States, where the majority of users currently reside, our interest is in finding suggestions of universal patterns.” What I think that doozy of a sentence in the middle is saying is that the tweets come from an unrepresentative sample of the population, but that the language in them may be suggestive of universal English usage. Does that mean we can assume that the English sub-corpora (specifically the Twitter data) in Dodds et al. are representative of all human communication in English?

Another assumption the authors make is that they have sampled their data correctly. The decisions on what texts will be sampled, as Tognini-Bonelli (2001: 59) points out, “will have a direct effect on the insights yielded by the corpus”. Following Biber (see Tognini-Bonelli 2001: 59), linguists can classify texts into various channels to ensure that their sample texts will be representative of a certain population of people and/or variety of language. They can start with general “channels” of the language (written texts, spoken data, scripted data, electronic communication) and move on to whether the language is private or published. Linguists can then sample language based on what type of person created it (their age, sex, gender, socioeconomic status, etc.). For example, if we made a corpus of the English articles on Wikipedia, we would have a massive amount of linguistic data. Literally billions of words. But 87% of it will have been written by men and 59% of it will have been written by people under the age of 40. Would you feel comfortable making claims about all human language based on that data? How about just all English language encyclopedias?
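To give a rough idea of what sampling by channel looks like in practice, here is a minimal sketch. The records, metadata fields, and channel names are all invented for illustration – real corpus compilation involves far more than this:

```python
import random
from collections import defaultdict

# Hypothetical text records: in a real corpus, each text would carry
# metadata about its channel and its author.
texts = [
    {"id": 1, "channel": "spoken",     "author_age": 34, "words": "..."},
    {"id": 2, "channel": "written",    "author_age": 62, "words": "..."},
    {"id": 3, "channel": "spoken",     "author_age": 25, "words": "..."},
    {"id": 4, "channel": "electronic", "author_age": 19, "words": "..."},
    {"id": 5, "channel": "written",    "author_age": 41, "words": "..."},
    {"id": 6, "channel": "electronic", "author_age": 55, "words": "..."},
]

def stratified_sample(records, key, n_per_stratum, seed=0):
    """Take an equal-sized random sample from each stratum (e.g. each channel)."""
    random.seed(seed)
    strata = defaultdict(list)
    for r in records:
        strata[key(r)].append(r)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, min(n_per_stratum, len(group))))
    return sample

balanced = stratified_sample(texts, key=lambda r: r["channel"], n_per_stratum=1)
print(sorted({r["channel"] for r in balanced}))  # one text per channel
```

The point of the sketch is only that balance across channels is a deliberate design decision – it does not happen by itself when you scrape whatever text is easiest to get.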

The next assumption made by the authors is that the relative positive or negative nature of the words in a text is indicative of how positive that text is. But words can have various and sometimes even opposing meanings. Texts are also likely to contain words that are written the same but have different meanings. For example, the word fine in the Dodds et al. corpus, like the rest of the words in the corpus, is just a four-letter word – free of context and naked as a jaybird. Is it an adjective that means “good, acceptable, or satisfactory”, which Merriam-Webster says is sometimes “used in an ironic way to refer to things that are not good or acceptable”? Or does it refer to that little piece of paper that the Philadelphia Parking Authority is so (in)famous for? We don’t know. All we know is that it has been rated 6.74 on the positivity scale by the respondents in Dodds et al. Can we assume that all the uses of fine in the New York Times are that positive? Can we assume that the use of fine on Twitter is always or even mostly non-ironic? On top of that, some of the most common words in English also tend to have the most meanings. There are 15 entries for get in the Macmillan Dictionary, including “kill/attack/punish” and “annoy”. Get in Dodds et al. is ranked on the positive side of things at 5.92. Can we assume that this rating carries across all the uses of get in the corpus? The authors found approximately 230 million unique “words” in their Twitter corpus (they counted all forms of a word separately, so banana, bananas, b-a-n-a-n-a-s! would be separate “words”; and they counted URLs as words). So they used the 50,000 most frequent ones to estimate the information content of texts. Can we assume that it is possible to make an accurate claim about how positive or negative a text is based on nothing but the words taken out of context?
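To make the problem concrete, here is a toy version of a context-free, bag-of-words scorer. The ratings for fine (6.74) and get (5.92) are the ones quoted above; the neutral 5.0 ratings for the remaining words are invented fillers:

```python
# A minimal bag-of-words scorer in the style of Dodds et al.: average the
# per-word ratings and ignore context entirely. Only "fine" (6.74) and
# "get" (5.92) are ratings quoted from the paper; the rest are invented.
ratings = {
    "fine": 6.74, "get": 5.92,
    "that's": 5.0, "by": 5.0, "me": 5.0,
    "i": 5.0, "a": 5.0, "parking": 5.0, "got": 5.0,
}

def text_positivity(text, ratings):
    words = [w for w in text.lower().split() if w in ratings]
    return sum(ratings[w] for w in words) / len(words)

# Two very different uses of "fine" come out almost identically:
print(round(text_positivity("That's fine by me", ratings), 3))     # 5.435
print(round(text_positivity("I got a parking fine", ratings), 3))  # 5.348
```

A parking ticket and a shrug of approval get nearly the same score, because the scorer has no way of knowing which fine it is looking at.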

Another assumption that the authors make is that the respondents in their survey can speak for the entire population. The authors used Amazon’s Mechanical Turk to crowdsource evaluations for the words in their sub-corpus. 60% of the American people on Mechanical Turk are women and 83.5% of them are white. The authors used respondents located in the United States and India. Can we assume that these respondents have opinions about the words in the corpus that are representative of the entire population of English speakers? Here are the ratings for the various ways of writing laughter in the authors’ corpus:

Laughter tokens Rating
ha 6
hah 5.92
haha 7.64
hahah 7.3
hahaha 7.94
hahahah 7.24
hahahaha 7.86
hahahahaha 7.7
hee 5.4
heh 5.98
hehe 6.48
hehehe 7.06

And here is a picture of a character expressing laughter:

Pictured: Good times. Credit: Batman #36, DC Comics, Scott Snyder (wr), Greg Capullo (p), Danny Miki (i), Fco Plascenia (c), Steve Wands (l).


Can we assume that the textual representation of laughter is always as positive as the respondents rated it? Can we assume that everyone or most people on Twitter use the various textual representations of laughter in a positive way – that they are laughing with someone and not at someone?
Finally, let’s compare some data. The good people at the Corpus of Contemporary American English (COCA) have created a word list based on their 450 million word corpus. The COCA corpus is specifically designed to be large and balanced (although the problem of dealing with spoken language might still remain). In addition, each word in their corpus is annotated for its part of speech, so they can recognize when a word like state is either a verb or a noun. This last point is something that Dodds et al. did not do – all forms of words that are spelled the same are collapsed into being one word. The compilers of the COCA list note that “there are more than 140 words that occur both as a noun and as a verb at least 10,000 times in COCA”. This is the type/token issue that came up in my previous post. A corpus that tags each word for its part of speech can tell the difference between different types of the “same” word (state as a verb vs. state as a noun), while an untagged corpus treats all occurrences of state as the same token.

If we compare the 10,000 most common words in Dodds et al. to a sample of the 10,000 most common words in COCA, we see that there are 121 words on the COCA list but not the Dodds et al. list (Here is the spreadsheet from the Dodds et al. paper with the COCA data – pnas.1411678112.sd01 – Dodds et al corpus with COCA). And that’s just a sample of the COCA list. How many more differences would there be if we compared the Dodds et al. list to the whole COCA list?
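The kind of list comparison described above boils down to a set difference. Here is a toy sketch, with tiny invented stand-ins for the two top-10,000 lists:

```python
# Invented stand-ins for the Dodds et al. and COCA "most common words"
# lists; the real comparison works the same way, just at 10,000 words.
dodds_top = {"the", "happy", "lol", "haha", "get", "fine"}
coca_top  = {"the", "happy", "get", "fine", "state", "thereby"}

only_in_coca  = sorted(coca_top - dodds_top)   # in COCA but not Dodds et al.
only_in_dodds = sorted(dodds_top - coca_top)   # in Dodds et al. but not COCA

print(only_in_coca)   # ['state', 'thereby']
print(only_in_dodds)  # ['haha', 'lol']
```

With the real lists, the 121 words in `only_in_coca` are exactly the mismatches reported above.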

To sum up, the authors use their corpus of tweets, New York Times articles, song lyrics and books and ask us to assume (1) that they can make universal claims about language despite using only written data; (2) that their data is representative of all human language despite including only four registers; (3) that they have sampled their data correctly despite not knowing what types of people created the linguistic data and only including certain channels of published language; (4) that the relative positive or negative nature of the words in a text is indicative of how positive that text is despite the obvious fact that words can be spelled the same and still have wildly different meanings; (5) that the respondents in their survey can speak for the entire population despite the English-speaking respondents being from only two subsets of two English-speaking populations (USA and India); and (6) that their list of the 10,000 most common words in their corpus (which they used to rate all human language) is representative despite being uncomfortably dissimilar to a well-balanced list that can differentiate between different types of words.

I don’t mean to sound like a Negative Nancy and I don’t want to trivialize the work of the authors in this paper. The corpus that they have built is nothing short of amazing. The amount of feedback they got from human respondents on language is also impressive (to say the least). I am merely trying to point out what we can and cannot say based on the data. It would be nice to make universal claims about all human language, but the fact is that even with millions and billions of data points, we still are not able to do so unless the data is representative and sampled correctly. That means it has to include spoken data (preferably a lot of it) and it has to be sampled from all socioeconomic human backgrounds.

Hat tip to the commenters on the last post and the redditors over at r/linguistics.

References

Dodds, Peter Sheridan, Eric M. Clark, Suma Desu, Morgan R. Frank, Andrew J. Reagan, Jake Ryland Williams, Lewis Mitchell, Kameron Decker Harris, Isabel M. Kloumann, James P. Bagrow, Karine Megerdoomian, Matthew T. McMahon, Brian F. Tivnan, and Christopher M. Danforth. 2015. “Human language reveals a universal positivity bias”. PNAS 112:8. http://www.pnas.org/content/112/8/2389

Dodds, Peter Sheridan, Kameron Decker Harris, Isabel M. Kloumann, Catherine A. Bliss, Christopher M. Danforth. 2011. “Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter”. PLOS ONE. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026752#abstract0

Baker, Paul. 2010. Sociolinguistics and Corpus Linguistics. Edinburgh: Edinburgh University Press. http://www.ling.lancs.ac.uk/staff/paulb/socioling.htm

Linell, Per. 2005. The Written Language Bias in Linguistics. Oxon: Routledge.

Mair, Christian. 2015. “Responses to Davies and Fuchs”. English World-Wide 36:1, 29–33. doi: 10.1075/eww.36.1.02mai

Tognini-Bonelli, Elena. 2001. Studies in Corpus Linguistics, Volume 6: Corpus Linguistics at Work. Amsterdam: John Benjamins. https://benjamins.com/#catalog/books/scl.6/main

A paper recently published in PNAS claims that human language tends to be positive. This was news enough to make the New York Times. But there are a few fundamental problems with the paper.

Linguistics – Now with less linguists!

The first thing you might notice about the paper is that it was written by mathematicians and computer scientists. I can understand the temptation to research and report on language. We all use it and we feel like masters of it. But that’s exactly what makes language tricky. People with only a high-school education in math don’t go around lecturing mathematicians, but the “authorities” on language are legion. My body has, like, a bunch of cells in it, but you don’t see me writing papers on biology. So it’s not surprising that the authors of this paper make some pretty basic errors in doing linguistic research. These should have been caught by the reviewers, but they weren’t. And the editor is a professor of demography and statistics, so that doesn’t help.

Too many claims and not enough data

The article is titled “Human language reveals a universal positivity bias” but what the authors really mean is “10 varieties of languages might reveal something about the human condition if we had more data”. That’s because the authors studied data in 10 different languages and they are making claims about ALL human languages. You can’t do that. There are some 6,000 languages in the world. If you’re going to make a claim about how every language works, you’re going to have to do a lot more than look at only 10 of them. Linguists know this, mathematicians apparently do not.

On top of that, the authors don’t even look at that much linguistic data. They extracted only the 5,000–10,000 most common words from larger corpora, so their combined corpora contain roughly 100,000 words in total – just the most common words from each sub-corpus. That is woefully inadequate. The Brown corpus contains 1 million words and it was made in the 1960s. In this paper, the authors claim that 20,000 words are representative of English – not 20,000 different words, but the 5,000 most common words in each of their four English sub-corpora. So 5,000 words each from Twitter, the New York Times, music lyrics, and the Google Books Project are supposed to represent the entire English language. This is shocking… to a linguist. Not so much to mathematicians, who don’t do linguistic research. It’s pretty frustrating, but this paper is a whole lotta ¯\_(ツ)_/¯.
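For what it’s worth, the frequency cut the authors describe – keep only the N most common word forms and discard everything else – is a one-liner. A toy sketch:

```python
from collections import Counter

# Toy corpus; in Dodds et al. the cut keeps only the top 5,000-10,000
# word forms from each sub-corpus and throws the rest away.
corpus = "the cat sat on the mat and the dog sat on the log".split()

def top_n_words(tokens, n):
    """Return the n most frequent word forms, most frequent first."""
    return [word for word, _ in Counter(tokens).most_common(n)]

print(top_n_words(corpus, 3))  # most frequent word forms first
```

Everything below the cutoff – which is most of the vocabulary of any language – simply vanishes from the analysis.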

To complete the trifecta of missing linguistic data, take a look at the sources for the English corpora:

Corpus Word count
English: Twitter 5,000
English: Google Books Project 5,000
English: The New York Times 5,000
English: Music lyrics 5,000

If you want to make a general claim about a language, you need to have data that is representative of that language. 5,000 words from Twitter, the New York Times, some books and music lyrics does not cut it. There are hundreds of other ways that language is used, such as recipes, academic writing, blogging, magazines, advertising, student essays, and stereo instructions. Linguists use the terms register and genre to refer to these and they know that you need more than four if you want your data to be representative of the language as a whole. I’m not even going to ask why the authors didn’t make use of publicly available corpora (such as COCA for English). Maybe they didn’t know about them. ¯\_(ツ)_/¯

Say what?

Speaking of registers, the overwhelmingly most common way that language is used is speech. Humans talking to other humans. No matter how many written texts you have, your analysis of ALL HUMAN LANGUAGE is not going to be complete until you address spoken language. But studying speech is difficult, especially if you’re not a linguist, so… ¯\_(ツ)_/¯

The fact of the matter is that you simply cannot make a sweeping claim about human language without studying human speech. It’s like doing math without the numeral 0. It doesn’t work. There are various ways to go about analyzing human speech, and there are ways of including spoken data in your materials in order to make claims about a language. But to not perform any kind of analysis of spoken data in an article about Language is incredibly disingenuous.

Same same but different

The authors claim their data set includes “global coverage of linguistically and culturally diverse languages” but that isn’t really true. Of the 10 languages that they analyze, 6 are Indo-European (English, Portuguese, Russian, German, Spanish, and French). Besides, what does “diverse” mean? We’re not told. And how are the cultures diverse? Because they speak different languages and/or because they live in different parts of the world? ¯\_(ツ)_/¯

The authors also had native speakers judge how positive, negative or neutral each word in their data set was. A word like “happy” would presumably be given the most positive rating, while a word like “frown” would be on the negative end of the scale, and a word like “the” would be rated neutral (neither positive nor negative). The people ranking the words, however, were “restricted to certain regions or countries”. So, not only are 14,000 words supposed to represent the entire Portuguese language, but residents of Brazil are rating them and therefore supposed to be representative of all Portuguese speakers. Or, perhaps that should be residents of Brazil with internet access.

[Update 2, March 2: In the following paragraph, I made some mistakes. I should not have said that ALL linguists believe that rating language is a notoriously poor way of doing an analysis. Obviously I can’t speak for all the linguists everywhere. That would be overgeneralizing, which is kind of what I’m criticizing the original paper for. Oops! :O I also shouldn’t have tied the rating used in the paper to grammaticality judgments. Grammaticality judgments have been shown to be very, very consistent for English sentences. I am not aware of whether people tend to be as consistent when rating words for how positive, negative, or neutral they are (but if you are, feel free to post in the comments). So I think the criticism still stands. Some say that 384 English-speaking participants are more than enough to rate a word’s positivity. If people rate words as consistently as they do sentences, then this is true. I’m not as convinced that people do that (until I see some research on it), but I’ll revoke my claim anyway. Either way, the point still stands – the positivity of language does not lie in the relative positive or negative nature of the words in a text (the next point I make below). Thanks to u/rusoved, u/EvM and u/noahpoah on reddit for pointing this out to me.] There are a couple of problems with this, but the main one is that having people rate language is a notoriously poor way of analyzing language (notorious to linguists, that is). If you ask ten people to rate the grammaticality of a sentence on a scale from 1 to 10, you will get ten different answers. I understand that the authors are taking averages of the answers their participants gave, but they only had 384 participants rating the English words. I wouldn’t call that representative of the language. The number of participants for the other languages goes down from there.

A loss for words

A further complication with this article is that it rates the relative positivity of words rather than sentences. Obviously words have meaning, but they are not really how humans communicate. Consider the sentence Happiness is a warm gun. Two of the words in that sentence are positive (happiness and warm), while only one is negative (gun). This does not mean it’s a positive sentence. That depends on your view of guns (and possibly Beatles songs). So it is potentially problematic to look at how positive or negative the words in a text are and then say that the text as a whole (or the corpus) presents a positive view of things.
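Here is that sentence run through the same kind of word-by-word scoring. The ratings are invented for illustration, on a 1–9-style scale in the spirit of the paper: two positive words, one negative word, neutral function words:

```python
# Invented per-word ratings (not the paper's actual values): happiness
# and warm positive, gun negative, function words neutral.
ratings = {"happiness": 8.0, "is": 5.0, "a": 5.0, "warm": 6.8, "gun": 3.6}

sentence = "happiness is a warm gun".split()
score = sum(ratings[w] for w in sentence) / len(sentence)
print(round(score, 2))  # 5.68 - comes out "positive", whatever the line actually means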

Lost in Google’s Translation

The last problem I’ll mention concerns the authors’ use of Google Translate. They write:

We now examine how individual words themselves vary in their average happiness score between languages. Owing to the scale of our corpora, we were compelled to use an online service, choosing Google Translate. For each of the 45 language pairs, we translated isolated words from one language to the other and then back. We then found all word pairs that (i) were translationally stable, meaning the forward and back translation returns the original word, and (ii) appeared in our corpora in each language.

This is ridiculous. As good as Google Translate may be in helping you understand a menu in another country, it is not a good translator. Asya Pereltsvaig writes that “Google Translate/Conversation do not translate. They match. More specifically, they match (bits of) the original text with best translations, where ‘best’ means most frequently found in a large corpus such as the World Wide Web.” And she has caught Google Translate using English as an intermediate language when translating from one language to another. That means that when going between two languages that are not English (say French and Russian), Google Translate will first translate the word into English and then into the target language. This is a methodological problem: relying on Google Translate makes the article’s cross-language analysis untrustworthy.
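For reference, the “translationally stable” filter described in the quoted passage amounts to a round-trip check. Here is a sketch with a hypothetical table lookup standing in for the Google Translate calls – the tiny dictionaries are invented. Note that a word can pass the check and still be polysemous: in this toy setup French avocat means both “lawyer” and “avocado”, yet it round-trips cleanly through “lawyer”:

```python
# Invented toy translation tables standing in for Google Translate.
fr_to_en = {"chat": "cat", "avocat": "lawyer"}
en_to_fr = {"cat": "chat", "lawyer": "avocat", "avocado": "avocat"}

def translate(word, table):
    """Hypothetical stand-in for a translation-service call."""
    return table.get(word)

def is_stable(word, forward, back):
    """True if translating forward and then back returns the original word."""
    return translate(translate(word, forward), back) == word

print(is_stable("chat", fr_to_en, en_to_fr))    # True
print(is_stable("avocat", fr_to_en, en_to_fr))  # True, despite the second sense
```

So even a perfectly “stable” word pair tells you nothing about which sense was matched – which is part of why leaning on machine translation here is risky.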


It’s unfortunate that this paper made it through to publication and it’s a shame that it was (positively) reported on by the New York Times. The paper should either be heavily edited or withdrawn. I’m doubtful that will happen.


Update: In the fourth paragraph of this post (the one which starts “On top of that…”), there was some type/token confusion concerning the corpora analyzed. I’ve made some minor edits to it to clear things up. Hat tip to Ben Zimmer on Twitter for pointing this out to me.

Update (March 17, 2015): I wrote a more detailed post (more references, less emoticons) on my problems with the article in question. You can find that here.

Peter Friederici, in a recent article in the Bulletin of the Atomic Scientists, reminds us that “the language used to characterize the climate problem is far more important than is generally recognized”. Mr. Friederici’s article links to a CBS piece which states things more bluntly:

If you’re trying to get someone to care about the way the environment is changing, you might want to refer to it as “global warming,” rather than “climate change,” according to a new study

The idea is that global warming sounds more dire than climate change. Global warming is more likely to inspire people to do something drastic or force their government to take major steps, but climate change requires only minor steps to solve. So tree-hugging liberals will want to use global warming to fire up their base, while the term climate change is more amenable to the conservative approach of letting the free market sort things out. This idea has been floating around for just over ten years. It was inspired by the American political pollster Frank Luntz. While consulting the Republican Party in 2002, Luntz wrote a memo to President George W. Bush’s staff which read in part:

It’s time for us to start talking about “climate change” instead of global warming […] “Climate change” is less frightening than “global warming.” […] While global warming has catastrophic connotations attached to it, climate change suggests a more controllable and less emotional challenge.

Similar ideas about the differences between these seemingly synonymous terms have been raised in other news outlets. The two articles above also report the results of the Yale Project on Climate Change Communication, which found that:

the term “global warming” is associated with greater public understanding, emotional engagement, and support for personal and national action than the term “climate change.” […] Our findings strongly suggest that the terms global warming and climate change are used differently and mean different things in the minds of many Americans.

The report also says that:

Americans are four times more likely to say they hear the term global warming in public discourse than climate change.

The crucial element missing from all of these news articles and reports is any actual data about how often these terms are used. So let’s see if we can find that out.

Easier said than done

There are a few things to think about before we get started with the data. First, although Luntz’s recommendations were informed by his discussions with voters, we don’t know if President Bush or the Republican Party actually listened to him. Reporting that Republicans were advised to use climate change instead of global warming doesn’t mean that they actually did so. In fact, Bush himself seems to have avoided both terms. He didn’t use them in his debates with Democratic presidential candidate John Kerry, and he used the term global climate change just once in each of his 2007 and 2008 State of the Union addresses:

And these technologies will help us be better stewards of the environment, and they will help us to confront the serious challenge of global climate change. – George W. Bush, State of the Union 2007

The United States is committed to strengthening our energy security and confronting global climate change. – George W. Bush, State of the Union 2008

So it’s hard to report on something happening when it didn’t happen. Ironically, Kerry used global warming once in his debate in St. Louis and twice in Coral Gables, so maybe he also got Luntz’s memo?

The second thing to think about is that reporting that Americans claim they hear global warming more often than climate change doesn’t mean that they actually do. People are really bad at accurately reporting things like this. For example, before I present the data to you, I want you to ask yourself which term you think is more common on various American news outlets. Based on the information above, do you think Fox News uses global warming more often or climate change? How about NPR and MSNBC? We’ll see whether the numbers back you up in a bit.

Finally, I’m going to take my data from the Corpus of Contemporary American English (COCA), which is a 450 million word database of speech and writing that is “suitable for looking at current, ongoing changes in the language”. I wrote about why it is better to use corpora like COCA instead of the Google N-gram viewer here.

Crunching the numbers

Let’s first see how common each of these terms are. COCA allows us to split up our data into different genres depending on where the texts come from – Spoken, Fiction, Magazine, Academic, and Newspaper – so we can look at only the genres we are interested in. For the purposes of this blog post, I’m going to look at news texts, magazine texts and spoken language data. We could also look at academic genres, but that might be problematic since according to the CBS article “Scientists have largely started using the term climate change because it more accurately describes the myriad changes to the climate […] while global warming refers to a single phenomenon.” So academics are very particular in the terms they use (seriously, we write whole sections of our theses just to define our terms and we love doing it).

Climate change
SECTION ALL SPOKEN MAGAZINE NEWSPAPER
FREQ 3136 806 1510 820
PER MIL 6.77 8.43 15.8 8.94


Climate change
SECTION 1990-1994 1995-1999 2000-2004 2005-2009 2010-2012
FREQ 156 174 390 1541 883
PER MIL 1.5 1.68 3.79 15.1 17.01

Here we can see the raw count (FREQ) for climate change in the Spoken, Magazine, and Newspaper sections of COCA, as well as for the term in different time periods. This is basically the number of times that the term appears in each section. We also have the frequency per million words (PER MIL), which is a way of normalizing the various sections because they each have a different amount of total words. Looking at this more accurate stat, we can see that climate change is most common in the Magazine genre and that its usage (in all genres taken together) increases over time.

Global warming
SECTION ALL SPOKEN MAGAZINE NEWSPAPER
FREQ 4031 1063 1801 1147
PER MIL 8.68 11.12 18.85 12.51


Global Warming
SECTION 1990-1994 1995-1999 2000-2004 2005-2009 2010-2012
FREQ 519 375 763 1854 520
PER MIL 4.99 3.63 7.41 18.17 10.02

Here we have the same stats for global warming. They show that the term is more common in all of the genres and time periods, except for 2010–2012, when the normalized frequency drops down to 10.02. In the same time period, the frequency for climate change is 17.01. Conservatives are winning!

Not so fast, tiger. We still don’t know who is using these words. Remember that global warming only refers to one of the many changes happening to our planet. Maybe those in the media picked up on this and started using climate change where it was more appropriate. So let’s cut up the genres.

Didn’t you get the memo?

So President Bush didn’t use climate change or global warming. But perhaps this idea that the opposing sides of the debate should use different terms has filtered down to the talking heads on TV. If we remember the idea that people believe they hear global warming more often than climate change in public discourse, we can look at the Spoken section of the corpus to check this claim. Here is where you can check your guesses about which term is more common on various news outlets. Below are the frequencies for climate change in the different sections of the Spoken corpus.

Climate change
Spoken # PER MILLION # TOKENS # WORDS
FOX 19.51 123 6,302,918
NPR 18.45 321 17,399,724
PBS 12.1 80 6,612,202
CNN 5.37 111 20,656,861
NBC 4.41 28 6,348,632
MSNBC 3.68 3 814,156
CBS 3.41 44 12,887,290
ABC 3.29 51 15,514,463
Indep 0.23 1 4,343,343

So climate change occurs about 19 times per million words on Fox News and about 3 times per million words on MSNBC. #TOKENS refers to the actual number of times the term appears in each subsection, while # WORDS refers to how many words make up each subsection.
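The PER MILLION column is just the token count normalized by subsection size, which you can verify from the Fox News and MSNBC rows of the table above:

```python
# Normalizing raw token counts by section size, as COCA's PER MILLION
# column does. Counts and word totals are taken from the table above.
def per_million(tokens, section_words):
    return tokens / section_words * 1_000_000

print(round(per_million(123, 6_302_918), 2))  # Fox News: 19.51
print(round(per_million(3, 814_156), 2))      # MSNBC: 3.68
```

This is why raw frequencies alone can’t be compared across subsections of very different sizes.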

Here are the same stats for global warming:

Global warming
Spoken # PER MILLION # TOKENS # WORDS
FOX 36.33 229 6,302,918
MSNBC 31.93 26 814,156
NPR 17.82 310 17,399,724
PBS 13.16 87 6,612,202
CNN 8.37 173 20,656,861
ABC 6.96 108 15,514,463
Indep 6.22 27 4,343,343
CBS 4.03 52 12,887,290
NBC 3.15 20 6,348,632

Interestingly enough, Fox News tops both lists. What’s strange, though, is that we should have expected a conservative/Republican news outlet like Fox to use climate change much more than global warming, but that is not the case (they really are fair and balanced!). NPR and PBS use the terms with almost equal frequency, while the commie pinkos over at MSNBC use global warming at a much higher rate than climate change (they’re coming for your guns too!).

Everybody chill

But hold on a second. What do these numbers really tell us? First, in terms of the spoken data in COCA, global warming really is more frequent. That doesn’t account for all of the language people hear every day, but it is representative of the public discourse they are likely to hear. Only NBC used climate change more often, and even then only barely.

While we can say that the issue of climate change or global warming seems to feature more prominently on Fox News compared to CBS or ABC, we don’t really have a way of saying how these terms are used on any channel.

For that we have to look at the concordances (the passages from the texts where our search terms appear). There we can see things like Fox News’s Sean Hannity saying:

Al Gore has a financial stake in spreading global warming hysteria…
 
Al Gore’s friends in the liberal media jumped on the global warming bandwagon…
 
And finally tonight, Al Gore’s global warming manipulation isn’t just affecting food prices…

Could it be possible that Fox News uses global warming in its scare tactics and/or liberal bashing?

We can compare this with Hannity’s use of climate change:

the University of Alaska at Fairbanks used 50,000 stimulus dollars to send 11 students to Copenhagen for the failed climate change conference…
 
Jones findings have been used for years to bolster the U.N.’s findings on climate change….

But this is probably nitpicking and it misses the larger point. The words around global warming and climate change say more about their meaning than anything else. We know how Sean Hannity feels about climate change. He says so right here:

HANNITY: Carol, I love you. You’re a great liberal. You defend your side well. If it is hot, it is global warming. If it is cold, it is global warming. If it rains, it’s global warming. If it hails, it is global warming.
 
CAROLINE HELDMAN: Gingrich and Romney are both saying that climate change is happening, are you behind them on this one?
 
HANNITY: I disagree. I don’t think the science is conclusive. Now, I do believe man has an impact on the environment. I want clean air. I want clean water. I want to leave a good planet for our kids and grandkids. But I’m not going to buy lies that are perpetrated by people […] with a political agenda.

I can’t tell if that last line was tongue in cheek, but Hannity seems to opt for another message from Luntz’s memo and stress that the scientific jury is still out on global warming. This has also become a conservative talking point. Obviously, the science is firmly in favor of man-made climate change, but even if we replace climate change with global warming in any of the quotes from Sean Hannity, the meaning will not change. The same goes for any of the news outlets above, because the difference between these two terms is just not that vast. We can all think of pairs of terms which roughly mean the same thing but are not fully interchangeable – and climate change and global warming are no exception. (To his credit, Frank Luntz realizes the complex nature of language, and his advice to President Bush on how to talk about environmental issues was nuanced and erudite.)
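Pulling concordance lines like the Hannity quotes above is what a concordancer’s keyword-in-context (KWIC) view does. A toy sketch, using a snippet built from the quotes already shown:

```python
# A minimal keyword-in-context (KWIC) view: show a few words of context
# on either side of every occurrence of a search phrase.
def kwic(text, phrase, window=3):
    tokens = text.lower().split()
    target = phrase.lower().split()
    n = len(target)
    lines = []
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == target:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + n:i + n + window])
            lines.append(f"{left} [{phrase}] {right}")
    return lines

text = ("Al Gore has a financial stake in spreading global warming hysteria "
        "and the liberal media jumped on the global warming bandwagon")
for line in kwic(text, "global warming"):
    print(line)
```

It’s the words in those context windows – hysteria, bandwagon – that carry the attitude, not the search term itself.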

The idea here is to make sure not to put the cart in front of the horse. Frank Luntz advised President Bush to start using climate change instead of global warming as one way to swing the environmental issue into the Republicans’ favor. This idea would presumably trickle down to other Republicans in the government and to members of the media sympathetic to Republican views. So the first step would be to look at whether the frequency of global warming rose above that of climate change or not. Judging from the data in COCA, I would say this is not what happened. Global warming was already more common than climate change before Luntz issued his memo to President Bush, and both terms were on the rise. Luntz’s advice could certainly have been a contributing factor to climate change’s gain in usage, but it is certainly not the only one. And global warming is still more common on major American news outlets.

I don’t doubt that the terms have a difference in meaning for many people. No matter how small, there is always some semantic difference between even the closest of synonyms. These differences in meanings are based on many different factors, such as the hearer’s education, social background, nationality, familiarity with the speaker, and the context of the situation. What this boils down to is that it doesn’t matter what we call global warming. Focusing on who uses what term misses the point, even if people have more emotional reactions to one term or the other. Climate change is happening and all that matters is that we do something about it.

In the next post, I’ll do a more in depth quantitative analysis of President Bush’s use of these terms. I’ll also look at the problems with reporting Google Search statistics in research on language, which was a method employed by the Yale Project on Climate Change Communication (the same project that studied people’s feelings about the terms).
