Posts Tagged ‘linguistics’

NPR’s Code Switch did an interview about language a few months ago and it stayed on my mind because of how bad it was. I gave it a re-listen and I’d like to point out just why it’s so bad. You can listen to the episode below. It’s episode 42 and it’s called “Not-So-Simple Questions From Code Switch Listeners”. The interview in question starts at the 14:47 mark. The hosts, Gene Demby and Shereen Marisol Meraji, talk to Brent Blair about what it sounds like to be American. I couldn’t find a transcript of the interview, so I made my own, which you can find here. I’ll summarize Blair’s points below and briefly explain why they are wrong. The linguistics behind each of these topics is complex, but I’ll keep things simple for the sake of brevity.

1. We understand this quote unquote “American dialect” or “Received American Pronunciation” based on culture and media: what sells.

No, we don’t. We (I mean linguists, people who study dialects) understand American dialects (plural) based on how the dialects sound. Non-linguists (and linguists when they’re not studying dialects) understand dialects through an array of socio-economic and linguistic factors.

“Received American Pronunciation” is not a thing. Blair is mixing up General American and Received Pronunciation, the accents with the highest prestige in the US and the UK, respectively. Many national newscasters in the US use General American on air (for example, Brian Williams). In the UK, Received Pronunciation is used by the Royal Family and members of parliament (with exceptions, of course). Mixing up the names of these two dialects is such a basic mistake that it’s hard to believe anyone would make it. It’s like someone talking about the Boston Yankees baseball team. Or the band Led Sabbath. Or President Abraham E. Lee. The term General American is not without its problems.

2. What we understand as the American dialect comes from the West Coast, specifically Hollywood, and what Hollywood has considered the standard American dialect. This dialect is “vanilla” – its features do not include “twisty or harsh R sounds or twangy stuff or dropped AH” (quotes from Blair).

It’s probably not surprising that a theater professor would think that Hollywood is responsible for our thoughts on American dialects. Blair is almost correct on this – the dialect used in many popular movies is indeed General American. It doesn’t come from Hollywood, though. The dialect known as General American comes from the eastern part of the US, and it is often considered the dialect of the Midwestern region of the United States, not California. General American is believed to not have any regional or ethnic features, but obviously this is nonsense. It is a mish-mash of various dialects. It’s also (as far as I can tell) not really used in dialect studies anymore.

Map of the dialects of North America. From The Atlas of North American English by Labov, Ash and Boberg (2006; Map 11.15).

The terms “vanilla”, “twisty”, “harsh R”, “twangy”, and “dropped AH” are not used in dialect studies. These terms are problematic. For example, the dialect that Blair is calling standard, the one from Hollywood, uses an R sound. This is one of the ways that linguists describe dialects: whether they include a post-vocalic R or not. Linguists use the terms rhotic to describe dialects which pronounce the R when it comes after a vowel, and non-rhotic to describe dialects which do not pronounce post-vocalic Rs. The Boston dialect is classically non-rhotic, with Hahvahd Yahd (Harvard Yard) being a common phrase used by people imitating the dialect. (Notice that the Boston dialect doesn’t drop all of its Rs, just the ones which come after a vowel and before a consonant. No one in Boston goes to watch the Pat_iots or B_uins play.) So, do rhotic dialects have “harsh R sounds”? I don’t know, because I don’t know what the hell that means. What does “twangy” mean? What dialect sounds “twangy”? Does Nelly sound “twangy” (he’s from St. Louis)? Does Taylor Swift (she’s from eastern Pennsylvania)? Can I say that this whole interview sounds “twangy”, or should I use the more technical term: shitty?

3. Regionalisms in dialects are disappearing rapidly. Today a person from Atlanta, Georgia, sounds like a person from California. You can’t tell the difference between people from Houston, Chicago and New York. By contrast, dialects in rural areas are still diverse.

Blair couldn’t be more wrong about this. Literally the first page of William Labov’s Dialect Diversity in America says “People tend to believe that dialect differences in American English are disappearing, especially given our exposure to a fairly uniform broadcast standard in the mass media. One can find this point of view in almost any discussion of American dialects […] This overwhelming common opinion is simply and jarringly wrong.” THE FIRST GODDAMN PAGE. Of a book that is sure to turn up in any Amazon or Google search on dialects in America. There is no way that Blair’s name showed up in a Google search of dialects in America.

Even though the Code Switch hosts didn’t need to read past the second page of Labov’s book to get better info than Blair gave them, if they had made it to page 35, they would have read “The dialects of Chicago, Philadelphia, Pittsburgh, and Los Angeles are now more different from each other than they were 50 or 100 years ago […] On the other hand, dialects of many smaller cities have receded in favor of the new regional patterns.” Again, exactly the opposite of what Blair told them. Labov also does something which Blair does not: he backs up his claims with (decades of) research. I guess they do linguistics differently in the field of theater studies.

As if that wasn’t enough, here’s a story from NPR about dialects NOT disappearing!

4. Globalization, commercialism, and our careers have made us say “We all want to sound the same”.

K.

5. This “vanilla” Californian dialect, or this blending of dialects, and/or the disappearance of regionalisms is not due to class or race, but access and power. (It’s hard to tell what they are talking about here. They use the term “placeless”.)

Things kind of break down around point 5. Blair has dug himself into a hole and he can’t get out. He talks about how people of color are only allowed to use the Vanilla-fornian dialect based on the culture that is employing them and their relationship to systems of power, but it is unclear what he means and he is unable to explain. He only offers an immediate anecdote – the interviewer Meraji is able to say “Latino” with a Puerto Rican accent on NPR, so maybe she would allow herself to use more Spanish on air in the future. But Spanish isn’t a dialect. Meraji would allow herself to speak Spanish on NPR if she knew her audience would understand her. Blair wraps it all up with something truly bizarre when he says, “So for me, when we’re accent stereotyping, it just means we haven’t fallen in love enough with that community to understand its diversity and its complexity”. I don’t know what the hell this guy is talking about.

Pointing fingers

So who’s at fault here? I think partial blame falls on both sides.

First, Blair should be blamed for not saying no to the interview. If NPR called me up and asked me to talk about theater studies, I would say no. Because I’m not a theater scholar or professional. If someone called you up and said “Hey, we want to talk about theoretical mathematics on the radio,” would you say “Sure! I took math in high school. Let’s do this.”? No, of course you wouldn’t. But they called Blair up and he said, “Ummmm, I speak a language. Get me on the phone!” And then he proved that he knows about as much about language and dialects as I do about theater studies. It’s not that Blair can’t know anything about dialects in America, it’s that he showed he doesn’t know anything about dialects in America. If he had gotten everything right, I wouldn’t be writing this blog post.

Some of the blame also goes to the people at Code Switch though. If they wanted to talk about language and dialects, why didn’t they call a linguist? Why did they think calling a theater professor, who as far as I can tell has not written anything on language, would be ok? In an earlier part of this episode, the hosts have a discussion about the magical negro and they talk to Ebony Elizabeth Thomas, a professor and researcher who has published on representations of people of color in various media. Thomas is at the University of Pennsylvania, the same university as Labov, who I quoted above. She literally could have transferred them over to his office. Or they could have talked to Walt Wolfram or Natalie Schilling or John Baugh. Any of these people would have been far better than Blair.

Ok, I’ve been pretty hard on everyone in this interview. You may be thinking, jeez, this guy just doesn’t like it when people talk about language. That’s not the case. I don’t like it when prominent news organizations talk about language and get it so wrong (I see you, The New Yorker). If you want to hear a really great interview on language and linguistics, go listen to this Top of Mind interview (download it here). The host, Julie Rose, and the guests talk about filler words (um, uh, you know, etc.), which is – like dialects – a linguistic topic with a divide between what the public thinks and what linguists have discovered. To discuss this topic, the host invited two linguists who have researched filler words, Alexandra D’Arcy and Jena Barchas-Lichtenstein. I hope other interviewers listen to this and learn how to discuss language on air.

If you are interested in learning more about dialects in America and/or dialect discrimination, follow the links behind the researchers’ names in the previous two paragraphs. Most of them have written books and articles aimed at the general public. Walt Wolfram even has a movie about African American speech coming out and it sounds amazing. I’m not saying that all of the things you will read are going to be positive – discrimination based on language happens and it is terrible. But the research put out by these and other linguists is fascinating and it can actually do what the NPR Code Switch interview attempted to do: make you more informed about language.

Hat tip to Nicole Holliday on Twitter for pointing me to this Code Switch episode. Holliday would also have been good for this interview.

Update 14 June 2017:

Almost immediately after posting this article and sharing it on Twitter, Gene Demby reached out. Gene is one of the hosts of NPR’s Code Switch. According to him, this episode “was the source of much consternation”. Gene wanted to talk to a linguist but was overruled by an editor. He also said that Code Switch will do better in the future and that they have an episode about African American Vernacular English (AAVE) coming up. I’d like to thank Gene for clearing things up and I look forward to that episode.

Also related to this post, Kevin Calcamp reached out to say that Blair’s views are not representative of how linguistics is treated in theater and performance studies. Kevin says that theater/performance scholars have a good understanding of linguistics. I believe him. He also pointed out how complicated incorporating dialects into theater/performance work can be and the various ways it is done (follow the tweet below to see more). Thanks, Kevin, for explaining things.

Read Full Post »

The following is a sentence on an exam I gave my students this semester. It’s a lyric from the totally awesome band The Go-Go’s (who are too punk rock to care about using your lame apostrophes correctly). Read it and decide which part of speech you think sealed is: verb or adjective?

In the jealous games people play, our lips are sealed.

I first thought that sealed is clearly an adjective and that it functions as the subject complement of the sentence (a subject complement is an element required by copular verbs, such as be and seem; unlike an object, it does not refer to a participant distinct from the subject). But many of my students analyzed it as a verb. This calls for some weekend grammar research (while listening to the Go-Go’s, of course)!

On the exam, students had to mark the function (subject, predicate, object, etc.) of each element in the sentence. In the grammar that we’re using (English Grammar: A University Course, 2nd ed., 2006, by Downing and Locke), only verb phrases can be included in the predicate. This means that if sealed is a verb, the clause consists of only a subject (Our lips) and a predicate (are sealed).

Two dictionaries list sealed as an adjective: the OED and Macmillan Dictionary. The OED’s citation which mirrors this construction is a bit out of date though. It comes from the 1611 printing of the King James Bible: And the vision of all is become vnto you, as the wordes of a booke that is sealed. Macmillan Dictionary only offers “a sealed box/bag/envelope” as an example. Four other dictionaries (Merriam-Webster’s, Dictionary.com, Oxford Learner’s Dictionary, and Oxford Dictionaries) do not list sealed as an adjective, only as a transitive verb (i.e. it needs an object). Strangely, Oxford Learner’s Dictionary has this example sentence under the second entry for seal as a verb:

The organs are kept in sealed plastic bags.

In this case, sealed is definitely an adjective modifying a noun (plastic bags). This must be an oversight by the editors. More important, though, is the fact that sealed in Our lips are sealed does not have an object. What gives?

Well, sealed is more of a participial adjective than anything else (some grammars use the terms verbal adjective or attributive verb). It’s an adjective that has been derived from a verb. Participial adjectives look like verbs but they function grammatically like adjectives. I know. Welcome to the Twilight Zone. These are the cases which really show that there are not sharp limits between the parts of speech, but rather very hazy boundaries. Sometimes it is easy to tell whether the word in question is a verb or an adjective. For example:

This is the sealed envelope that you mailed. = adjective

I sealed the envelope with a kiss. = verb

Other times – such as the one under discussion here – things are not so clear cut. Downing & Locke (p. 479) say that “past participles may often have either an adjectival or a verbal interpretation. In The flat was furnished, the participle [furnished] may be understood either as part of a passive verb form or as the adjectival subject complement of the copula was.” This means that sealed could be part of a passive verb form that is simply missing its agent. The agent is presumably missing because we know that the person who owns the lips is the one who seals them, so it would sound ridiculous to say Our lips are sealed by us (although maybe not as ridiculous as the similar phrase My lips are sealed by me).

I want to argue that sealed is definitely an adjective, but like so much else in linguistics, it is hard to be definite about this. The verb analysis works just as well and sealed might be semantically closer to a verb in that we can think about the sealing of lips as resulting from an action taken. If we compare it to Our lips are chapped there isn’t as clear of an action present, except maybe the action of the weather. But I don’t like talking about verbs as action words.

For what it’s worth, 19 out of 25 people in my Twitter poll said that sealed is an adjective.

On the exam, I accepted both adjective/subject complement and verb/predicator. This made my students happy. Talking about sealed for 20 minutes in class did not make them so happy.

Read Full Post »

In two recent papers, one by Kloumann et al. (2012) and the other by Dodds et al. (2015), a group of researchers created a corpus to study the positivity of the English language. I looked at some of the problems with those papers here and here. For this post, however, I want to focus on one of the registers in the authors’ corpus – song lyrics. There is a problem with taking language such as lyrics out of context and then judging it based on the positivity of the individual words in the songs. But first I need to briefly explain what the authors did.

In the two papers, the authors created a corpus based on books, New York Times articles, tweets and song lyrics. They then created a list of the 10,000 most common word types in their corpus and had voluntary respondents rate how positive or negative they felt the words were. They used this information to claim that human language overall (and English) is emotionally positive.

That’s the idea anyway, but song lyrics exist as part of a multimodal genre. There are lyrics and there is music. These two modalities operate simultaneously to convey a message or feeling. This is important for a couple of reasons. First, the other registers in the corpus do not work like song lyrics. Books and news articles are black text on a white background with few or no pictures. And tweets are not always multimodal – it’s possible to include a short video or picture in a tweet, but it’s not necessary (Side note: I would like to know how many tweets in the corpus included pictures and/or videos, but the authors do not report that information).

So if we were to do a linguistic analysis of an artist or a genre of music, we would create a corpus of the lyrics of that artist or genre. We could then study the topics that are brought up in the lyrics, or even common words and expressions (lexical bundles or n-grams) that are used by the artist(s). We could perhaps even look at how the writing style of the artist(s) changed over time.

But if we wanted to perform an analysis of the positivity of the songs in our corpus, we would need to incorporate the music. The lyrics and music go hand in hand – without the music, you only have poetry. To see what I mean, take a look at the following word list. Do the words in this list look particularly positive or negative to you?

a, ain’t, all, and, as, away, back, bitch, body, breast, but, butterfly, can, can’t, caught, chasing, comin’, days, did, didn’t, do, dog, down, everytime, fairy, fantasy, for, ghost, guess, had, hand, harm, her, his, i, i’m, if, in, it, looked, lovely, jar, makes, mason, life, live, maybe, me, mean, momma’s, more, my, need, nest, never, no, of, on, outside, pet, pin, real, return, robin, scent, she, sighing, slips, smell, sorry, that, the, then, think, to, today, told, up, want, wash, went, what, when, with, withered, woke, would, yesterday, you, you’re, your

If we combine these words as Rivers Cuomo did in his song “Butterfly”, they average out to a positive score of 5.23. Here are the lyrics to that song.

Yesterday I went outside
With my momma’s mason jar
Caught a lovely Butterfly
When I woke up today
And looked in on my fairy pet
She had withered all away
No more sighing in her breast

I’m sorry for what I did
I did what my body told me to
I didn’t mean to do you harm
But everytime I pin down what I think I want
it slips away – the ghost slips away

I smell you on my hand for days
I can’t wash away your scent
If I’m a dog then you’re a bitch
I guess you’re as real as me
Maybe I can live with that
Maybe I need fantasy
A life of chasing Butterfly

I’m sorry for what I did
I did what my body told me to
I didn’t mean to do you harm
But everytime I pin down what I think I want
it slips away – the ghost slips away

I told you I would return
When the robin makes his nest
But I ain’t never comin’ back
I’m sorry, I’m sorry, I’m sorry

Does this look like a positive text to you? Does it look moderate, neither positive nor negative? I would say not. It seems negative to me, a sad song based on the opera Madame Butterfly, in which a man leaves his wife because he never really cared for her. When we take the music into consideration, the non-positivity of this song is clear.


Let’s take a look at another list. How does this one look?

above, absence, alive, an, animal, apart, are, away, become, brings, broke, can, closer, complicate, desecrate, down, drink, else, every, everything, existence, faith, feel, flawed, for, forest, from, fuck, get, god, got, hate, have, help, hive, honey, i, i’ve, inside, insides, is, isolation, it, it’s, knees, let, like, make, me, my, myself, no, of, off, only, penetrate, perfect, reason, scraped, sell, sex, smell, somebody, soul, stay, stomach, tear, that, the, thing, through, to, trees, violate, want, whole, within, works, you, your

Based on the ratings in the two papers, this list is slightly more positive, with an average happiness rating of 5.46. When the words were used by Trent Reznor, however, they expressed “a deeply personal meditation on self-hatred” (Huxley 1997: 179). Here are the lyrics for “Closer” by Nine Inch Nails:

You let me violate you
You let me desecrate you
You let me penetrate you
You let me complicate you

Help me
I broke apart my insides
Help me
I’ve got no soul to sell
Help me
The only thing that works for me
Help me get away from myself

I want to fuck you like an animal
I want to feel you from the inside
I want to fuck you like an animal
My whole existence is flawed
You get me closer to god

You can have my isolation
You can have the hate that it brings
You can have my absence of faith
You can have my everything

Help me
Tear down my reason
Help me
It’s your sex I can smell
Help me
You make me perfect
Help me become somebody else

I want to fuck you like an animal
I want to feel you from the inside
I want to fuck you like an animal
My whole existence is flawed
You get me closer to god

Through every forest above the trees
Within my stomach scraped off my knees
I drink the honey inside your hive
You are the reason I stay alive

As Reznor (the songwriter and lyricist) sees it, “Closer” is “supernegative and superhateful” and the song’s message is “I am a piece of shit and I am declaring that” (Huxley 1997: 179). You can see what he means when you listen to the song (minor NSFW warning for the imagery in the video). [1]

Nine Inch Nails: Closer (Uncensored) (1994) from Nine Inch Nails on Vimeo.

Then again, meaning is relative. Tommy Lee has said that “Closer” is “the all-time fuck song. Those are pure fuck beats – Trent Reznor knew what he was doing. You can fuck to it, you can dance to it and you can break shit to it.” And Tommy Lee should know. He played in the studio for NIИ and he is arguably more famous for fucking than he is for playing drums.

Nevertheless, the problem with the positivity rating of songs keeps popping up. The song “Mad World” was a pop hit for Tears for Fears, then reinterpreted in a more somber tone by Gary Jules and Michael Andrews. But it is rated a positive 5.39. Gotye’s global hit about failed relationships, “Somebody That I Used To Know”, is rated a positive 5.33. The anti-war and protest ballad “Eve of Destruction”, made famous by Barry McGuire, rates just barely on the negative side at 4.93. I guess there should have been more depressing references besides bodies floating, funeral processions, and race riots if the song writer really wanted to drive home the point.

For the song “Milkshake”, Kelis has said that it “means whatever people want it to” and that the milkshake referred to in the song is “the thing that makes women special […] what gives us our confidence and what makes us exciting”. It is rated less positive than “Mad World” at 5.24. That makes me want to doubt the authors’ commitment to Sparkle Motion.

Another upbeat jam that the kids listen to is the Ramones’ “Blitzkrieg Bop”. This is the energetic and exciting anthem of punk rock. It’s rated a negative 4.82. I wonder if we should even look at “Pinhead”.

Then there’s the old American folk classic “Where did you sleep last night”, which Nirvana performed a haunting version of on their album MTV Unplugged in New York. The song (also known as “In the Pines” and “Black Girl”) was first made famous by Lead Belly and it includes such catchy lines as

My girl, my girl, don’t lie to me
Tell me where did you sleep last night
In the pines, in the pines
Where the sun don’t ever shine
I would shiver the whole night through

And

Her husband was a hard working man
Just about a mile from here
His head was found in a driving wheel
But his body never was found

This song is rated a positive 5.24. I don’t know about you, but neither the Lead Belly version nor the Nirvana cover would give me that impression.

Even Pharrell Williams’ hit song “Happy” rates only 5.70. That’s a song so goddamn positive that it’s called “Happy”. But it’s only 0.03 points more positive than Eric Clapton’s “Tears in Heaven”, which is a song about the death of Clapton’s four-year-old son. Harry Chapin’s “Cat’s in the Cradle” was voted the fourth saddest song of all time by readers of Rolling Stone but it’s rated 5.55, while Willie Nelson’s “Always on My Mind” rates 5.63. So they are both sadder than “Happy”, but not by much. How many lyrics must a man research, before his corpus is questioned?

Corpus linguistics is not just gathering a bunch of words and calling it a day. The fact that the same “word” can have several meanings (known as polysemy) is a major feature of language. So before you ask people to rate a word’s positivity, you will want to make sure they at least know which meaning is being referred to. On top of that, words do not work in isolation. Spacing is an arbitrary construct in written language (remember that song lyrics are mostly heard, not read). The back used in the Ramones’ lines “Piling in the back seat” and “Pulsating to the back beat” is not about a body part. The Weezer song “Butterfly” uses the word mason, but it’s part of the compound noun mason jar, not a reference to a bricklayer. Words are also conditioned by the words around them. A word like eve may normally be considered positive as it brings to mind Christmas Eve and New Year’s Eve, but when used in a phrase like “the eve of destruction” our judgment of it is likely to change. In the corpus under discussion here, eat is rated 7.04, but that rating doesn’t consider what’s being eaten and so cannot account for lines like “Eat your next door neighbor” (from “Eve of Destruction”).

We could go on and on like this. The point is that the authors of both papers didn’t do enough work with their data before drawing conclusions. And they didn’t consider that some of the language in their corpus is part of a multimodal genre where there are other things affecting the meaning of the language used (though technically no language use is devoid of context). Whether or not the lyrics of a song are “positive” or “negative”, the style of singing and the music that they are sung to will strongly affect a person’s interpretation of the lyrics’ meaning and emotion. That’s just the way that music works.

This doesn’t mean that any of these songs are positive or negative based on their rating; it means that the system used by the authors of the two papers to rate the positivity or negativity of language seems to be flawed. I would have guessed that a rating system which took words out of context would be fundamentally flawed, but viewing the ratings of the songs in this post is a good way to visualize that. The fact that the two papers were published in reputable journals and picked up by reputable publications, such as the Atlantic and the New York Times, only adds insult to injury for the field of linguistics.

You can see a table of the songs I looked at for this post below, and a spreadsheet with the ratings of the lyrics is here. I calculated the positivity ratings by averaging the scores for the word tokens in each song, rather than the types.
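If you want to see what that token-versus-type distinction looks like in practice, here is a minimal sketch in Python. The ratings and the tokenizer are stand-ins I made up for illustration (the real scores come from the papers’ published word lists), but the averaging works the same way.

```python
# A rough sketch of averaging positivity scores over word tokens (every
# occurrence counts) versus word types (each distinct word counts once).
# The ratings here are invented for illustration, not the papers' real scores.
import re

ratings = {"sorry": 3.0, "lovely": 8.0, "butterfly": 7.0, "caught": 4.0, "a": 5.0}

lyrics = """Caught a lovely Butterfly
I'm sorry, I'm sorry, I'm sorry"""

tokens = re.findall(r"[a-z']+", lyrics.lower())

token_scores = [ratings[t] for t in tokens if t in ratings]       # "sorry" counted three times
type_scores = [ratings[t] for t in set(tokens) if t in ratings]   # "sorry" counted once

print(sum(token_scores) / len(token_scores))  # token average
print(sum(type_scores) / len(type_scores))    # type average
```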

(By the way, Tupac is rated 4.76. It’s a good thing his attitude was fuck it ‘cause motherfuckers love it.)

Song Positivity score (1–9)
“Happy” by Pharrell Williams 5.70
“Tears in Heaven” by Eric Clapton 5.67
“You Were Always on My Mind” by Willie Nelson 5.63
“Cat’s in the Cradle” by Harry Chapin 5.55
“Closer” by NIN 5.46
“Mad World” by Gary Jules and Michael Andrews 5.39
“Somebody that I Used to Know” by Gotye feat. Kimbra 5.33
“Waitin’ for a Superman” by The Flaming Lips 5.28
“Milkshake” by Kelis 5.24
“Where Did You Sleep Last Night” by Nirvana 5.24
“Butterfly” by Weezer 5.23
“Eve of Destruction” by Barry McGuire 4.93
“Blitzkrieg Bop” by The Ramones 4.82

 

Footnotes

[1] Also, be aware that listening to these songs while watching their music videos has an effect on the way you interpret them.

References

Kloumann, Isabel M., Christopher M. Danforth, Kameron Decker Harris, Catherine A. Bliss, and Peter Sheridan Dodds. 2012. “Positivity of the English Language”. PLoS ONE. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0029484

Dodds, Peter Sheridan, Eric M. Clark, Suma Desu, Morgan R. Frank, Andrew J. Reagan, Jake Ryland Williams, Lewis Mitchell, Kameron Decker Harris, Isabel M. Kloumann, James P. Bagrow, Karine Megerdoomian, Matthew T. McMahon, Brian F. Tivnan, and Christopher M. Danforth. 2015. “Human language reveals a universal positivity bias”. PNAS 112:8. http://www.pnas.org/content/112/8/2389

Huxley, Martin. 1997. Nine Inch Nails. New York: St. Martin’s Griffin.

Read Full Post »

Peter Friederici, in a recent article in the Bulletin of the Atomic Scientists, reminds us that “the language used to characterize the climate problem is far more important than is generally recognized”. Mr. Friederici’s article links to a CBS piece which states things more bluntly:

If you’re trying to get someone to care about the way the environment is changing, you might want to refer to it as “global warming,” rather than “climate change,” according to a new study

The idea is that global warming sounds more dire than climate change. Global warming is more likely to inspire people to do something drastic or force their government to take major steps, while climate change sounds like something that requires only minor steps to solve. So tree-hugging liberals will want to use global warming to fire up their base, while the term climate change is more amenable to the conservative approach of letting the free market sort things out. This idea has been floating around for just over ten years. It was inspired by the American political pollster Frank Luntz. While consulting the Republican Party in 2002, Luntz wrote a memo to President George W. Bush’s staff which read in part:

It’s time for us to start talking about “climate change” instead of global warming […] “Climate change” is less frightening than “global warming.” […] While global warming has catastrophic connotations attached to it, climate change suggests a more controllable and less emotional challenge.

Similar ideas about the differences between these seemingly synonymous terms have been raised in other news outlets. The two articles above also report the results of the Yale Project on Climate Change Communication, which found that:

the term “global warming” is associated with greater public understanding, emotional engagement, and support for personal and national action than the term “climate change.” […] Our findings strongly suggest that the terms global warming and climate change are used differently and mean different things in the minds of many Americans.

The report also says that:

Americans are four times more likely to say they hear the term global warming in public discourse than climate change.

The crucial element missing from all of these news articles and reports is any actual data about how often these terms are used. So let’s see if we can find that out.

Easier said than done

There are a few things to think about before we get started with the data. First, although Luntz’s recommendations were informed by his discussions with voters, we don’t know if President Bush or the Republican party actually listened to him. Reporting that Republicans were advised to use climate change instead of global warming doesn’t mean that they actually did so. In fact, it seems Bush hardly used either term. He didn’t use them in his debates with Democratic presidential candidate John Kerry, and he used the term global climate change only once each in his 2007 and 2008 State of the Union addresses:

And these technologies will help us be better stewards of the environment, and they will help us to confront the serious challenge of global climate change. – George W. Bush, State of the Union 2007

The United States is committed to strengthening our energy security and confronting global climate change. – George W. Bush, State of the Union 2008

So it’s hard to report on something happening when it didn’t happen. Ironically, Kerry used global warming once in his debate in St. Louis and twice in Coral Gables, so maybe he also got Luntz’s memo?

The second thing to think about is that reporting that Americans claim they hear global warming more often that climate change doesn’t mean that they actually do. People are really bad at accurately reporting things like this. For example, before I present the data to you, I want you to ask yourself which term you think is more common on various American news outlets. Based on the information above, do you think Fox News uses global warming more often or climate change? How about NPR and MSNBC? We’ll see whether the numbers back you up in a bit.

Finally, I’m going to take my data from the Corpus of Contemporary American English (COCA), which is a 450 million word database of speech and writing that is “suitable for looking at current, ongoing changes in the language”. I wrote about why it is better to use corpora like COCA instead of the Google N-gram viewer here.

Crunching the numbers

Let’s first see how common each of these terms are. COCA allows us to split up our data into different genres depending on where the texts come from – Spoken, Fiction, Magazine, Academic, and Newspaper – so we can look at only the genres we are interested in. For the purposes of this blog post, I’m going to look at news texts, magazine texts and spoken language data. We could also look at academic genres, but that might be problematic since according to the CBS article “Scientists have largely started using the term climate change because it more accurately describes the myriad changes to the climate […] while global warming refers to a single phenomenon.” So academics are very particular in the terms they use (seriously, we write whole sections of our theses just to define our terms and we love doing it).

Climate change
SECTION ALL SPOKEN MAGAZINE NEWSPAPER
FREQ 3136 806 1510 820
PER MIL 6.77 8.43 15.8 8.94

 

Climate change
SECTION 1990-1994 1995-1999 2000-2004 2005-2009 2010-2012
FREQ 156 174 390 1541 883
PER MIL 1.5 1.68 3.79 15.1 17.01

Here we can see the raw count (FREQ) for climate change in the Spoken, Magazine, and Newspaper sections of COCA, as well as for the term in different time periods. This is basically the number of times that the term appears in each section. We also have the frequency per million words (PER MIL), which is a way of normalizing the various sections because they each have a different amount of total words. Looking at this more accurate stat, we can see that climate change is most common in the Magazine genre and that its usage (in all genres taken together) increases over time.
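In case the normalization isn’t obvious, the per-million figure is just the raw count divided by the number of words in the section, scaled up to a million. A quick sketch using the Fox News numbers from the spoken tables further down:

```python
# Normalize a raw frequency to a rate per million words so that sections
# of different sizes can be compared directly. The example numbers are the
# Fox News counts from the spoken-genre tables below.
def per_million(raw_freq: int, section_words: int) -> float:
    return raw_freq / section_words * 1_000_000

print(round(per_million(123, 6_302_918), 2))  # climate change on Fox: ~19.51
print(round(per_million(229, 6_302_918), 2))  # global warming on Fox: ~36.33
```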

Global warming
SECTION ALL SPOKEN MAGAZINE NEWSPAPER
FREQ 4031 1063 1801 1147
PER MIL 8.68 11.12 18.85 12.51

 

Global Warming
SECTION 1990-1994 1995-1999 2000-2004 2005-2009 2010-2012
FREQ 519 375 763 1854 520
PER MIL 4.99 3.63 7.41 18.17 10.02

Here we have the same stats for global warming. They show that the term is more common in all of the genres and time periods, except for 2010–2012, when the normalized frequency drops down to 10.02. In the same time period, the frequency for climate change is 17.01. Conservatives are winning!

Not so fast, tiger. We still don’t know who is using these words. Remember that global warming only refers to one of the many changes happening to our planet. Maybe those in the media picked up on this and started using climate change where it was more appropriate. So let’s cut up the genres.

Didn’t you get the memo?

So President Bush didn’t use climate change or global warming. But perhaps this idea that the opposing sides of the debate should use different terms has filtered down to the talking heads on TV. If we remember the idea that people believe they hear global warming more often than climate change in public discourse, we can look at the Spoken section of the corpus to check this claim. Here is where you can check your guesses about which term is more common on various news outlets. Below are the frequencies for climate change in the different sections of the Spoken corpus.

Climate change
Spoken # PER MILLION # TOKENS # WORDS
FOX 19.51 123 6,302,918
NPR 18.45 321 17,399,724
PBS 12.1 80 6,612,202
CNN 5.37 111 20,656,861
NBC 4.41 28 6,348,632
MSNBC 3.68 3 814,156
CBS 3.41 44 12,887,290
ABC 3.29 51 15,514,463
Indep 0.23 1 4,343,343

So climate change occurs about 19 times per million words on Fox News and just under 4 times per million words on MSNBC. #TOKENS refers to the actual number of times the term appears in each subsection, while # WORDS refers to how many words make up each subsection.

Here are the same stats for global warming:

Global warming
Spoken # PER MILLION # TOKENS # WORDS
FOX 36.33 229 6,302,918
MSNBC 31.93 26 814,156
NPR 17.82 310 17,399,724
PBS 13.16 87 6,612,202
CNN 8.37 173 20,656,861
ABC 6.96 108 15,514,463
Indep 6.22 27 4,343,343
CBS 4.03 52 12,887,290
NBC 3.15 20 6,348,632

Interestingly enough, Fox News tops both lists. What’s strange, though, is that we should have expected a conservative/Republican news outlet like Fox to use climate change much more than global warming, but that is not the case (they really are fair and balanced!). NPR and PBS use the terms with almost equal frequency, while the commie pinkos over at MSNBC use global warming at a much higher rate than climate change (they’re coming for your guns too!).

Everybody chill

But hold on a second. What do these numbers really tell us? First, in terms of the spoken data in COCA, global warming really is more frequent. That doesn’t account for all of the language people hear every day, but it is representative of the public discourse they are likely to hear. Only NBC used climate change more often, and even then only barely.

While we can say that the issue of climate change or global warming seems to feature more prominently on Fox News compared to CBS or ABC, we don’t really have a way of saying how these terms are used on any channel.

For that we have to look at the concordances (the passages from the texts where our search terms appear). There we can see things like Fox News’s Sean Hannity saying:

Al Gore has a financial stake in spreading global warming hysteria…
 
Al Gore’s friends in the liberal media jumped on the global warming bandwagon…
 
And finally tonight, Al Gore’ s global warming manipulation isn’t just affecting food prices…

Could it be possible that Fox News uses global warming in its scare tactics and/or liberal bashing?

We can compare this with Hannity’s use of climate change:

the University of Alaska at Fairbanks used 50,000 stimulus dollars to send 11 students to Copenhagen for the failed climate change conference…
 
Jones findings have been used for years to bolster the U.N.’s findings on climate change….

But this is probably nitpicking and it misses the larger point. The words around global warming and climate change say more about their meaning than anything else. We know how Sean Hannity feels about climate change. He says so right here:

HANNITY: Carol, I love you. You’re a great liberal. You defend your side well. If it is hot, it is global warming. If it is cold, it is global warming. If it rains, it’s global warming. If it hails, it is global warming.
 
CAROLINE HELDMAN: Gingrich and Romney are both saying that climate change is happening, are you behind them on this one?
 
HANNITY: I disagree. I don’t think the science is conclusive. Now, I do believe man has an impact on the environment. I want clean air. I want clean water. I want to leave a good planet for our kids and grandkids. But I’m not going to buy lies that are perpetrated by people […] with a political agenda.

I can’t tell if that last line was tongue in cheek, but Hannity seems to opt for another message that was in Luntz’s memo and stress that the scientific jury is still out on global warming. This has also become a conservative talking point. Obviously, the science is firmly in favor of man-made climate change, but even if we replace climate change with global warming in any of the quotes from Sean Hannity, the meaning will not change. The same goes for any of the news outlets above, because the difference between these two terms is not that vast. We can all think of pairs of terms which roughly mean the same thing but are not fully interchangeable, and in that respect climate change and global warming are no different. (To his credit, Frank Luntz realizes the complex nature of language, and his advice to President Bush on how to talk about environmental issues was nuanced and erudite.)

The idea here is to make sure not to put the cart before the horse. Frank Luntz advised President Bush to start using climate change instead of global warming as one way to swing the environmental issue into the Republicans’ favor. This idea would presumably trickle down to other Republicans in the government and to members of the media sympathetic to Republican views. So the first step would be to look at whether the frequency of global warming rose above that of climate change or not. Judging from the data in COCA, I would say this is not what happened. Global warming was already more common than climate change before Luntz issued his memo to President Bush, and both terms were on the rise. Luntz’s advice could certainly have been a contributing factor to climate change’s gain in usage, but it is certainly not the only one. And global warming is still more common on major American news outlets.

I don’t doubt that the terms have a difference in meaning for many people. No matter how small, there is always some semantic difference between even the closest of synonyms. These differences in meanings are based on many different factors, such as the hearer’s education, social background, nationality, familiarity with the speaker, and the context of the situation. What this boils down to is that it doesn’t matter what we call global warming. Focusing on who uses what term misses the point, even if people have more emotional reactions to one term or the other. Climate change is happening and all that matters is that we do something about it.

In the next post, I’ll do a more in depth quantitative analysis of President Bush’s use of these terms. I’ll also look at the problems with reporting Google Search statistics in research on language, which was a method employed by the Yale Project on Climate Change Communication (the same project that studied people’s feelings about the terms).

Read Full Post »

Dan Zarrella, the “social media scientist” at HubSpot, has an infographic on his website called “How to: Get More Clicks on Twitter”. In it he analyzes 200,000 link-containing tweets to find out which ones had the highest clickthrough rates (CTRs), which is another way of saying which tweets got the most people to click on the link in the tweet. Now, you probably already know that infographics are not the best form of advice, but Mr. Zarrella did a bit of linguistic analysis and I want to point out where he went wrong so that you won’t be misled. It may sound like I’m picking on Mr. Zarrella, but I’m really not. He’s not a linguist, so any mistakes he made are simply due to the fact that he doesn’t know how to analyze language. Nor should he be expected to – he’s not a linguist.

But there’s the rub. Analyzing the language of your tweets, your marketing, your copy, and your emails is how you find out what language works better for you, so it is extremely important that you do the analysis right. To use a bad analogy, I could tell you that teams wearing the color red have won six out of the last ten World Series, but that’s probably not information you want if you’re placing your bets in Vegas. You’d probably rather know who the players are, wouldn’t you?

Here’s a section of Mr. Zarrella’s infographic called “Use action words: more verbs, fewer nouns”:

Copyright Dan Zarrella

That’s it? Just adverbs, verbs, nouns, and adjectives? That’s only four parts of speech. Your average linguistic analysis is going to be able to differentiate between at least 60 parts of speech. But there’s another reason why this analysis really tells us nothing. The word less is an adjective, adverb, noun, and preposition; run is a verb, noun, and adjective; and check, a word which Mr. Zarrella found to be correlated with higher CTRs, is a verb and a noun.

I don’t really know what conclusions to draw from his oversimplified picture. He says, “I found that tweets that contained more adverbs and verbs had higher CTRs than noun and adjective heavy tweets”. The image seems to show that tweets that “contained more adverbs” had 4% higher CTRs than noun heavy tweets and 5-6% higher CTRs than adjective heavy tweets. Tweets that “contained more verbs” seem to have slightly lower CTRs in comparison. But what does this mean? How did the tweets contain more adverbs? More adverbs than what? More than tweets which contained no adverbs? This doesn’t make any sense.

The thing is that it’s impossible to write a tweet that has more adverbs and verbs than adjectives and nouns. I mean that. Go ahead and try to write a complete sentence that has more verbs in it than nouns. You can’t do it because that’s not how language works. You just can’t have more verbs than nouns in a sentence (with the exception of some one- and two-word phrases). In any type of writing – academic articles, fiction novels, whatever – about 37% of the words are going to be nouns (Hudson 1994). Some percentage (about 5-10%) of the words you say and write are going to be adjectives and adverbs. Think about it. If you try to remove adjectives from your language, you will sound like a Martian. You will also not be able to tell people how many more clickthroughs you’re getting from Twitter or the color of all the money you’re making.
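Don’t take my word for it. Here is a small sketch you can run on any tweet you like, assuming you have spaCy and its small English model installed (spaCy is just for illustration here; it is not the tagger behind any of the figures I cite).

```python
# Tag a tweet-length sentence and count how many tokens fall into each
# part-of-speech category. Try to get the verb count above the noun count.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded

tweet = "Check out our new report and download the full results today."
counts = Counter(token.pos_ for token in nlp(tweet) if token.is_alpha)

print(counts["NOUN"], counts["VERB"], counts["ADJ"], counts["ADV"])
```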

I know it’s easy to think of Twitter as one entity, but we all know it’s not. Twitter is made up of all kinds of people, who tweet about all kinds of things. While anyone is able to follow anyone else, people of similar backgrounds and/or professions tend to group together. Take a look at the people you follow and the people who follow you. How many of them do you know personally and how many are in a similar business to yours? These people probably make up the majority of your Twitter world. So what we need to know from Mr. Zarrella is which Twitter accounts he analyzed. Who are these people? Are they on Twitter for professional or personal reasons? What were they tweeting about and where did the links in their tweets go – to news stories or to dancing cat videos? And who are their followers (the people who clicked on the links)? This is essential information to put the analysis of language in context.

Finally, what Mr. Zarrella’s analysis should be telling us is which kinds of verbs and adverbs equal higher CTRs. As I mentioned in a previous post, marketers would presumably favor some verbs over others. They want to say that their product “produces results” and not that it “produced results”. What we need is a type of analysis that can tell shit (noun and verb) from Shinola (just a noun). And this is what I can do – it’s what I invented Econolinguistics for. Marketers need to be able to empirically study the language that they are using, whether it be in their blog posts, their tweets, or their copy. That’s what Econolinguistics can do. With my analysis, you can forget about meaningless phrases like “use action words”. Econolinguistics will allow you to rely on a comprehensive linguistic analysis of your copy to know what works with your audience. If this sounds interesting, get in touch and let’s do some real language analysis (joseph.mcveigh (at) gmail.com).

 

Other posts on marketing and linguistics

How Linguistics can Improve your Marketing by Joe McVeigh

Adjectives just can’t get a break by Joe McVeigh

Read Full Post »

Everyone loves verbs, or so you would be led to believe by writing guides. Zack Rutherford, a professional freelance copywriter, posted an article on .eduGuru about how to write better marketing copy. In it he says:

Verbs work better than adjectives. A product can be quick, easy, and powerful. But it’s a bit more impressive if the product speeds through tasks, relieves stress, and produces results. Adjectives describe, while verbs do. People want a product or service that does. So make sure you provide them with one. [Emphasis his – JM]

If you’re a copy writer or marketer, chances are that you’ve heard this piece of advice. It sort of makes sense, right? Well as a linguist who studies marketing (and a former copy writer who was given this advice), I want to explain to you why it is misleading at best and flat out wrong at worst. These days it is very easy to check whether verbs actually work better than adjectives in copy. You simply take many pieces of copy (texts) and use computer programs to tag each word for the part of speech it is. Then you can see whether the better, i.e. more successful, pieces of copy use more verbs than adjectives. This type of analysis is what I’m writing my PhD on (marketers and copy writers, you should get in touch).
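To give you an idea of what that kind of check involves, here is a minimal sketch. The copy texts are made up and the tagger is spaCy’s small English model, purely for illustration (for the numbers below I used CLAWS), but the principle is the same: tag everything, then line the part-of-speech shares up against whatever success metric you have.

```python
# Tag each piece of copy and compute the share of verbs and adjectives,
# so those shares can be compared against a success metric (sales, CTR, etc.).
# The texts below are placeholders for illustration only.
import spacy

nlp = spacy.load("en_core_web_sm")

copy_texts = {
    "copy_a": "Our service speeds through tasks, relieves stress, and produces results.",
    "copy_b": "A quick, easy, and powerful tool for busy professionals.",
}

for name, text in copy_texts.items():
    tokens = [t for t in nlp(text) if t.is_alpha]
    verb_share = sum(t.pos_ == "VERB" for t in tokens) / len(tokens)
    adj_share = sum(t.pos_ == "ADJ" for t in tokens) / len(tokens)
    print(f"{name}: {verb_share:.0%} verbs, {adj_share:.0%} adjectives")
```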

Don’t heed your own advice

So being the corpus linguist that I am, I decided to check whether Mr. Rutherford follows his own advice. His article has the following frequencies of usage for nouns, verbs, adjectives, and adverbs:

Nouns Verbs Adjectives Adverbs Word count
Total 275 208 135 90 1195
% of all words 23.01% 17.41% 11.30% 7.53%

Hooray! He uses more verbs than adjectives. The only thing is that those frequencies don’t tell the whole story. They would if all verbs were equal, but those of us who study language know that some verbs are more equal than others. Look at Mr. Rutherford’s advice again. He singles out the verbs speeds through, relieves, and produces as being better than the adjectives quick, easy, and powerful. Disregarding the fact that the first verb in there is a phrasal verb, what his examples have in common is that the verbs are all -s forms of lexical verbs (gives, takes, etc.) and the adjectives are all general adjectives (according to CLAWS, the part-of-speech tagger I used). This is important because a good copy writer would obviously want to say that their product produces results and not that it produced results. Or as Mr. Rutherford says, “People want a product or service that does” and not, presumably, one that did. So what do the numbers look like if we compare his use of -s form lexical verbs to general adjectives?

-s form of lexical verbs General adjectives
Total 24 135
% of all words 2.01% 11.30%

Uh oh. Things aren’t looking so good. Those frequencies exclude all forms of the verbs BE, HAVE, and DO, as well as modals and past tense verbs. So maybe this is being a bit unfair. What would happen if we included the base forms of lexical verbs (relieve, produce), the -ing participles (relieving, producing) and verbs in the infinitive (to relieve, it will produce)? The idea is that there would be positive ways for marketers to write their copy using these forms of the verbs. Here are the frequencies:

Verbs (base, -ing part., infin., and -s forms) General adjectives
Total 127 135
% of all words 10.63% 11.30%

Again, things don’t look so good. The verbs are still less frequent than the general adjectives. So is there something to writing good copy other than just “use verbs instead of adjectives”? I thought you’d never ask.

Some good advice on copy writing

I wrote this post because the empirical research of marketing copy is exactly what I study. I call it Econolinguistics. Using this type of analysis, I have found that using more verbs or more adjectives does not relate to selling more products. Take a look at these numbers.

Copy text Performance Verbs – Adjectives
1 42.04 3.94%
2 11.82 0.63%
3 11.81 6.22%
4 10.75 -0.40%
5 2.39 3.21%
6 2.23 -0.78%
7 2.23 4.01%
8 1.88 1.14%
9 (baseline) 5.46%

These are marketing texts ordered by how well they performed. The ninth text is the worst and the rest are ranked based on how much better they performed than this ninth text. The third column shows the difference between the verb frequency and adjective frequency for each text (verb % minus adjective %). If it looks like a mess, that’s because it is. There is not much to say about using more verbs than adjectives in your copy. You shouldn’t worry about it.

There is, however, something to say about the combination of nouns, verbs, adjectives, adverbs, prepositions, pronouns, etc., etc. in your copy. The ways that these kinds of words come together (and the frequencies at which they are used) will spell success or failure for your copy. Trust me. It’s what Econolinguistics was invented for. If you want to know more, I suggest you get in touch with me, especially if you’d like to check your copy before you send it out (email: joseph.mcveigh(at)gmail.com).

In order to really drive the point home, think about this: if you couldn’t use adjectives to describe your product, how would you tell people what color it is? Or how big it is? Or how long it lasts? You need adjectives. Don’t give up on them. They really do matter. And so do all the other words.

 

Other posts on marketing and linguistics

How Linguistics can Improve your Marketing by Joe McVeigh

Read Full Post »

This post is a response to a corpus search done on another blog. Over on What You’re Doing Is Rather Desperate, Neil Saunders wanted to research how adverbs are used in academic articles, specifically the sentence adverb, or as he says, adverbs which are used “with a comma to make a point at the start of a sentence”. I’m not trying to pick on Mr. Saunders (because what he did was pretty great for a non-linguist), but I think his post, and the media reports on it, make a great excuse to write about the really, really awesome corpus linguistics resources available to the public. I’ll go through what Mr. Saunders did, and list what he could have done had he known about corpus linguistics.

Mr. Saunders wanted to know about sentence adverbs in academic texts, so he wrote a script to download abstracts from PubMed Central. Right off the bat, he could have gone looking for either (1) articles on sentence adverbs or (2) already available corpora. As I pointed out in a comment on his post (which has mysteriously disappeared, probably due to the URLs I included in it), there are corpora with science texts from as far back as 1375 AD. There are also modern alternatives, such as the Corpus of Contemporary American English (COCA) and the British National Corpus (BNC), both of which (and much, much more) are available through Mark Davies’ awesome site.

I bring this up because there are several benefits of using these corpora instead of compiling your own, especially if you’re not a linguist. The first is time and space. Saunders says that his uncompressed corpus of abstracts is 47 GB (!) and that it took “overnight” (double !) for his script to comb through the abstracts. Using an online corpus drops the space required on your home machine down to 0 GB. And running searches on COCA, which contains 450 million words, takes a matter of seconds.

The second benefit is a pretty major one for linguists. After noting that his search only looks for words ending in -ly, Saunders says:

There will of course be false positives – words ending with “ly,” that are not adverbs. Some of these include: the month of July, the country of Italy, surnames such as Whitely, medical conditions such as renomegaly and typographical errors such as “Findingsinitially“. These examples are uncommon and I just ignore them where they occur.

This is a big deal. First of all, the idea of using “ly” as a way to search for adverbs is profoundly misguided. Saunders seems to realize this, since he notes that not all words that end in -ly are adverbs. But where he really goes wrong, as we’ll soon see, is in disregarding all of the adverbs that do not end in -ly. If Saunders had used a corpus that already had each word tagged for its part of speech (POS), or if he had run a POS-tagger on his own corpus, he could have had an accurate measurement of the use of adverbs in academic articles. This is because POS-tagging allows researchers to find adverbs, adjectives, nouns, etc., as well as searching for words that end in -ly – or even just adverbs that end in -ly. And remember, it can all be done in a matter of moments (even the POS tagging). You won’t even have time to make a cup of coffee, although consumption of caffeinated beverages is highly recommended when doing linguistics (unless you’re at a conference, in which case you should substitute alcohol for caffeine).
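To see the difference for yourself, here is a small sketch comparing a crude -ly search with an actual tagger. I’m using spaCy’s small English model purely as an illustration (it’s not what I use for the COCA searches below, and a tagger like CLAWS has a much finer-grained tagset), but it shows why suffix matching is not the same thing as finding adverbs.

```python
# Compare a crude "-ly" suffix search with part-of-speech tagging.
# The suffix search picks up non-adverbs (Italy, July) and misses
# adverbs that don't end in -ly (however, often).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded

text = "Surprisingly, the trial in Italy often failed last July; however, it was repeated."
doc = nlp(text)

ly_words = [t.text for t in doc if t.text.lower().endswith("ly")]
adverbs = [t.text for t in doc if t.pos_ == "ADV"]

print(ly_words)  # suffix matches, adverbs or not
print(adverbs)   # whatever the tagger marks as an adverb, whatever its ending
```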

Here is where I break from following Saunders’ method. I want to show you what’s possible with some of the publicly available corpora online – in other words, how a linguist might conduct an inquiry into the use of adverbs in academia.

Looking for sentence-initial adverbs in academic texts, I went to COCA. I know the COCA interface can seem a bit daunting to the uninitiated, but there are very clear instructions (with examples) for how to do everything. Just remember: if confusion persists for more than four hours, consult your local linguist.

On the COCA page, I searched for adverbs coming after a period, or sentence initial adverbs, in the Medical and Science/Technology texts in the Academic section (Click here to rerun my exact search on COCA. Just hit “Search” on the left when you get there). Here’s what I came up with:

[Figure: Top ten sentence initial adverbs in medical and science academic texts in COCA.]

You’ll notice that only one of the adverbs on this list (“finally”) ends in -ly. That word is also, coincidentally, the top word on Saunders’ list. Notice also that the list above includes the kind of sentence adverbs that Saunders’ search deliberately excludes – those not ending in -ly, such as “for” and “in” – despite the examples of such adverbs given on the Wikipedia page that Saunders linked to in his post. (For those wondering, the POS-tagger treated these as parts of adverbial phrases, hence the “REX21” and “RR21” tags.)

Searching for only those sentence initial adverbs that end in -ly, we find a list similar to Saunders’, but with only five of the same words on it. (Saunders’ top ten are: finally, additionally, interestingly, recently, importantly, similarly, surprisingly, specifically, conversely, consequentially)

[Figure: Top ten sentence initial adverbs ending in -ly in medical and science academic texts in COCA.]
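If you’d like to play with something like this offline, here’s a rough sketch using NLTK’s copy of the Brown corpus. Its “learned” category is the closest thing it has to COCA’s academic section, and its texts and tags (mapped here to the universal tagset) are not COCA’s, so the lists it produces won’t match the tables above. It’s only meant to show how little code the counting itself takes once the corpus is already POS-tagged.

```python
# A rough, local stand-in for the COCA searches above: count sentence-initial
# adverbs in the Brown corpus "learned" category, splitting them into -ly and
# non -ly forms. Brown is not COCA, so expect different lists.
from collections import Counter

import nltk

nltk.download("brown", quiet=True)
nltk.download("universal_tagset", quiet=True)

ly_counts, other_counts = Counter(), Counter()
for sent in nltk.corpus.brown.tagged_sents(categories="learned", tagset="universal"):
    if not sent:
        continue
    word, tag = sent[0]
    if tag == "ADV":  # the universal tagset collapses all adverb tags into ADV
        bucket = ly_counts if word.lower().endswith("ly") else other_counts
        bucket[word.lower()] += 1

print("Top sentence-initial -ly adverbs:", ly_counts.most_common(10))
print("Top sentence-initial non -ly adverbs:", other_counts.most_common(10))
```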

So what does this tell us? Well, for starters, my shooting-from-the-hip research is insufficient to draw any great conclusions from, even if it is more systematic than Saunders’. Seeing which adverbs are used to start sentences doesn’t really tell us much about, for example, what the journals, authors, or results of the papers are like. This is the mistake that Mr. Saunders makes in his conclusions. After ranking the usage frequencies of “surprising” by journal, he writes:

The message seems clear: go with a Nature or specialist PLoS journal if your results are surprising.

Unfortunately for Mr. Saunders, a linguist would find the message anything but clear. For starters, the relative use of “surprising” in a journal does not tell us that the results in the articles are actually surprising, but rather that the authors wish to present their results as surprising. That is, assuming the word “surprising” in the articles is not preceded by “Our results are not”. This is another problem with Mr. Saunders’ conclusions – not placing his results in context – and it is something that linguists would research, perhaps by scrolling through the concordances using corpus linguistics software, or software designed exactly for the type of research that Mr. Saunders wished to do.
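That context check is easy to sketch with the same tools. A concordance (keyword-in-context) view shows what actually comes before and after each hit, which is exactly what a raw frequency count hides. In the snippet below, the Brown corpus “learned” texts stand in for a real collection of abstracts.

```python
# A quick way to do that context check: pull up concordance (keyword-in-context)
# lines for "surprising" and eyeball what precedes each hit. The Brown corpus
# "learned" texts stand in here for a real collection of abstracts.
import nltk

nltk.download("brown", quiet=True)

words = nltk.corpus.brown.words(categories="learned")
text = nltk.Text(words)
text.concordance("surprising", width=80, lines=10)  # prints KWIC lines, if any
```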

The second thing to notice about my results is that they probably look a whole lot more boring than Saunders’. Such is the nature of researching things that people think matter (like those nasty little adverbs) but that professionals know really don’t. So it goes.

Finally, what we really should be looking at is how scientists use adverbs in comparison to other writers. I chose to contrast the frequencies of sentence-initial adverbs in the medical and science/technology articles with the frequencies found in academic articles from the (oft-disparaged) humanities. (Here is the link to that search.)

[Figure: Top ten sentence initial adverbs in humanities academic texts in COCA.]

Six of the top ten sentence initial adverbs in the humanities texts are also on the list for the (hard) science texts. What does this tell us? Again, not much. But we can get an idea that either the styles in the two subjects are not that different, or that sentence initial adverbs might be similar across other genres as well (since the words on these lists look rather pedestrian). We won’t know, of course, until we do more research. And if you really want to know, I suggest you do some corpus searches of your own because the end of this blog post is long overdue.
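As a starting point, here’s one last sketch along the same lines: the share of sentences that open with an adverb across a few of the Brown corpus’s genre categories. It won’t settle anything, but it’s the kind of quick genre-to-genre comparison I mean.

```python
# A quick genre-to-genre comparison: what share of sentences open with an
# adverb in a few Brown corpus categories? Brown stands in for COCA here,
# so treat the numbers as illustrative only.
import nltk

nltk.download("brown", quiet=True)
nltk.download("universal_tagset", quiet=True)

for category in ["learned", "news", "government", "fiction"]:
    sents = nltk.corpus.brown.tagged_sents(categories=category, tagset="universal")
    initial_adv = sum(1 for sent in sents if sent and sent[0][1] == "ADV")
    print(f"{category:>10}: {initial_adv / len(sents):.1%} of sentences start with an adverb")
```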

I also think I’ve picked on Mr. Saunders enough. After all, it’s not really his fault if he didn’t do as I have suggested. How was he supposed to know all these corpora are available? He’s a bioinformatician, not a corpus linguist. And yet, sadly, he’s the one who gets written up in the Smithsonian’s blog, even though linguists have been publishing about these matters since at least the late 1980s.

Before I end, though, I want to offer a word of warning. Although I said that anyone who knows where to look can and should do their own corpus linguistic research, and although I tried to keep my searches as simple as possible, I couldn’t have done them without my background in linguistics. Doing linguistic research on Big Data is tempting. But doing linguistic research on a corpus, especially one that you compiled yourself, can be misleading at best and flat-out wrong at worst if you don’t know what you’re doing. The problem is that Mr. Saunders isn’t alone. I’ve seen other non-linguists try this type of research. My message here is similar to the one in my previous post, which was directed to marketers: linguistic research is interesting and it can tell you a lot about the subject of your interest, but only if you do it right. So get a linguist to do it or see if a linguist has already done it. If neither of these is possible, then feel free to do your own research, but tread lightly, young padawans.

If you’re wondering whether academia overuses adverbs (hint: it doesn’t) or just how often adverbs get tossed into academic articles, I recommend reading papers written by Douglas Biber and/or Susan Conrad. They have published extensively on the linguistic nature of many different writing genres. Here’s a link to a Google Scholar search to get you started. You can also have a look at the Longman Grammar, which is probably available at your library.

Read Full Post »
