Book Review: The Language Instinct Debate by Geoffrey Sampson

The following is a book review and the second post in a series. The first post discussed Steven Pinker’s The Language Instinct. This post discusses Geoffrey Sampson’s The Language Instinct Debate, which is a critique of Pinker’s book. The third post will discuss some of the critics and reviews of Sampson’s book.

In a comment on the first post in this series, linguischtick (who has an awesome gravatar, by the way) pointed out that I didn’t mention two key points of the Chomskers (Chomsky + Pinker + their followers. Nom.) theory. As this post is about a book which is a direct “response to Steven Pinker’s The Language Instinct and Noam Chomsky’s nativism,” it would be good to remind ourselves of the claims that nativists make. Below are the claims along with some comments on them.

1. Speed of acquisition

Chomskyan linguists claim that kids learn language remarkably fast, so fast that it must be innate. But fast compared to what? How do we know kids don’t learn language very slowly? Chomskers have no answer. Sampson says as much and then very cleverly points out that Chomsky has never said how long it should take kids to learn language because “he argues that the data available to a language learner are so poor that accurate language learning would be impossible without innate knowledge – that is, no amount of time would suffice” (37, emphasis his).

2. Age dependence

Chomskers claim that the language instinct theory is supported by how our ability to learn a language diminishes greatly around puberty. Sampson quickly refutes this claim by showing how the evidence on which Chomskers based their claim fails “to distinguish language learning from any other case of learning” and that it is “perfectly compatible with the view that learning as a general process is for biological reasons far more rapid before puberty than later.” (41, emphasis his) So we see that leap of faith again. The evidence doesn’t suggest a language instinct, but that doesn’t stop Chomskers from jumping to that conclusion.

3. Poverty of the Stimulus

This is a major part of the Chomskers argument (and the only one that can be shortened into a perfectly applicable acronym – POS). Put simply, it goes like this: kids are not supplied with enough language info by their community to enable them to learn to speak. This is what Pinker was talking about when he snidely called Motherese – the style adults use when speaking to children – “folklore”. The poverty of the stimulus is a crazy idea, but don’t worry, it’s completely wrong. First, once linguists started researching Motherese, they found that it was much more “proper” than anyone had assumed. Sampson references one study that found “only one utterance out of 1500 spoken to the children was a disfluency.” (43) Chomskers also claim that some linguistic features never occur in spoken language and yet children learn the rules for them anyway. But wait a minute, have Chomskers ever looked for these mysterious linguistic features that never occur? Of course not. That’s not how they roll.

Sampson gives them a taste of their own medicine by writing

‘Hang on a minute,’ I hear the reader say. ‘You seem to be telling us that this man [Chomsky] who is by common consent the world’s leading living intellectual, according to Cambridge University a second Plato, is basing his radical reassessment of human nature largely on the claim that a certain thing never happens; he tells us that it strains his credulity to think that this might happen, but he has never looked, and people who have looked find that it happens a lot.’
Yes, that’s about the size of it. Funny old world, isn’t it! (47)

Another aspect of this piece of shit poverty of the stimulus argument is the so-called lack of negative evidence. The idea is that kids aren’t given evidence of which types of constructions are not possible in language, which leads one to wonder how children could possibly learn which sentences to exclude as non-language. Sounds pretty interesting, huh? There must be a language instinct then, right? Sampson bursts Chomskers’ bubble:

The trouble with this argument is that, if it worked, it would not just show that language learning without innate knowledge is impossible: it would show that scientific discovery is impossible. We can argue about whether or not children get negative evidence from their elders’ language; but a scientist certainly gets no negative evidence from the natural world. When a heavy body is released near the surface of the Earth, it never remains stationary or floats upwards, displaying an asterisk or broadcasting a message ‘This is not how Nature works – devise a theory which excludes this possibility!’ (90)

4. Convergence of grammars

This claim asks how both smart and dumb people grow up speaking essentially the same language. Except they don’t, so forget it. Other linguists – the kind who like evidence and observable data – have shown that people don’t all speak the same.

5. Language universals

This is the idea that there are some structural properties which are found across every language in the world, even though there is no reason why they should be (since they’re not necessary to language). This is where Universal Grammar comes in. Sampson devotes a chapter to this broad argument and, in one of the many parts that make this book an excellent read, he very cleverly takes the argument down by pointing out that universals are better evidence of the cultural development of language than they are of the innate, biological theory of language. Using a theory developed by Herbert Simon, Sampson shows that, basically, the structural dependencies that Chomskers are so fond of arose out of normal evolutionary development because evolution favors hierarchical structure. Complex evolutionary systems – and Sampson argues language is one – are hierarchically structured for a reason; they do not have to be innate.

If this is the crux of the language instinct argument, it’s almost laughable how easily it falls. As Sampson notes, even Chomskers don’t think it carries weight.

Steven Pinker himself has suggested that nativist arguments do not amount to much. In a posting on the electronic LINGUIST List (posting 9.1209, 1 September 1998), he wrote: ‘I agree that U[niversal] G[rammar] has been poorly defended and documented in the linguistics literature.’ Yet that literature comprises the only grounds we are given for believing in the language universals theory. If the theory is more a matter of faith than evidence and reasoned argument even for its best-known advocate, why should anyone take it seriously? If it were not that students have to deal with this stuff in order to get their degrees, how many takers would there be for it? (166)

Even a blind squirrel finds a nut sometimes

The really sad thing is that Universal Grammar is the crux of the Chomskers argument. Sampson writes that “at heart linguistic nativism is a theory about grammatical structure.” (71) More importantly, it’s a theory that gathers all the “evidence” it thinks supports its beliefs and dismisses any that does not. It is Confirmation Bias 101.

But don’t take my word for it. Just before he knocks down the innatist belief that tree structures prove there’s a language instinct, Sampson points out that Chomskers don’t even know how to follow through with their own thoughts. He writes

Ironically, though, having been the first to realize that tree structure in human grammar is a universal feature that is telling us something about how human beings universally function, Chomsky failed to grasp what it is telling us. The universality of tree structuring tells us that languages are systems which human beings develop in the gradual, guess-and-test style by which, according to Karl Popper, all knowledge is brought into being. Tree structuring is the hallmark of gradual evolution. (141)

Hey-o!

So don’t violate or you’ll get violated

OK, right now the reader might think I’ve been too hard on Chomskers. Let me assuage your concerns. I’m a firm believer in treating people with the respect they deserve. So when I say that Chomskers have their heads stuck firmly up their own asses, it’s because saying “the facts don’t support their claims” is not what they deserve. A group of scientists that hates facts deserves derision. Researchers in every field use observable data to come to conclusions. Their publications are part of an ongoing debate among other researchers, who can support or refute their claims based on more data. Everyone plays by these rules because they are in everyone’s best interest. All infamous academic quarrels aside, Chomskers would prefer not to back up their claims with observable data or engage in any kind of debate with scientists. The bum on the street shouting that the world is going to end has the advantage of being bat-shit crazy. What’s Chomskers’ excuse?

I suppose they could say that they are well-established. But to my mind that just points to the reason for their unscientific behavior. What’s going to happen to those grants and faculty positions if people stop believing in Chomskers’ witchcraft? Sampson writes

“Nativist linguistics is now the basis of so many careers and so many university departments that it feels itself entitled to a degree of reverence. Someone who disagrees is expected to pull his punches, to couch his dissent in circumspect and opaquely academic terms – and of course, provided he does that, the nativist community is adept at verbally glossing over the critique in such a way that, for the general reader, not a ripple is left disturbing the public face of nativism. But reverence is out of place in science. The more widespread and influential a false theory has become, the more urgent it is to puncture its pretensions. Taxpayers who maintain the expensive establishment of nativist linguistics do not understand themselves to be paying for shrines of a cult: they suppose that they are supporting research based on objective data and logical argument.” (129)

Chomskers have been selling you snake oil for 60 years; they can’t give it up now. They have to double down. Now’s the time to really push the limits of decency in academia. Take a look:

“Paul Postal discusses in his Foreword the fact that my critique of linguistic nativism has been left unanswered by advocates of the theory. I am not alone there: various stories go the rounds about refusals by leading figures of the movement to engage with their intellectual opponents in the normal academic fashion, for fear that giving the oxygen of publicity to people who reject nativist theory might encourage the public to read those people and find themselves agreeing. […] The interesting point here is a different one. Nowhere in Words and Rules does Pinker say that he is responding to my objection. My book introduced the particular examples of Blackfoot and pinkfoot into this debate, and they are such unusual words that Pinker’s use of the same examples cannot be coincidence. He is replying to my book; but he does not mention me.” (127-8)

I don’t think I need to point out the shamefulness of such actions.

I read Steven Pinker and all I got was this lousy blog post

Reading Sampson after reading Pinker is a lesson in frustration, but not because of any problems with Sampson’s book. On the contrary, The Language Instinct Debate is very well written. Sampson not only clearly points out why Chomsky and Pinker’s theories are wrong, but he does so in a seemingly effortless way. Sometimes the effortlessness is obvious: Chomskers didn’t even look at the evidence, they just made something up and held out their hands. Sometimes the frustration comes from having wasted time reading Pinker’s 450-page sand castle, which Sampson crumbles in less than half as many pages. The Language Instinct Debate may leave you wondering how you ever thought Chomskers were on to something when Sampson makes the counter-evidence seem so blatantly obvious.

In the next and final post of this series, I’ll talk about some of the reviews and critics of Sampson’s book. For now, I’ll leave you with how Chomskers’ refusal to check the evidence or believe anyone who has, along with their outstretched hand and their demand that you believe them, has inspired me to write a book of my own. It’s called Paris is the Capital of Germany, China is in South America, and Other Reasons Why I Hate Maps.

It’s due out at the end of never because ugh.

References

Sampson, Geoffrey. 2005. The Language Instinct Debate. London & New York: Continuum.

Book Review: The Language Instinct by Steven Pinker

The following is a book review and the first post in a series. This post discusses Steven Pinker’s The Language Instinct. The second post discusses Geoffrey Sampson’s The Language Instinct Debate, which is a critique of Pinker’s book. The third post will discuss some of the critics and reviews of Sampson’s book.

In order to talk about Steven Pinker and linguistics, I first have to explain a bit about Noam Chomsky and linguistics. Chomsky started writing about linguistics in the 1950s and through sheer force became a major player in the field. This did not, however, mean that any of Chomsky’s theories carried weight. On the contrary, they were highly speculative and devoid of empirical evidence. Chomsky is the armchair linguist extraordinaire. The audacity of his theory, however, was that it proposed humans are born with something called Universal Grammar, an innate genetic trait that interprets the common underlying structure of all languages and allows us to effortlessly learn our first language. Extraordinary claims require extraordinary evidence, but it’s been over 50 years and the evidence has never come. On top of that, the linguist John McWhorter (who partly inspired this series of posts) has said that “There is an extent to which any scientific movement is partly a religion and that is definitely true of the Chomskyans.” As we’ll see, the analogy runs much deeper than that.

What you need to know for this review is that Steven Pinker is a Chomskyan. Therefore, this post will discuss not only The Language Instinct, but also the general theories behind it, since Pinker’s book is at the forefront of carrying on the (misguided) notions of Chomskyan linguistics. It’s not going to be pretty, but trust me, I know what I’m doing. To make things a bit easier on us all, instead of referring to Chomsky and Pinker and their cult followers separately, I’m going to call them Chomskers. (LOLcat says “meow”?).

Steven Pinker has got a bridge to sell you

On page 18, Pinker contrasts an innate origin of language with a cultural origin to define what he means by a language “instinct”:

Language is not a cultural artifact that we learn the way we learn to tell time or how the federal government works. Instead, it is a distinct piece of biological makeup of our brains. Language is a complex, specialized skill, which develops in the child spontaneously without conscious effort or formal instruction, is deployed without awareness of its underlying logic, is qualitatively the same in every individual, and is distinct from more general abilities to process information or behave intelligently. For these reasons some cognitive scientists have described language as a psychological faculty, a mental organ, a neural system, and a computational module. But I prefer the admittedly quaint term ‘instinct.’ It conveys the idea that people know how to talk in more or less the sense that spiders know how to spin webs.

It’s possible to deconstruct the incongruities of that passage, but that’s a job for another post (specifically, the one right after this, Sampson’s critique of Pinker). For now, just replace “language” in that passage with “making a sandwich” because to most linguists, the idea that our ability to make a sandwich is a “distinct piece of biological makeup of our brains” makes just as much sense as Pinker’s notion about language. So… Great argument, let’s eat!

Instead of focusing on the logical arguments that refute Pinker’s theory, what I want to discuss here is the frustration that comes from reading The Language Instinct and Chomskers literature when you know there are other more tenable theories out there.

Don’t drink the Kool-Aid

The first problem has to do with what I’ll call the Chomskers’ Leap of Faith. This involves the theory that there is an underlying structure common to all languages and that its form and logic are innate to the human brain. It is called Universal Grammar. In a sense, our brains give us a basic language structure that we can then extrapolate to our mother tongue, whatever that may be. To Chomskers, that is how people learn to speak so quickly – they already have the fundamental tool, or language instinct, needed to develop language.

How did Chomskers arrive at such a theory, you ask? Simple, they made it up. Universal Grammar was conjured out of thin air (i.e. Chomsky’s mind) and after five decades there is still no solid evidence of its existence. This is the leap of faith I’m talking about. A good example of it comes from two bullet points on page 409:

  • Under the microscope, the babel of languages no longer appear to vary in arbitrary ways and without limit. One now sees a common design to the machinery underlying the world’s languages, a Universal Grammar.
  • Unless this basic design is built in to the mechanism that learns a particular grammar, learning would be impossible. There are many possible ways of generalizing from parents’ speech to the language as a whole, and children home in on the right ones, fast.

These ideas are completely speculative (also known as “pure bullshit”), but they illustrate Pinker’s leap of faith and circular logic. He thinks that because kids speak, they must have Universal Grammar, and because they have Universal Grammar, they must speak. Chomskers love circular logic. It’s what their temple is built on. Pinker’s The Language Instinct is 450 pages of that kind of reasoning. Nothing in the 400 pages leading up to those bullets requires a belief in Universal Grammar; they’re just cherry-picked, misleading, or outright refuted studies.

And the Lord said unto Chomskers…

Another infuriating aspect of reading Chomskers is the pretentiousness of their prose. One gets the feeling of reading the Word of God (Noam Chomsky, to the Chomskers) sent down from on high. Instead of taking other theories into account, or even trying to prove why other theories are wrong, they simply dismiss them presumptuously. And they lead unsuspecting readers to do the same. Take this quote from page 39:

First, let’s do away with the folklore that parents teach their children language. No one supposes that parents provide explicit grammar lessons, of course, but many parents (and some child psychologists who should know better) think that mothers provide children with implicit lessons […] called Motherese.

Calling “Motherese” – which is a seriously studied and empirically proven phenomenon – “folklore” doesn’t make it so. Why Pinker would do such a thing seems strange at first, but you have to realize that that’s what Chomskers do. That is how they deal with other solid linguistic studies that have the possibility of refuting their claims (which, remember, have no empirical evidence). The attitude of contempt didn’t work for Noam Chomsky and it’s not going to work for Steven Pinker.

So why does he do it? As the linguist Pieter A. Seuren wrote in Western Linguistics: An Historical Introduction:

Frequently one finds [Chomsky] use the term ‘exotic’ when referring to proposals or theories that he wishes to reject, whereas anything proposed by himself or his followers is ‘natural’ or ‘standard’. […]
One further, particularly striking feature of the Chomsky school must be mentioned in this context, the curious habit of referring to and quoting only members of the same school, ignoring all other linguists except when they have been long dead. The fact that the Chomsky school forms a close and entirely inward looking citation community has made some authors compare it to a religious sect or, less damningly, a village parish. No doubt there is a point to this kind of comparison, but one should realize that political considerations probably play a larger part in Chomskyan linguistics than is customary in either sects or village parishes. (525)

The problem again lies in Chomskers’ impression that only their theory exists. The bored, novice, or uncritical reader – and, you know, anyone being tested on this book – is liable to take Pinker at face value. In Chapter 8, aptly titled “The Tower of Babel,” Pinker really lays on the God-given truth of Universal Grammar. He writes

What is most striking of all is that we can look at a randomly picked language and find things that can sensibly be called subjects, objects, and verbs to begin with. After all, if we were asked to look for the order of subject, object, and verb in musical notation, or in the computer programming language FORTRAN, or in Morse code, or in arithmetic, we would protest that the very idea is nonsensical. It would be like assembling a representative collection of the world’s cultures from the six continents and trying to survey the colors of their hockey team jerseys or the form of their harakiri rituals. We should be impressed, first and foremost, that research on universals of grammar is even possible!

Except we shouldn’t. Chomskers have been pulling their “theories” out of their collective asses for decades now. Why would anyone be impressed that “research” on something they made up is “possible”? Are you impressed with people in tin foil hats researching UFO landings? That’s not to mention the fact that we invented the concepts of “subject” and “verb” to apply to language, just like we invented “base 10” and “base 60” to apply to arithmetic. Looking for those bases in language would be nonsensical. But looking for something that could sensibly be called a base in any randomly picked counting system would be – shock! awe! – possible and completely unimpressive. Pinker does a disservice to the reader by equating the existence of something like nouns in all of the world’s languages to the “existence” of Universal Grammar. There is evidence for one, not the other. The Bible tells us that the world was created. That is a fact. The Bible also tells us that God created the world. That is a statement of belief.

In a footnote, Seuren quotes Pinker’s admiration for Chomsky and then says “It seems that Pinker forgot to take into account the possibility that there may also be valid professional reasons for uttering severe criticisms vis-à-vis Chomsky.” (526) In the same way that a Catholic priest is unlikely to quote from the Koran in his sermon, Chomskers will not address any other theories in their writing. That’s alright for a parish; it’s not alright for academia.

At this point you may be wondering how the Chomskers’ theories have survived for so long. It has to do with their outlandishness and their unwillingness to engage with critics. As Seuren notes, “And since no other school of linguistics would be prepared to venture into areas of theorizing so far removed from verifiable facts and possible falsification, the Chomskyan proposals could be made to appear unchallenged.” (284) By the time other linguists took note of what the Chomskers were up to, it was too late. They had already established their old boys’ club. What’s interesting is that linguists need not bother trying to tear down the Chomskers, since books like The Language Instinct demonstrate that the closer Chomskers try to bring their theory to verifiable facts, the more they falsify it. I don’t know if Pinker realized this, but writing about shit as if it were Shinola has never been a problem for Chomskers. In a subsection titled “No arguments were produced, just rhetoric,” Seuren writes,

Despite twenty-odd years of disparagement from the side of Chomsky and his followers, one has to face the astonishing fact that not a single actual argument was produced during that period to support the attitude of dismissal and even contempt that one finds expressed, as a matter of routine, in the relevant Chomsky-inspired literature. Quasi-arguments, on the contrary, abounded. (514)

Linguistics does not work that way. Good night!

I told you the religion analogy was going to be more appropriate than it seemed at first. Belief in Universal Grammar is very much like belief in a god – you can’t see it, but it’s there. But that’s not science! To some people, the sunrise is proof that god exists. To astronomers, the sun does not actually “rise”. To Chomskers, speech is proof that Universal Grammar exists. To linguists, speech does not require such a leap of faith.

With his hawkish proclamations of the existence of Universal Grammar and his complete dismissal of any criticism, Noam Chomsky has done more harm than good to linguistics. Seuren says that “this behavior on Chomsky’s part has caused great harm to linguistics. Largely as a result of Chomsky’s actions, linguistics is now sociologically in a very unhealthy state. It has, moreover, lost most of the prestige and appeal it commanded forty years ago.” (526)

In an ironic turn of events considering his liberal political leanings, Chomsky and his ilk have become the Fox News of linguistics – they pull their theories out of thin air, shout them at the top of their lungs, and ridicule any who say otherwise. And just like the scare tactics of Fox News, the idea of a language instinct sells. McWhorter quite politely explains the Chomskers’ zealotry by saying “they want to find [a language instinct], they’re stimulated by this idea – as far as the counter evidence, most of them are too busy writing grants to pay much attention.” But that’s being too kind. If you ask me, bullshitting is their business… and business is good.

All this is unfortunate

To sum up, is there a language instinct? Maybe. Does Steven Pinker present a valid case for a language instinct? No.

To return to our religious analogy, you can believe in the Christian god, or in Buddha, or in the Flying Spaghetti Monster and there’s nothing wrong with that. But you can’t prove any of these gods exist (apologies to the Pastafarians, who have presented some very compelling evidence). Neither can Chomskers prove that a language instinct exists. I suppose there’s nothing wrong with believing it does, but you better have some facts to back up your theory if you want others to follow. Smoke and mirrors are interesting when used in magic shows, but infuriating when used in academic prose.

With a sly patronizing of those who cannot put up with Chomsky’s dense prose and a crafty acknowledgement of Chomsky’s intellectual superiority, Pinker writes

And who can blame the grammarphobe, when a typical passage from one of Chomsky’s technical works reads as follows? […quotes some mumbo jumbo from Chomsky…] All this is unfortunate […] Chomsky’s theory […] is a set of discoveries about the design of language that can be appreciated intuitively if one first understands the problems to which the theory provides solutions. (104)

Pinker complains about others who seem to have not read Chomsky, but I get the sense that Chomsky is the only linguist Pinker has ever read. Because either Pinker knows of other linguistic theories and he’s not telling (i.e., he’s being deceptive) or he doesn’t know of them at all (i.e., he hasn’t done his research). Either way, it’s poor scholarship. As we’ll see in the next post, Pinker knows of Sampson’s theory and he uses examples from Sampson’s book without acknowledgment. That’s also poor scholarship, but of the kind that is common to Chomskers.

References

McWhorter, John. 2004. “When Language Began”. The Story of Human Language. The Great Courses: The Teaching Company. Course No. 1600.

Pinker, Steven. 1994. The Language Instinct: The New Science of Language and Mind. London: Penguin Group.

Seuren, Pieter A. M. 2004 (1998). Western Linguistics: An Historical Introduction. Oxford & Malden, MA: Blackwell.

Up next: A review of The Language Instinct Debate by Geoffrey Sampson.

 
[Update – This post originally had Noam Chomsky’s name written as “Chompsky”. Oops. Hehe. A word to the wise: Before adding words to your word processor’s dictionary, make sure they’re spelled correctly. Hat tip to Angela for pointing out the mistake.]

Book Review: The Secret Life of Pronouns by James W. Pennebaker

In the last paragraph of the first chapter of The Secret Life of Pronouns, James Pennebaker makes a confusing statement: “If you are a serious linguist, this book may disappoint or infuriate you.” This sounds discouraging, especially in these days of pop pseudoscience books, which are all theories and no facts. If Pennebaker is already throwing in the towel to “serious” readers, is it really worth reading on?

The statement is all the more confusing because of what precedes it in the preface. In an explanation of the purpose of his book, Pennebaker says that it is

organized around some of my favorite topics in psychology and the social sciences – personality, gender, deception, leadership, love, history, politics, and groups. The goal is to show how the analysis of function words [like pronouns, articles, and prepositions] can lead to new insights in each of these topics. At the same time, I want you to appreciate ways of thinking about and analyzing language. No matter what your personal or professional interest, I hope you come to see the world differently and can use this knowledge to better understand yourself and others […] Although the analysis of language is the focus of this book, it is really a work of psychology. Whereas linguists are primarily interested in language for its own sake, I’m interested in what people’s words say about their psychological states.

That’s what makes the part about infuriating serious linguists all the more confusing. You might think that Pennebaker is saying linguists can only be interested in language for its own sake (whatever that is), but his very next sentences state

Words, then, can be thought of as powerful tools to excavate people’s thoughts, feelings, motivations, and connections with others. With advancements in computer technologies, this approach is informing a wide range of scholars across many disciplines – linguists, neuroscientists, psycholinguists, developmentalists, computer scientists, computational linguists, and others.

So I’m guessing that Pennebaker is trying to guard his book against criticism from serious linguists, but while that may be the case, I think his worry is unfounded. It’s true that there are non-fiction books out there that should do themselves a favor and ask not to be read critically, but it is also true that there are some very interesting linguistics books that can be enjoyed by both the general public and serious linguists alike. Pennebaker’s book falls into the latter category. Even if I feel that some of his analyses call for more detail or data, The Secret Life of Pronouns is, after all, a book aimed at the general public, not a scholarly article. I would recommend this book to anyone, even serious linguists.

(As a side note, the more I use the term “serious linguist” the more I like it. It’s definitely going in the act, but I’m going to write it Serious Linguist.)

A final interesting thing about The Secret Life of Pronouns is that the accompanying website lets you take a few exercises to see what your words “reveal about you.” Most of them tell you what your own writing says about your personality, but you can also run an email conversation through the machine to see how in sync the people are, or check the personality behind a Twitter account. The results will straight up tell you not to take them too seriously, but I had the crystal ball check Rick Santorum’s Twitter account anyway. It said he rated low in every area of thinking style, but high in the “arrogant/distant” area of social style. Coincidence? You would think Rick Santorum could have hired someone to handle his Twitter account…

Up next: The Language Instinct by Steven Pinker, which is the first post in a series about this topic.

Autocorrected

James Gleick has a recent article in the New York Times about Autocorrect (“Auto Crrect Ths!” – Aug. 4, 2012), that bane of impatient texters and Tweeters everywhere. Besides recounting some of the more hilarious and embarrassing autocorrections out there, he deftly explains how Autocorrect works and how it is advancing as computers get better at making predictions.

But in the second to last paragraph, he missteps. He writes:

One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.

I don’t know whether Mr. Gleick’s writing was the victim of an editor trying to save space, but that seems unlikely, since there’s room on the internet for a bit of qualification – and a bit of qualification is all it would take to save these statements from being common cases of declinism. Let me explain.

“People who yesterday unlearned arithmetic” probably refers to the use of calculators. But I would hesitate to say that the power and ubiquity of modern calculators has caused people to unlearn arithmetic. Take a simple calculation such as 4 x 4. Anyone punching it into a calculator knows the arithmetic behind it. If the answer comes back as 0 or 8 or 1 or even 20, they are more than likely to realize something went wrong – namely, that they hit a wrong button somewhere along the way. Likewise, they know the arithmetic behind 231 x 47.06.

Mr. Gleick implies that the efficiency of calculators has caused people to rely too much on them. But this is backwards. The more difficult the calculations get, the more arithmetical knowledge a user is likely to have. Relying on a machine to tell me the square root of 144 doesn’t necessarily mean I have “unlearned” arithmetic. It only means that I trust the calculator to give me the correct answer to the problem I gave it. If I trust that I pressed the buttons in the right order, the answer I am given will be sufficient for me, even if I do not know how to work the problem out with pen and paper. I doubt any mathematicians out there are worried about “unlearning” arithmetic because of the power of their calculators. Rather, they’re probably more worried about how to enter the equations correctly. And just as I know 8 is not the answer to 4 x 4, they probably know x = 45 is not a solution of x² + 2x − 4 = 0.
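
If you want the same point in code rather than prose, here’s a toy sketch of my own (nothing to do with Mr. Gleick’s article): trusting the machine’s output still takes enough arithmetic to check a proposed answer against the original problem.

```python
# Toy sanity check: knowing the arithmetic means being able to tell that a
# proposed answer doesn't satisfy x² + 2x − 4 = 0, even without solving it by hand.
def satisfies_equation(x):
    return abs(x**2 + 2*x - 4) < 1e-9  # allow for floating-point rounding

print(satisfies_equation(45))           # False: obviously not a solution
print(satisfies_equation(-1 + 5**0.5))  # True: the actual positive root, −1 + √5
```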

Taking the analogy to language, we see the same thing. Not being able to spell quixotic, but knowing that chaotic is not the word I’m looking for, does not mean that I have lost the ability to spell. It merely means that I trust my Autocorrect to suggest the word I’m actually looking for. If it throws something else at me, I’ll consult a dictionary.

If the Autocorrect cannot give me the correct word I’m looking for because it is a recent coinage, there may not be a standard spelling yet, in which case I am able to disregard any suggestions. I’ll spell the word as I want and trust the reader to understand it. Ya dig?

None of the infamous stories of Autocorrect turning normal language into gibberish involve someone who didn’t know how to spell. None of them end with someone pleading for the correct spelling of whatever word Autocorrect mangled. As Autocorrect gets better, people will just learn to trust its suggestions more for words that are difficult to spell. This doesn’t mean we have lost the ability to spell. Spelling in English is a feat of memorization because English orthography is a notorious mess. If all I can remember is that the word I’m looking for has a q and an x in it, does that really mean I have unlearned how to spell, or just that I have forgotten the exact spelling of quixotic and am willing to trust Autocorrect’s suggestion?

Learning arithmetic is learning a system. Once you know how 2 x 2 works, you can multiply any numbers. English spelling is nowhere near as systematic as arithmetic, so Mr. Gleick’s analogy doesn’t really work for this reason either. But there is one thing that spelling and arithmetic have in common when it comes to computers: calculators and Autocorrect are only beneficial to those who already have at least a basic understanding of arithmetic and spelling. The advance of Autocorrect will have the same effect on people’s ability to spell as the advance of calculators did on people’s ability to do arithmetic – which is to say, hardly any effect at all.

By the way, I once looked up took (meaning the past tense of take) in a dictionary because after writing it I was sure that wasn’t the way to spell it. And that’s my memory getting worse, not my Autocorrect unlearning me.

[Update – Aug. 6, 2012] If our spelling really does go down the drain, it should at least make this kind of spelling bee more interesting (if only it were true).

Poetry and Prose, Computers and Code

Back in February, I analyzed WordPress’s automated grammar checker, After the Deadline, by running some famous and well-regarded pieces of prose through it. I found the program lacking. What I wrote was:

If you have understood this article so far, you already know more about writing than After the Deadline. It will not improve your writing. It will most likely make it worse. Contrary to what is claimed on its homepage, you will not write better and you will spend more time editing.

I think my test of After the Deadline proved its inadequacy, especially since I noticed that the program finds errors in its own suggestions. Talk about needing to heed your own advice…

A comment by one of the program’s developers, Raphael Mudge, however, got me thinking about what benefit (if any) automatic grammar checkers can offer. Mr. Mudge noted that the program was written for bloggers so running famous prose through it was not fair. He is right about that, but as I replied, the problem with automated grammar checkers really lies with the confidence and capability of writers who use them:

[The effect that computer grammar checkers could have on uncertain writers] is even more important when we think of running After the Deadline against a random sample of blog posts, as you suggest. While that would be more fair than what I did, it wouldn’t necessarily tell us anything. What’s needed is a second step of deciding which editing suggestions will be accepted. If we accept only the correct suggestions, we assume an extremely capable author who is therefore not in need of the program. As the threshold for our accepted suggestions lowers, however, we will begin to see a muddying of the waters – the more poorly written posts will be made better, but the more well written posts will be made worse. The question then becomes where do we draw the line on acceptances to ensure that the program is not doing more harm than good? That will decide the program’s worth, in my opinion.
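
Here’s a toy simulation of that trade-off, entirely my own invention rather than anything After the Deadline actually does: give each suggestion a confidence score, assume higher-confidence suggestions are more likely to be correct, and watch what happens as the acceptance threshold drops.

```python
# Toy model of the acceptance-threshold trade-off: lowering the bar accepts more
# suggestions, but a growing number of the accepted ones are wrong.
import random

random.seed(0)

suggestions = []
for _ in range(10_000):
    confidence = random.random()
    is_correct = random.random() < confidence  # assume correctness tracks confidence
    suggestions.append((confidence, is_correct))

for threshold in (0.9, 0.7, 0.5, 0.3):
    accepted = [ok for conf, ok in suggestions if conf >= threshold]
    good = sum(accepted)
    bad = len(accepted) - good
    print(f"threshold {threshold:.1f}: {good} genuine fixes, {bad} new errors")
```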

As it turns out, after that review of After the Deadline, I was contacted by someone from Grammarly, another automated grammar checker. For some reason they wanted me to review their program. I said sure, I’d love to, and then I promptly did nothing. In truth, I was sidetracked by other things – kids, work, beer, school, the NHL playoffs, more beer, and recycling. So much for that.

Now R.L.G. over at the Economist’s Johnson blog has a post about these programs and a short discussion of Ben Yagoda’s review of Grammarly at Lingua Franca, a Chronicle of Higher Education blog. I want to quickly review these posts and add to my thoughts about these programs.

First, R.L.G. rightly points out that “computers can be very good at parsing natural language, finding determiners and noun phrases and verb phrases and organising them into trees.” I’m happy to agree with that. Part-of-speech taggers alone are amazing and they open up new ways of researching language. But, as he again rightly points out, “Online grammar coaches and style checkers will be snake oil for some time, precisely due to some of the things that separate formal and natural languages.”
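
To make that concrete, here’s a minimal sketch of what a part-of-speech tagger does. NLTK is my choice of example toolkit (not R.L.G.’s), and the tags in the final comment are only illustrative:

```python
# A minimal POS-tagging sketch using NLTK. It assumes the relevant tokenizer and
# tagger resources have already been fetched with nltk.download().
import nltk

sentence = "Online grammar coaches and style checkers will be snake oil for some time."

tokens = nltk.word_tokenize(sentence)  # split the sentence into words and punctuation
tagged = nltk.pos_tag(tokens)          # label each token with a part-of-speech tag

print(tagged)
# Something like: [('Online', 'JJ'), ('grammar', 'NN'), ('coaches', 'NNS'), ('and', 'CC'), ...]
```

Tagging and tree-building are the parts computers do well; deciding whether a tagged sentence is good prose is the part the style checkers keep getting wrong.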

Second, Mr. Yagoda’s review of Grammarly is spot on. (I’m impressed by how much he was able to do with only a five-day trial. They gave me three months, Ben. Have your people call mine.) Not to take anything away from Mr. Yagoda, but reviewing these checkers is like shooting fish in a barrel because they’re pretty awful. A rudimentary understanding of writing is enough to protect you from their “corrections”. But it’s the lofty claims of these programs that make testing them irresistible to people like Mr. Yagoda and myself.

So who uses automated grammar checkers and who could possibly benefit from them? The answer takes us back to the confidence of writers. Obviously, writers like R.L.G. and Ben Yagoda are out of the question. As I noted in my comment to Mr. Mudge, the developer of After the Deadline, “a confident writer doesn’t need computer grammar checkers for a variety of reasons, so it’s the uncertain writers that matter. They may have perfect grammar, but be led astray by a computer grammar checker.” It’s even worse if we take into account Mr. Yagoda’s point that “when it comes to computer programs evaluating prose, the cards never tell the truth.”

We do not have computers that can edit prose, not even close. What we have right now are inadequate grammar checkers that may be doing more harm than good since the suggestions they make are either useless or flat out wrong. They are also being peddled to writers who may not be able to recognize how bad they are. So there’s a danger that competent but insecure writers will follow the program’s misguided attempts to improve their prose.

It’s strange that Grammarly would ask Mr. Yagoda or myself to review their program since Mr. Yagoda is clearly immune to the program’s snake oil charm and I wasn’t exactly kind to After the Deadline. But such bad business decisions might prove helpful for everyone. Respected writers will point out the inadequacy of these automatic grammar checkers, which will hopefully influence people to not use them. At the same time, until these programs can really prove their worth – or at least not make their inadequacy so glaringly obvious – they will not receive any good press from those who know how to write (nor will they get any from lowly bloggers like myself). In this case, any press is not good press since anyone reading R.L.G. or Ben Yagoda’s discussion of automated grammar checkers is unlikely to use one, especially if they have to pay for it.

[Update – Aug. 9, 2012] R.L.G. at Johnson, the Economist’s language blog that I linked to above, heard from Grammarly’s chief executive about what the program was meant for (“to proofread mainstream text like student papers, cover letters and proposals”). So he decided to put Grammarly through some more tests. Want to guess how it did? Check it.

The Power of Lexicographers

Over on Change.org, there is a petition to change the definition of marriage to “reflect the reality that there is only one kind of marriage — one between two loving adults, regardless of sexual orientation or gender identity.”

This petition highlights a few fascinating things about dictionaries and the power of lexicographers. There are, however, a few things to understand before we get into the harmless drudgery of what’s at stake here.

First, many dictionaries these days are written using a corpus, or a large data bank of texts. The words in the texts are tagged for their part of speech (noun, verb, etc.) to make the corpus more easily searchable. Lexicographers then use the corpora to not only help them define a word, but also (and this is key) to help them rank the different senses of each word’s definition. The more often a sense of a word is used, the higher it will be in the list. This is why Macmillan lists the “financial institution” sense of bank before the “raised area of land along the side of a river” sense.
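
To make the ranking step concrete, here’s a toy sketch of that frequency-based ordering. The corpus hits and sense labels are invented for illustration; real lexicographic corpora are tagged far more richly than this:

```python
# Toy sketch of corpus-based sense ranking: count how often each (pre-labelled)
# sense of "bank" turns up in a pretend corpus, then list the senses by frequency.
from collections import Counter

corpus_hits = [
    "financial institution", "financial institution", "side of a river",
    "financial institution", "side of a river", "financial institution",
]

sense_counts = Counter(corpus_hits)

# The most frequent sense comes out on top, which is why it gets listed first.
for sense, count in sense_counts.most_common():
    print(f"{sense}: {count}")
# financial institution: 4
# side of a river: 2
```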

That’s a very broad way to define what lexicographers do. If you want to know more, I recommend checking out Kory Stamper’s excellent blog, Harmless Drudgery. She is a lexicographer at Merriam-Webster and her posts are a joy to read. What’s important to know is that lexicographers try to be as impartial as possible and they use computers to help them with this. As Ms. Stamper notes in a post full of advice for budding lexicographers, “The number one rule of lexicography is you never, ever intentionally insert yourself into your defining. Your goal as a lexicographer is to write a definition that accurately and concisely conveys how a word is used without distracting the reader with humor.” Or, in this case, malice.

Second, the petition says that “Currently Dictionary.com has two separate definitions for the word marriage — one for heterosexual marriage, and one for same-sex marriage.” That’s not entirely true. Dictionary.com has at least ten definitions for marriage. What the petition is referencing is the two senses of the first definition of marriage. Here’s the screenshot:

Third, the definitions in Dictionary.com come from both “experienced lexicographers” and over fifteen “trusted and established sources including Random House and Harper Collins.” According to them, they are “the world’s largest and most authoritative online dictionary.”* The definition for marriage does not say which dictionary it is pulled from, so I think it’s safe to assume that the lexicographers at Dictionary.com wrote it. It doesn’t really matter, as this post is about lexicography as a whole.

Now that we have an idea about how dictionaries are written and what’s going on at Dictionary.com, we can see the curious nature of the petition. Dictionaries do not tell society how words are defined; rather, for the most part, it is the other way around. If you want to be pedantic about it, you could say that society and dictionaries inform each other. (Let’s not get into the whole prescriptive/descriptive nature and history of dictionaries, ok?) So at first the petitioner would seem to be mistaken.

And yet, he has a point. Here’s why.

The difference between a male/female marriage and a male/male or female/female marriage is just that: plus or minus a few letters on either side of the slash mark. No dictionary would list separate senses for marriages between Caucasians and African-Americans, or for those between a blue-eyed person and a green-eyed one, so why bother splitting the definition in terms of gender?

There is also the fact that dictionaries do have some authority. People could defend what’s currently in Dictionary.com’s definition of marriage by saying that it merely reflects the lexicographers’ research into how the word is used (which may be based on a corpus). But with over 100,000 signatures on the petition, dictionaries clearly mean more to people than just a reflection of how we use words. In fact, Stephen Colbert – no stranger to defining words – mentioned this issue on his show in 2009 when he noted that Merriam-Webster’s had included the “same-sex” sense of marriage in a 2003 update to its dictionary. When lexicographers define words, people notice (after six years).

On the other hand, 100,000 speakers of English equates to anywhere from 0.03% to 0.005% of the total population of English speakers worldwide (wildly speculative numbers based on everything from Ethnologue’s estimate of primary speakers to Britannica’s estimate of total speakers). Either way, that’s nowhere near a majority. We should be happy the “same-sex” sense of marriage is in there at all.
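
For the curious, the arithmetic behind those percentages is nothing fancier than signatures divided by speakers. The speaker totals below are rough figures back-derived from the ranges above, not exact Ethnologue or Britannica numbers:

```python
# Back-of-the-envelope version of the percentages above (the speaker totals are
# rough stand-ins, not exact Ethnologue or Britannica figures).
signatures = 100_000
primary_speakers = 335_000_000    # very roughly, primary (first-language) speakers
total_speakers = 2_000_000_000    # very roughly, total speakers worldwide

print(f"{signatures / primary_speakers:.2%}")  # ~0.03%
print(f"{signatures / total_speakers:.3%}")    # ~0.005%
```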

And then there’s the fact that native speakers do not need a dictionary to define marriage for them. If I told another native speaker over the age of fourteen that Adam and Steve got married, they would understand what I meant and, depending on their political bent, view this as, well, however they wanted to.

But this points to the dilemma that lexicographers face. In my mind, putting the “same-sex” sense of marriage second does not amount to a “brush off” or “blurb” as the petition would have us believe. I wouldn’t accuse lexicographers of doing either for any word in a dictionary, but I would assume they had a good reason to separate the two meanings; namely, the separate but similar (not equal) uses of the word. And yet, some people will take offense because marriage is a hot-button issue in the United States right now. Lexicographers are like referees in at least one way: someone is always going to hate them.

The lexicographers for Dictionary.com were most likely well aware that some people may take offense to how they defined marriage, but what were they supposed to do?

Here’s how some other dictionaries handled marriage:

    Macmillan left gender out of the definition, saying just “the relationship between two people who are husband and wife.”

    Merriam-Webster is in the same boat as Dictionary.com, separating the senses in a very similar way.

    The American Heritage Dictionary included the “same-sex” sense in the first sense, with an explanation that it applies only “in some jurisdictions.”

    Oxford English Dictionary included a note on how the term is “sometimes used” today (screenshot below, since it’s behind a paywall):

Would any of these satisfy everyone? More importantly, do we really want our lexicographers using politics to define words? Haven’t they got enough on their desks already?

Finally, it’s worth noting that the Collins Dictionary defines marriage in the first sense as “the state of being married; relation between spouses; married life; wedlock; matrimony.” It makes you wonder why Dictionary.com didn’t use that definition and call it a day.

*My apologies to the lexicographers and wordsmiths who just spit their drinks all over their computer screens. That “most authoritative” part was apparently not a joke. Now, let’s all pick our jaws up off the floor and go back up to where we were.

Book Review: The Great Influenza by John M. Barry

Here are the first three paragraphs of The Great Influenza by John M. Barry:

The Great War had brought Paul Lewis into the Navy in 1918 as a lieutenant commander, but he never seemed quite at ease when in his uniform. It never seemed to fit quite right, or to sit quite right, and he was often flustered and failed to respond properly when sailors saluted him.
Yet he was every bit a warrior, and he hunted death.
When he found it he confronted it, challenged it, tried to pin it in place like a lepidopterist pinning down a butterfly, so he could then dissect it piece by piece, analyze it, and find a way to confound it. He did so often enough that the risks he took became routine.

Sounds pretty nails, right? Later, Barry relates the gravity of the subject matter:

And they died with extraordinary ferocity and speed. Although the influenza pandemic stretched over two years, perhaps two-thirds of the deaths occurred in a period of twenty-four weeks, and more than half of those deaths occurred in even less time, from mid-September to early December 1918. Influenza killed more people in a year than the Black Death of the Middle Ages killed in a century; it killed more people in twenty-four weeks than AIDS has killed in twenty-four years.

What. The. Fuck.

Events in the story of the great influenza of 1918 do not get better from there. But Barry’s writing conveys the tense and terrifying nature of what life must have been like then. To wit: in Philadelphia, things got so bad that people began to steal caskets. And then things got worse:

There were soon no caskets left to steal. Louise Apuchase remembered most vividly the lack of coffins: “A neighbor boy about seven or eight died and they used to just pick you up and wrap you up in a sheet and put you in a patrol wagon. So the mother and father screaming, ‘Let me get a macaroni box’ [for a coffin] – macaroni, any kind of pasta, used to come in this box, about 20 pounds of macaroni fit in it – ‘please please let me put him in the macaroni box, don’t take him away like that…’”

How I didn’t know about the 1918 influenza pandemic before reading this book is a mystery to me. Fortunately though, Barry’s book grounds the harrowing tales in an account of the politics, the society, and most importantly, the state of medicine at the time. This allows Barry to show that while humans were slaughtering each other in unprecedented ways in World War I, nature was doing just the same with much better results because nature’s battlefield was not limited to the fields of Europe.

The contrast between the worlds of medicine, politics, and society on one side and the great influenza on the other is sharp, especially since the state of medicine in the decades leading up to the pandemic might be scarier than the influenza itself. In the United States, medical students did not even learn how to use microscopes. They spent eight months in medical school, graduated sometimes without grades or by passing only four of their nine courses, and were unleashed to “practice” medicine on society (since they certainly didn’t practice any of it in the university). These contrasting worlds pay off for the reader when Barry shows how the worlds of politics and society that we think we live in are subject to the terrors that nature occasionally wreaks.

Here is another sobering passage from the book, just in case you thought the terror was confined to Philadelphia:

During the course of the epidemic, 47 percent of all deaths in the United States, nearly half of all those who died from all causes combined – from cancer, from heart disease, from stroke, from tuberculosis, from accidents, from suicide, from murder, and from all other causes – resulted from influenza and its complications. And it killed enough to depress the average life expectancy in the United States by more than ten years.

All shock-and-awe stories aside, it is Barry’s writing style that makes this book an excellent read. He manages to balance the sober facts and the gravity of the situation with the importance and intelligence of the scientists fighting the flu. And he is not afraid to adorn his prose with some philosophical lessons for our day:

Man might be defined as “modern” largely to the extent that he attempts to control, as opposed to adjust himself to, nature. In this relationship with nature, modern humanity has generally been the aggressor and a daring one at that, altering the flow of rivers, building upon geological faults, and, today, even engineering the genes of existing species. Nature has generally been languid in its response, although contentious once aroused and occasionally displaying a flair for violence.
By 1918 humankind was fully modern, and fully scientific, but too busy fighting itself to aggress against nature. Nature, however, chooses its own moments. It chose this moment to aggress against man, and it did not do so prodding languidly. For the first time, modern humanity, a humanity practicing the modern scientific method, would confront nature in its fullest rage.

The second reason I thoroughly enjoyed this book was its span. As I said earlier, in order to talk about the influenza epidemic of 1918, Barry is forced to discuss the history of medicine, the politics and society of the time, and how these relate to today. He addresses each in a clear and concise way that made it easy to balance them in my mind while following the story. They helped to inform the narrative instead of impinging on it.

Finally, I enjoyed how relevant this topic is these days. I may not have heard about this particular flu epidemic (still not sure how that’s possible), but I have heard about another flu that has people excited. The H5N1 flu strain has people worried that Barry might soon be given enough material to write a sequel, especially since two groups of researchers have published work on how this strain might potentially infect humans on a large scale. If you haven’t heard of this, the always great Carl Zimmer has the rundown here and here.

Barry’s book ends with a discussion of influenza today and H5N1 in particular. In the afterword to my 2005 edition, he asks three questions:

    1. Will another influenza pandemic occur?
    2. If so, how dangerous will it be and what threat does the H5N1 virus present?
    3. How prepared are we for it and what can we do to better prepare?

The short answers to these are: yes; best case, 2–7.4 million dead; and “at this writing, we are not prepared. At all.”

The afterword was a nice addition to help frame my thoughts on the issue. It made the book more than just a history. But if you read this book and you read the news, I don’t think you will be able to refrain from comparing the events of 1918 to today. And that is another reason I highly recommend this book.

 
 

Up next: A very lengthy review of James W. Pennebaker’s The Secret Life of Pronouns: What Our Words Say About Us.

Whatever Happened to Innovation in the USA?

Or, why the difference between innovation from above and innovation from below matters

In a recent interview on NPR’s Science Friday program, everyone’s favorite astrophysicist* Neil deGrasse Tyson talked about innovation. Tyson has written a new book which discusses, among other things, the way that society benefited from innovation in the space race of the 1960s. With this post I want to tell you about my own experience with innovation in the USA and the two different types of innovation there are, an idea that should be raised more often.

Tyson argues that the innovation needed in space travel – the innovation needed to go farther and farther every single day – brought untold benefits to society through the engineers it needed, the products it created (which were applied elsewhere), the economy it stimulated, etc. He has a point, but I’m interested to see whether he mentions that the reason society benefits from such innovation is that it is the type of innovation writ large over society. The space race was innovation on a large scale. It was the driving force (and in some ways the weapon) of the Cold War. Hence everyone in society was indebted to this large-scale innovation. When everyone has a stake in the innovation of a country, as they did in the space race, there is a collective agreement on the benefit of innovation. It doesn’t matter what it is, so long as it is innovative.

Innovation on a small scale, however, is a different story. Innovation from below, as it could be called, takes an entirely different mindset. In business, it sometimes comes out of necessity – innovate or go bust. This is similar to the innovation of the space race. But micro-innovation (let’s settle on this term, shall we?) also comes about unforced. Sometimes a clever person, whose business is more or less fine the way it is, recognizes the benefits of an innovative idea and, to use the official business-speak term, capitalizes on it. More often than not, however, micro-innovation is passed over. Allow me to offer an example.

I used to work for a company in the US. This company had a program to welcome new employees into the fold. The program was called something like Welcome, New Employees, Into The Fold™. In this program, new employees were asked to read a chapter from a best-selling business how-to manifesto. The chapter talked about the real-life innovative leader who built an innovative tech company on innovation. Naturally, I assumed the take-away message was supposed to be “Innovation. We like. So should you.” I was informed later that it was “Do as we say, not as our favorite manifesto chapter tells you.” Makes you wonder why they bothered to waste the paper, but then again, that’s what manifestos are all about.

Shortly after I left this company (on good terms), I offered them an innovative way to increase their sales. I would use my training in linguistics to study their marketing campaigns, and I would do it for free. The benefit for me was that my research would allow me to write my master’s thesis. It was a win-win. (I’m intentionally being vague about my master’s thesis since, so far as I can tell, it really is innovative. It’s at least the kind of research that could launch a career in either business or academia, depending on the results. Interested parties can feel free to contact me.)

And yet, like most micro-innovation cases, my idea was denied. It’s hard to believe, I know, but there are some obvious answers as to why. First, business professionals are a cautious-to-cowardly bunch. If you told them the odds that they would be killed in a car accident on the way to work, they would find a reason to work from home. So when faced with the opportunity to increase their sales by doing nothing but allowing a postgrad student to analyze their marketing texts, they find ways to say no, to brush it off, or to disregard it. Creating one’s own misfortune is not unheard of, even in the business world.

A second reason why my idea was turned down has to do with the “business as usual” mindset. My former employer makes millions each year. They fear change because they assume it’s going to be change for the worse. More importantly, while there’s no telling what kind of profit my innovation could have brought them, it’s safe to assume it would have been in the thousands of dollars. That’s chump change for mid-sized American companies. Why should they take on my idea when business as usual is already bringing in millions?

Finally, and most relevant to the macro-innovation that Mr. Tyson talked about, is the fact that micro-innovation is not established in the US. There is no culture of postgraduate students doing research for companies to complete their degrees. There is a culture of small innovations making big waves, but these are all either start-ups or internal happenings at large companies (like 3M). There is no zeitgeist of micro-innovation, no pressure from society to create it day after day, and no agreement that it brings untold benefits to those who seize it. Yet it comes up all the time.

This last notion is related to the type of micro-innovation that comes out of necessity, because companies can live or die on it. Most people with an innovative idea have a very good reason for why it will be successful. If one company turns them down, they are not likely to give up on the idea. They are simply going to move on to the next company. And that spells danger for the companies who passed on the innovation. So alongside the companies living by the “be innovative or die” motto, there are also those remembered by the “we died because we were not innovative” warning.

And nobody writes chapters in business books about those companies.


* Except maybe this guy.

Book Review: Babel No More by Michael Erard

In Babel No More, Michael Erard goes on a search for hyperpolyglots – people who are said to speak over six languages. But he also wants to know about the legendary hyperpolyglots who are rumored to speak more than 50, 60, or even 70 languages. You can therefore understand why I approached this book with a healthy dose of skepticism. Do I believe one person can speak 70 languages? No. Do I believe that other people believe that someone can speak 70 languages? Yes. After all, that’s where the legends of these hyperpolyglots came from. But I’ve had training in linguistics. Language is a trickier subject for me.

I worried that Erard would not approach the subject of hyperpolyglots with as much skepticism as I did. Fortunately, I was wrong. Erard is on point about the nature of language learners, separating fact from fiction in the legend of the hyperpolyglot:

The hyperpolyglot embodies both of these poles: the linguistic wildness of our primordial past and the multilingualism of the looming technotopia. That’s why stories circulate about this or that person who can speak an astounding number of languages – such people are holy freaks. Touch one, you touch his power. […] Once you say you speak ten languages, you’ll soon hear the gossip that you speak twenty or forty. That’s why people who speak several languages have been mistrusted as spies; people wonder where their loyalties lie.


Needless to say, Erard finds many interesting characters in his search for hyperpolyglots. It’s part of what makes the book so interesting. But he also gets into how language works. And, thankfully, he doesn’t do it in a best-seller pop-science fascinating-but-total-bullshit way. I can’t tell you how refreshing that is, but I can give you a sample quote:

Language, however, encompasses more than the communicating we sometimes do with it. If language had evolved solely for the means of communication, we’d rarely misunderstand each other. Instead, we have a system in which words mean more than one thing, in which one can devise many sentences to capture the same idea, in which one moment of silence means more than a thousand pictures. No animal species could survive this intensity of ambiguity. Moreover, people don’t appreciate how little of our meaning is in our words, even as we decipher hand gestures, facial movements, body postures automatically every day. What we mean is implied by us and then inferred by our listeners.

The real beauty of Babel No More, however, is the way in which Erard demystifies the process of language learning without removing the wonderment that people attribute to it. If anything, when you better understand the multifaceted nature of language learning, you will appreciate it more, while at the same time avoiding being fooled into thinking someone could speak 70 languages or that someone who speaks more than three must be a spy.

 
 

Up next: The Great Influenza by John M. Barry.

Santorum Samples Vol. 4

Santorum Samples are passages from Rick Santorum’s It Takes a Family. They are deliberately taken out of context in an attempt to show that Mr. Santorum is a rational human being. Because we all know that when viewed in context, Rick Santorum is a jackass. Find more samples here or just click on the tab above.

From p. 37:

“Irving Kristol once wrote that the most subversive question that can be posed to civilization is: why not? Thanks to the decision of four activist judges in Massachusetts, that is a question we now must face as Americans. In order to answer the question of “Why not?” with respect to same-sex marriage, we have to come to a fuller understanding of what marriage is. Is it simply about publicly honoring a romantic attachment? That’s what the highest court in Ontario, Canada, believes. Just in time for the June wedding season of 2003 that court wrote, as it ruled in favor of same-sex marriages:

Marriage is, without dispute, one of the most significant forms of personal relationship…. Through the institution of marriage, individuals can publicly express their love and commitment to each other. Through this institution, society publicly recognized expressions of love and commitment between individuals, granting them respect and legitimacy as a couple.*

Marriage in this view means nothing more to society, to what we are as a people, and to our future, than making people feel accepted. The state’s interest in promoting and stamping approval upon a marriage starts and stops with tolerance, and therefore it is meaningless.”

Lessons learned:

1. Here’s the game show that is playing in Rick Santorum’s head:

Welcome to Who Defines Marriage?!, the game show where one lucky winner gets to define marriage for the rest of the country! Let’s meet our contestants. Hailing from lovely Everywhere, USA, is Society! Society is reasonably conflicted over this, but no matter – the part of it that opposes same-sex marriage will be dead soon! Next up, we have the Courts! With a flair for the dramatic and a style that is both comfortable and classy, the Courts usually consider their best trait to be the ability to grant equality to all. And last but not least, we have Rick Santorum! Rick believes he deserves to define marriage because, hey, what he thinks is good for (his) God is damn well good for everyone else – whether they like it or not! … And the winner is Rick Santorum!

2. Tolerance makes things meaningless. Sooooo…. Fuck you to all the non-white, non-straight, non-Christian, non-male people out there. Society’s tolerance of you is ruining Rick Santorum’s meaning.

3. Blame Canada!
 

 

 

*Also, we here at …And Read All Over would like to send a warm Fuck yeah! to the highest court in Ontario.