Oh Sidney, you wag

A linguistics professor lecturing at Oxford explained that although there are many languages in which a double negative implies a positive, there is no language in which a double positive implies a negative.

He was interrupted by legendary philosopher Sidney Morgenbesser, who piped up dismissively from the audience, “Yeah, yeah.”

Computer Learns to Tell Dirty Jokes

Finally, a computer capable of passing MY Turing test. Two computer scientists at the University of Washington are developing an algorithm that teaches a computer to make “That’s what she said” jokes:

They then evaluated nouns, adjectives and verbs with a “sexiness” function to determine whether a sentence is a potential TWSS. Examples of nouns with a high sexiness function are “rod” and “meat”, while raunchy adjectives are “hot” and “wet”.

Their automated system, known as Double Entendre via Noun Transfer or DEviaNT, rates sentences for their TWSS potential by looking for particular elements such as nouns that can be interpreted in multiple ways. The researchers trained DEviaNT by gathering jokes from twssstories.com and non-TWSS text from sites such as wikiquote.org.

First, the name is outstanding. They report 70% accuracy, but they expect the number to improve with more data to draw on. Of course, 70% success isn’t that bad considering how often my friends make questionable TWSS jokes. (Hint: if you have to explain the context in which she said it, you’ve probably failed.)
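
For fun, here’s a toy sketch of how a keyword-based “sexiness” score might work. To be clear, this is my own illustration and not the researchers’ actual DEviaNT code: the word lists, weights, and threshold are all invented, and the real system draws on far more features and training data.

import string
# Hypothetical word lists: the write-up cites "rod", "meat", "hot", and "wet" as
# examples, but these particular sets are made up for illustration.
SEXY_NOUNS = {"rod", "meat", "banana"}
SEXY_ADJECTIVES = {"hot", "wet", "hard"}
def twss_score(sentence):
    """Crude 'sexiness' score: count words that land on the lists above."""
    words = [w.strip(string.punctuation) for w in sentence.lower().split()]
    noun_hits = sum(w in SEXY_NOUNS for w in words)
    adjective_hits = sum(w in SEXY_ADJECTIVES for w in words)
    return 1.0 * noun_hits + 0.5 * adjective_hits  # noun weighting is arbitrary
def is_potential_twss(sentence, threshold=1.0):
    """Flag a sentence as a possible TWSS setup if its score clears the threshold."""
    return twss_score(sentence) >= threshold
print(is_potential_twss("That rod is way too hot."))  # True  (score 1.5)
print(is_potential_twss("Please pass the stapler."))  # False (score 0.0)

Where this sketch just counts keywords against a hand-written list, the actual DEviaNT system was trained on real examples: jokes gathered from twssstories.com and non-TWSS text from sites like wikiquote.org.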

In case this isn’t your particular style of humor, they’re hoping it’ll be able to learn new types of humor based on the metaphor mapping.

Suddenly “I can’t let you do that, Dave” has a whole new meaning…

The Perils of Metaphorical Thinking

(cross-posted at Rationally Speaking)

For an organ that evolved for practical tasks like avoiding predators, finding food, and navigating social hierarchies, the human brain has turned out to be surprisingly good at abstract reasoning. Who among our Pleistocene ancestors could have dreamed that we would one day be using our brains not to get an apple to fall from a tree, but to figure out what makes the apple fall?

In part, that’s thanks to our capacity for metaphorical thinking. We instinctively graft abstract concepts like “time,” “theories,” and “humor” onto more concrete concepts that are easier to visualize. For example, we talk about time as if it were a physical space we’re traveling through (“We’re approaching the end of the year”), a moving entity (“Time flies”) or as a quantity of some physical good (“We’re running out of time”). Theories get visualized as structures — we talk about building a case, about supporting evidence, and about the foundations of a theory. And one of my favorite metaphors is the one that conceives of humor in terms of physical violence. A funny person “kills” us or “slays” us, witty humor is “sharp,” and what’s the name for the last line of a joke? The “punch” line.

Interestingly, a lot of recent research suggests that these metaphors operate below the level of conscious thought. In one study, participants who were asked to recall a past event leaned slightly backwards, while participants who were asked to anticipate a future event leaned slightly forwards. Other studies have shown that our metaphorical use of temperature to describe people’s demeanors (as in, “He greeted me warmly,” or “He gave me the cold shoulder”) is so deep-seated, we actually conflate the two. When people are asked to recall a time when they were rejected by their peers, and then asked to estimate the temperature of the room they’re sitting in, their average estimate is several degrees colder than that of people who were asked to recall being welcomed by their peers. And in one study that asked participants to read the dossier of an imaginary job applicant and then rate his or her personality, participants who had just been holding a hot object rated the imaginary applicant as being friendlier, compared to participants who had just been holding a cold object.

Another classic example is the “morality is cleanliness” metaphor. We talk about people having a clean record or a tarnished one, about dirty dealings and coming clean. And of course, religions are infused with this metaphor — think of baptism, the washing away of sin. One clever study published in Science in 2006 showed how deep-seated this metaphor is by dividing participants into two groups: those in the first group were asked to reflect on something virtuous they’d done in the past, and those in the second group were asked to reflect on a past moral transgression. Afterwards, each participant was offered a token gift of either a pencil or a package of antiseptic wipes. The result? Those who had been dwelling on their past wrongdoing were twice as likely to ask for the antiseptic wipes.

Associating the future with the forward direction and the past with the backwards direction seems pretty harmless. But cases like “morality equals cleanliness” start to suggest how dangerous metaphorical thinking can be. If people conflate dirtiness with immorality, then the feeling of “Ugh, that’s disgusting” becomes synonymous with the judgment, “That’s immoral.” Which is likely a reason why so many people insist that homosexuality is wrong, even though they can’t come up with any explanation of why it’s harmful — any non-contrived explanation, at least. As the research of cognitive psychologist Jonathan Haidt has shown, people asked to defend their purity-based moral judgments reach for logical explanations, but if they’re forced to admit that their explanation has holes, they’ll grope for an alternative one, rather than retracting their initial moral judgment. Logic is merely a fig leaf; disgust is doing all the work.

Although I haven’t seen any studies on it yet, I’m willing to wager that researchers could demonstrate repercussions of another common metaphor: the “argument is war” metaphor, which manifests in the way we talk about “attacking” an idea, “shooting down” arguments, and “defending” a position. Thinking of arguments as battles comes with all sorts of unhelpful baggage. It’s zero-sum, meaning that one person’s gain is necessarily the other’s loss. That precludes any view of the argument as a collaborative effort to find the truth. The war metaphor also primes us emotionally, stimulating pride, aggression, and the desire to dominate — none of which are conducive to rational discussion.

So far I’ve been discussing implicit metaphors, but explicit metaphors can also lead us astray without us realizing it. We use one thing to metaphorically stand in for another because they share some important property, but then we assume that additional properties of the first thing must also be shared by the second thing. For example, here’s a scientist explaining why complex organisms were traditionally assumed to be more vulnerable to genetic mutations, compared to simpler organisms: “Think of a hammer and a microscope… One is complex, one is simple. If you change the length of an arbitrary component of the system by an inch, for example, you’re more likely to break the microscope than the hammer.”

That’s true, but the vulnerable complexity of a microscope isn’t the only kind of complexity. Some systems become more robust to failure as they become more complex, because of the redundancies that accrue — if one part fails, there are others to compensate. Power grids, for example, are built with more power lines than strictly necessary, so that if one line breaks or becomes overloaded, the power gets rerouted through other lines. Vulnerability isn’t a function of complexity per se, but of redundancy. And just because an organism and a microscope are both complex, doesn’t mean the organism shares the microscope’s low redundancy.

Abstinence-only education is a serial abuser of metaphors. There’s one particularly unlovely classroom demonstration in which the teacher hands out candies to her students with the instructions to chew on them for a minute, then spit them back out and rewrap them. She then collects the rewrapped candies in a box and asks a student if he would like to pick one out and eat it. Disgusted, of course, he declines. The message is clear: no one wants “candy” that’s already been tasted by someone else.

In this case, there’s a similarity between the already-chewed candy and a woman who has had previous lovers: both have already been enjoyed by someone else. It’s evident why the act of enjoying a piece of candy diminishes its value to other people. But it’s not evident why the act of sexually enjoying a woman diminishes her value to other people, and no argument is given. The metaphor simply encourages students to extrapolate that property unquestioningly from candy to women.

I came across a great example of misleading metaphors recently via Julian Sanchez, who was complaining about the way policy discussions are often framed in terms of balancing a scale. People will talk about “striking a balance” between goods like innovation and stability, efficiency and equality, or privacy and security. But the image of two goods on opposite ends of a balance implies that you can’t get more of one without giving up the same amount of the other, and that’s often not true. “In my own area of study, the familiar trope of ‘balancing privacy and security’ is a source of constant frustration to privacy advocates,” Sanchez says, “because while there are clearly sometimes trade-offs between the two, it often seems that the zero-sum rhetoric of ‘balancing’ leads people to view them as always in conflict.”

Sometimes, the problem with a metaphor is simply that it’s taken too literally. More than once in a discussion about the ethics of eating animals, someone has told me, “Plants want to live just like animals do, so how can vegans justify eating plants?” The motivation for this point is obvious: if you’re going to be causing suffering no matter what you eat, then you might as well just eat whatever you want. I thought it was risible when I first heard it, but astonishingly, this argument has appeared in the New York Times twice in recent memory — once in December 2009 (“Sorry Vegans: Brussels Sprouts Like to Live, Too”) and then again in March 2011 (“No Face, but Plants Like Life Too”). The articles employ a liberal amount of metaphorical language in describing plants: They “recognize” different wavelengths of light, they “talk” and “listen” to chemical signals, and a plant torn from the ground “struggles to save itself.”

It’s evocative language, but it doesn’t change the fact that plants lack brains and are therefore not capable of actually, non-metaphorically wanting anything. In fact, we use the same sort of language to talk about inanimate objects. Water “wants” to flow downhill. Nature “abhors” a vacuum. A computer “reads” a file and stores it in its “memory.” Since our ancestors’ genetic fitness depended on their sensitivity to each other’s mental states, it feels very natural for us to speak about plants or inanimate objects as if they were agents with desires, wills, and intentions. That’s the kind of thinking we’re built for.

In fact, even the phrase “built for” relies on the implication of a conscious agent doing the building, rather than the unconscious process of evolution. Metaphors are so deeply embedded in our language that it would be difficult, if not impossible, to think without them. And there’s nothing wrong with that, as long as we keep a firm grasp — metaphorically speaking — on what they really mean.

Sarcasm, Hidden Meanings, and Politeness

So much of communication is not about what we say directly, but about the implications of how we choose to convey the information. Most of my day job revolves around crafting a sentence’s literal content so that the audience/readers will most likely understand my intended, implied message.

What fascinates me is how easily we can understand a person’s intended message from even drastically different literal content.

Take the sentence “Your dog is very happy right now.” The literal meaning is obvious: the dog is happy! But what if it came right after you asked your friend, “What happened to my roast beef sandwich?” Suddenly, the intended message changes: the treacherous dog ate your sandwich! We’re able to draw the correct implication, but how?

More: Grice’s conversational maxims and implied meanings.

Bisesquiquinquenniums and People’s 10 Favorite Words

Julia and I were raised in a household in which our dad would gleefully use words like “Bisesquiquinquenniums” (see if you can parse it, answer is beneath the fold). One iconic memory we have is of Dad running down the stairs excitedly saying, “Kids! The Occultation of Regulus is at hand!” (Yes, it’s a real astronomical occurrence.)

So it was with particular appreciation that I came across the Merriam-Webster list of People’s Top 10 Favorite Words:

1 ) Defenestration: a throwing of a person or a thing out of a window; or a usually swift expulsion or dismissal
2 ) Flibbertigibbet: a silly flighty person
3 ) Kerfuffle: disturbance; fuss
4 ) Persnickety: fussy about small details; fastidious
5 ) Callipygian: having shapely buttocks
6 ) Serendipity: luck that takes the form of finding valuable or pleasant things that are not looked for
7 ) Mellifluous: having a smooth rich flow
8 ) Discombobulated: upset; confused
9 ) Palimpsest: writing material used one or more times after earlier writing has been erased; or, something with diverse layers or aspects apparent beneath the surface
10 ) Sesquipedalian: long; characterized by the use of long words

All of these words are excellent. The last one would be a clue about “Bisesquiquinquennium”, except that the short definition doesn’t include the literal meaning.

Map and Territory: Navigating Language

Three philosophy grad students were stranded on a deserted island. They started wandering around exploring, making a map of the territory. To make it easier to talk about, they labeled the northern part of the island “Section A” and the southern part “Section B”, writing it in big letters on the top and bottom of the map.

After exploring a bit, Chris called out excitedly. “I found a radio in Section A! Check it out, we’re saved!” His friends came running.

“This is in Section B, not Section A,” said Bruce. “It’s south of the tree line, which is the obvious division between north and south.”
“Of course it’s Section A,” replied Alice. “This is north of the river, which is the way to divide the island.”

Chris shrugged. “I guess we never decided exactly what the border was; I just assumed we were using the river. It’s not like the radio moves based on what section we call this. We agree that it’s north of the river and south of the trees. It’s Section A if we use the river, Section B if we use the forest. Let’s just decide to use one or the other. Neither way is ‘right’ or ‘wrong’; we’re making them up.”

Alice and Bruce weren’t buying it.

“What do you mean, we’re just making it up? The forest is real and the river is real. One of them makes the real boundary between Section A and Section B!”

Chris sat down to use the radio to call for help, leaving his two friends to their bickering.


Alice and Bruce were experiencing the “map and territory” confusion. A map is a mind-made categorization of real things in the territory. It doesn’t make sense to say that the decision to divide the territory in one way or another is “right” or “wrong”, only more or less useful in different contexts.

This comes up all too often in language. Like sections on a map, words are societal tools we use to categorize and communicate the real things we experience. Our society has some well-defined words like ‘hydrogen’ – we have a good shared understanding of exactly which conditions must be met to determine whether or not we should call something ‘hydrogen’.

But the boundaries around other words are hazier. People argue over whether to call something ‘love’, whether to call it ‘art’, and (one of the most contentious) whether to call it ‘moral’. By some definitions, a urinal on a pedestal qualifies as art, by other definitions it doesn’t. Society hasn’t agreed upon clear-cut boundaries for which feelings, objects, or actions fit into those categories. But the arguments are not about reality itself – they’re over the labels, the language map.

Stuck in Your Head: Communicating Badly

If I’ve learned anything from blogging, it’s that I’m not writing for myself. Maybe other people treat their blogs as diaries, but I see blogging as a way to communicate my ideas to others. No surprise there; it’s what I do. I’m a communications director at heart, not just for a job. When I see crappy communication causing confusion or controversy, I feel compelled to counter it. (Ok, that alliteration WAS for me.)

Case in point: a poorly-received joke at the awesome and inspiring Southeast Regional Atheist Meetup resulted in a woman running from the room in tears. I was at the event but missed that part, so I’m piecing the story together. During a heated discussion about how to make women feel more comfortable at atheist events, a visibly frustrated woman asked why the panel kept using the word ‘females’, since that made her feel like they were discussing livestock instead of people. Someone on the panel joked “What do you want us to say, ‘the weaker sex?’”

I’m pretty sure the panelist intended it to be funny. But the result was that the woman – who already felt marginalized and dismissed – got fed up and left crying. Probably not the result he intended. That’s the question we need to ask ourselves: What do we want to accomplish with what we’re saying?

A trick I’ve found for communicating well is to get out of your head. It’s tempting to argue the way you find persuasive, make references you understand, and tell jokes you find funny. That’s great – if you’re talking to people like yourself. Communicating isn’t simply a matter of expressing a thought. It’s about having other people understand the message the way you want. And that requires us to take into account who we’re talking to and how they’re likely to take different statements. The “I’ll say what I think, damn the consequences!” attitude has never made much sense to me.

Before I’m accused of being an accommodationist – whatever that word means these days – I’m NOT saying we have to change the substance of our message to pander to people. Sometimes the desired goal IS to have others see us disrespecting their “sacred” cows. But there’s a difference between the substance of a point and the way to make it understood. It’s only rational to adjust our tactics to our audience – or audiences. Once we choose the substance of our message, we should figure out how to make people hear it. In a nutshell: make sure you’re being effective.

Policing our language

I got together with some friends to play a game of Fiasco the other night. As the game-coordinator was explaining the rules, he said, “Okay, so half the dice are black and half the dice are white. During a scene, any one of you has the option to decide whether the scene concludes well or poorly for the central character — just indicate your choice by placing one of the dice in front of that character. A white die means the scene turns out well, and a black die means it turns out poorly.”

One of the other players interjected: “Could we please not use that color-coding system, where white means good, and black means bad?” The game coordinator quickly replied, “Oh, sure. Let’s switch it around.”

I understood what she was getting at, and it struck me as a case of political correctness carried to an extreme. People’s associations of white/good, black/bad pre-date any intermingling of white people and black people, so it seems absurd to attribute those associations to racism. A more plausible explanation is that the associations with white and black originated in the fact that the blackness of nighttime means we are colder and more vulnerable to danger than we are in the light of day. So this girl’s suggestion that the color-coding system was offensive struck me, at the time, as overreaching.

But talking about it later with a friend, I reconsidered. Even if I’m right about the origins of the white/good, black/bad associations, isn’t it still possible that those associations have a subtle impact on how we view white people vs. black people? If we’re used to associating white-the-color with goodness and black-the-color with badness, then is it so implausible to think we might unconsciously apply some of those associations to white-the-race and black-the-race, too?

The question also reminds me of the debate over gender-neutral pronouns. The idea of using “he” as the generic pronoun, to refer to a person of unspecified gender, has been accused of being sexist. Attempts to coin a new pronoun for use in such cases have so far been a failure (en? hir? hesh? hizer? hirm? sheehy?).

And my initial inclination is just to say, look, everyone understands that when we say “mankind” we mean all men AND women, and that when we say “fireman” we mean a firefighter of either gender. Right? I mean, why change the way we speak?

But now I’m a little more willing to believe that there could be a small effect of our language on the way we think. If the mental picture you get when you say “mankind” is of men, then mightn’t you be more inclined to think of women as incidental to the course of history? If your mental picture of a fireman is always a man, then mightn’t you be more inclined to think men make better firefighters? Or to think it’s unfeminine to fight fires because you associate that activity with men (due to our language)?

I’m not even making the argument that this effect does exist, only that it’s a reasonable hypothesis. And I’m also not making the argument that it would be worth the trouble to rewire our language in the hopes of rewiring our brains. But I am acknowledging that it’s not entirely ludicrous, politically-correct histrionics to say that there could be a causal relationship between these words and our perceptions of the world.
