All Wikipedia Roads Were Forced to Philosophy

Does everything boil down to philosophy? A case could be made that it’s really all about math and science. Or perhaps breasts. In the alt-text of Wednesday’s XKCD comic, a specific challenge was made: “Wikipedia trivia: if you take any article, click on the first link in the article text not in parentheses or italics, and then repeat, you will eventually end up at ‘Philosophy’.”

Game on. I already had a tab open to the Wikipedia page for “Where Mathematics Comes From” and decided to see how long it took:

Where Mathematics Comes From

  1. George Lakoff
  2. Cognitive linguistics
  3. Linguistics
  4. Human
  5. Taxonomy
  6. Science
  7. Knowledge
  8. Fact
  9. Information
  10. Sequence
  11. Mathematics
  12. Quantity
  13. Property (Philosophy)
  14. Modern Philosophy
  15. Philosophy

Ok, maybe that one was too easy. Let’s use my go-to example, waffles:

Waffle

  1. Batter (cooking)
  2. Flour
  3. Powder
  4. Solid
  5. State of matter
  6. Phase (matter)
  7. Outline of physical science
  8. Natural science
  9. Science
  10. Knowledge
  11. Fact
  12. Information
  13. Sequence
  14. Mathematics
  15. Quantity
  16. Property (Philosophy)
  17. Modern Philosophy
  18. Philosophy

Well son of a gun. I’ve tried it with ‘Mongoose’ (11 clicks), Baltimore Ravens football player ‘Ed Reed’ (16 clicks), and ‘Lord of the Rings’ (22 clicks). All led back to Philosophy.
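
If you’d rather not do all that clicking by hand, here’s a rough sketch of how the game could be automated against the public MediaWiki API. This is just a toy of my own devising, not part of the xkcd challenge: it only approximates the “skip parentheses and italics” rule with regular expressions, so expect it to stumble on some articles.

# A rough sketch of automating the first-link game via the MediaWiki API.
# The parentheses/italics rule is only approximated with regexes.
import re
import requests

API = "https://en.wikipedia.org/w/api.php"

def get_wikitext(title):
    """Fetch the raw wikitext of an article's latest revision (follows redirects)."""
    params = {"action": "parse", "page": title, "prop": "wikitext",
              "format": "json", "redirects": 1}
    data = requests.get(API, params=params).json()
    return data["parse"]["wikitext"]["*"]

def first_link(wikitext):
    """Return the first wiki link outside templates, parentheses, and italics."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", wikitext)  # drop simple {{templates}}
    text = re.sub(r"\([^()]*\)", "", text)          # drop (parenthesized) spans
    text = re.sub(r"''.*?''", "", text)             # drop ''italicized'' spans
    for match in re.finditer(r"\[\[([^\]|#]+)", text):
        target = match.group(1).strip()
        if not target.startswith(("File:", "Image:", "Category:")):
            return target
    return None

def walk(start, max_steps=50):
    """Follow first links until we reach Philosophy, hit a loop, or give up."""
    path = []
    title = start
    while title and title not in path and len(path) < max_steps:
        path.append(title)
        if title.lower() == "philosophy":
            break
        title = first_link(get_wikitext(title))
    return path

if __name__ == "__main__":
    for step, page in enumerate(walk("Waffle")):
        print(step, page)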

Well, we don’t END at philosophy – we could keep going. It turns out that (as of this writing) there’s a 19-step loop including philosophy, science, mathematics, and mammary gland.

We could just as easily say that all paths lead to science! Or math! Or breasts!

However, before you get too excited, it turns out there’s some mischief afoot.

First, it’s been two days since the XKCD comic went up, and considering how malleable Wikipedia is, some things have been changed. I was suspicious that Quantity’s first link went to Property (Philosophy) so I checked the history page:

# (cur | prev) 09:54, 25 May 2011 99.186.253.32 (talk) (14,042 bytes) (Edited for xkcd)

# (cur | prev) 09:28, 25 May 2011 146.162.240.242 (talk) (14,004 bytes) (Undid revision 430815864 by Antony-22 (talk) see today’s xkcd, without the “property” link, it breaks the “all pages eventually end up at philosophy ” game. The link should be there)

I actually found a small loop: Male leads to Gender, which leads back to Male. I expect the Male one will be “fixed” at some point (a phrase you don’t hear outside the veterinarian’s office very often).

The philosophy topic was being discussed in the XKCD forums last Sunday, and the idea has been around for longer than that. Tricky editing toward this goal has been going on for a while.

I’d heard that philosophy leads to reason, which leads to rationality, which leads back to philosophy. That’s been changed since Wednesday, and I wonder if a deliberate effort moved the path from rationality to breasts.

Yes, it’s true that the first link on an article is likely to be broad and trend toward science/philosophy, but this isn’t unguided evolution. This is intelligently designed.

How to argue on the internet

At least a dozen people have sent this XKCD cartoon to me over the years.

It’s plenty hard enough to get someone to listen to your arguments in a debate, given how attached people are naturally to their own ideas and ways of thinking. But it becomes even harder when you trigger someone’s emotional side, by making them feel like you’re attacking them and putting them automatically into “defend myself” mode (or worse, “lash out” mode), rather than “listen reasonably” mode.

Unfortunately, online debates are full of emotional tripwires, partly because tone isn’t always easy to detect in the written word, and even comments intended neutrally can come off as snide or snippy… and also because not having to say something to someone’s face seems to bring out the immature child inside grown adults.

But on the plus side, debating online at least has the benefit that you can take the time to think about your wording before you comment or email someone. Below, I walk you through my process of revising my wording to reduce the risk of making someone angry and defensive, and increase my chances that they’ll genuinely consider what I have to say.

DRAFT 1 (My first impulse is to say): “You idiot, you’re ignoring…”

Duh. Get rid of the insult.

DRAFT 2: “You’re ignoring…”

I should make it clear I’m attacking an idea, not a person.

DRAFT 3: “Your argument is ignoring…”

This can still be depersonalized. By using the word “your,” I’m encouraging the person to identify the argument with himself, which can still trigger a defensive reaction when I attack the argument. That’s the exact opposite of what I want to do.

DRAFT 4: “That argument is ignoring…”

Almost perfect. The only remaining room for improvement is the word “ignoring,” which implies an intentional disregard, and sounds like an accusation. Better to use something neutral instead:

DRAFT 5: “That argument isn’t taking into account…”

Done.  Of course, chances are I still won’t persuade them, but at least I’ve given myself the best chance possible… and done my part to help keep the Internet civilized. Or at least a tiny bit less savage! 

When Literal Honesty Goes Awry

When is it NOT appropriate to bluntly speak the truth? We’ve all heard someone be insulting and resort to the defense of “Well, it’s true!” Even boring, inoffensive facts can become offensive if brought up inartfully. I think this is a perfect example, illustrated by the hilarious comedy team of David Mitchell and Robert Webb:

I mean, technically it’s true. The literal fact that “anyone we know is unlikely to be the most attractive person on earth” shouldn’t hurt feelings. Nobody should think that much of themselves!

…And yet, it’s rude to say. Why?

I think that’s because nobody took Robert’s original statement “this is the most beautiful woman in the world” at face value. It violated the maxim of quality – the literal meaning was clearly false, so people looked for alternative interpretations (“She’s beautiful and I love her” or “She’s very attractive in a combination of ways”).

Since nobody took it at face value, challenges to the claim are perceived as challenging the alternate interpretations rather than the literal meaning. The very decision to call attention to it makes a statement. Why would David be so motivated to discuss her beauty unless he strenuously disagreed? So, in essence, he’s saying “No, she’s not very beautiful.”

Yes, David’s literal content is true: she’s not the most beautiful person in the world. But so much of our reaction to a statement is really a reaction to its implied meaning, and it’s tough to get around that. Initial gut reactions can be powerful.

But it’s possible to do it right. I love having the opportunity to share the awesome and incredible Tim Minchin song If I Didn’t Have You:

Somehow, when Tim does it, the honest approach works better. People often claim that they DO have a soul mate, so it isn’t automatically interpreted as a figure of speech for something more casual.

But it’s particularly important the way he addresses the literal meanings. Compare “I don’t think you’re special. I mean, I think you’re special but not off the charts” with “I don’t think you’re special. I mean, I think you’re special but you fall within a bell-curve.” It’s a strange enough statement to make people think about it harder and realize he’s not being snide.

I found myself thinking of something Steven Pinker wrote in The Stuff of Thought:

The incongruity in a fresh literary metaphor is another ingredient that gives it its pungency. The listener resolves the incongruity soon enough by spotting the underlying similarity, but the initial double take and subsequent brainwork conveys something in addition. It implies that the similarity is not apparent in the humdrum course of everyday life, and that the author is presenting real news in forcing it upon the listener’s attention.

Pinker was writing about using new metaphors to emphasize non-literal meaning, but it works the other way as well. Fresh phrasings – in this case gloriously nerdy ones – make listeners pay more attention to parsing the intended meaning, metaphorical or literal.

If you’re worried about being misinterpreted, try a creative way of expressing the same thought. Protesting “But I was telling the truth!” won’t always be enough.

Oh Sidney, you wag

A linguistics professor lecturing at Oxford explained that although there are many languages in which a double negative implies a positive, there is no language in which a double positive implies a negative.

He was interrupted by legendary philosopher Sidney Morgenbesser, who piped up dismissively from the audience, “Yeah, yeah.”

Computer Learns to Tell Dirty Jokes

Finally, a computer capable of passing MY Turing test. Two computer scientists at the University of Washington are developing an algorithm that teaches a computer to make “That’s what she said” jokes:

They then evaluated nouns, adjectives and verbs with a “sexiness” function to determine whether a sentence is a potential TWSS. Examples of nouns with a high sexiness function are “rod” and “meat”, while raunchy adjectives are “hot” and “wet”.

Their automated system, known as Double Entendre via Noun Transfer or DEviaNT, rates sentences for their TWSS potential by looking for particular elements such as nouns that can be interpreted in multiple ways. The researchers trained DEviaNT by gathering jokes from twssstories.com and non-TWSS text from sites such as wikiquote.org.

First, the name is outstanding. They report 70% accuracy, but they expect the number to improve with more data to draw on. Of course, 70% success isn’t that bad considering how often my friends make questionable TWSS jokes. (Hint: if you have to explain the context in which she said it, you’ve probably failed.)
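
For the curious, the general flavor of the approach is training a text classifier on labeled examples. Here’s a toy sketch of my own, nowhere near the real DEviaNT system, with made-up example sentences standing in for the scraped training data:

# Toy sketch only: a bag-of-words classifier, not the researchers' DEviaNT.
# A real attempt would scrape labeled examples (TWSS stories vs. neutral text).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "Wow, it's so big I can barely hold it",        # label 1: TWSS candidate
    "Just push harder, it should slide right in",   # label 1: TWSS candidate
    "The committee approved the quarterly budget",  # label 0: innocent
    "Rainfall was slightly below average in May",   # label 0: innocent
]
labels = [1, 1, 0, 0]

# Word and bigram counts feed a Naive Bayes model -- the crudest possible
# stand-in for DEviaNT's "sexiness" features.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(sentences, labels)

print(model.predict(["You really have to force it to get it all the way in"]))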

In case this isn’t your particular style of humor, they’re hoping it’ll be able to learn new types of humor based on the metaphor mapping.

Suddenly “I can’t let you do that, Dave” has a whole new meaning…

The Perils of Metaphorical Thinking

(cross-posted at Rationally Speaking)

For an organ that evolved for practical tasks like avoiding predators, finding food, and navigating social hierarchies, the human brain has turned out to be surprisingly good at abstract reasoning. Who among our Pleistocene ancestors could have dreamed that we would one day be using our brains not to get an apple to fall from a tree, but to figure out what makes the apple fall?

In part, that’s thanks to our capacity for metaphorical thinking. We instinctively graft abstract concepts like “time,” “theories,” and “humor” onto more concrete concepts that are easier to visualize. For example, we talk about time as if it were a physical space we’re traveling through (“We’re approaching the end of the year”), a moving entity (“Time flies”) or as a quantity of some physical good (“We’re running out of time”). Theories get visualized as structures — we talk about building a case, about supporting evidence, and about the foundations of a theory. And one of my favorite metaphors is the one that conceives of humor in terms of physical violence. A funny person “kills” us or “slays” us, witty humor is “sharp,” and what’s the name for the last line of a joke? The “punch” line.

Interestingly, a lot of recent research suggests that these metaphors operate below the level of conscious thought. In one study, participants who were asked to recall a past event leaned slightly backwards, while participants who were asked to anticipate a future event leaned slightly forwards. Other studies have shown that our metaphorical use of temperature to describe people’s demeanors (as in, “He greeted me warmly,” or “He gave me the cold shoulder”) is so deep-seated, we actually conflate the two. When people are asked to recall a time when they were rejected by their peers, and then asked to estimate the temperature of the room they’re sitting in, their average estimate is several degrees colder than that of people who were asked to recall being welcomed by their peers. And in one study that asked participants to read the dossier of an imaginary job applicant and then rate his or her personality, participants who had just been holding a hot object rated the imaginary applicant as being friendlier, compared to participants who had just been holding a cold object.

Another classic example is the “morality is cleanliness” metaphor. We talk about people having a clean record or a tarnished one, about dirty dealings and coming clean. And of course, religions are infused with this metaphor — think of baptism, the washing away of sin. One clever study published in Science in 2006 showed how deep-seated this metaphor is by dividing participants into two groups: those in the first group were asked to reflect on something virtuous they’d done in the past, and those in the second group were asked to reflect on a past moral transgression. Afterwards, each participant was offered a token gift of either a pencil or a package of antiseptic wipes. The result? Those who had been dwelling on their past wrongdoing were twice as likely to ask for the antiseptic wipes.

Associating the future with the forward direction and the past with the backwards direction seems pretty harmless. But cases like “morality equals cleanliness” start to suggest how dangerous metaphorical thinking can be. If people conflate dirtiness with immorality, then the feeling of “Ugh, that’s disgusting” becomes synonymous with the judgment, “That’s immoral.” Which is likely a reason why so many people insist that homosexuality is wrong, even though they can’t come up with any explanation of why it’s harmful — any non-contrived explanation, at least. As the research of cognitive psychologist Jonathan Haidt has shown, people asked to defend their purity-based moral judgments reach for logical explanations, but if they’re forced to admit that their explanation has holes, they’ll grope for an alternative one, rather than retracting their initial moral judgment. Logic is merely a fig leaf; disgust is doing all the work.

Although I haven’t seen any studies on it yet, I’m willing to wager that researchers could demonstrate repercussions of another common metaphor: the “argument is war” metaphor, which manifests in the way we talk about “attacking” an idea, “shooting down” arguments, and “defending” a position. Thinking of arguments as battles comes with all sorts of unhelpful baggage. It’s zero-sum, meaning that one person’s gain is necessarily the other’s loss. That precludes any view of the argument as a collaborative effort to find the truth. The war-metaphor also primes us emotionally, stimulating pride, aggression, and the desire to dominate — none of which are conducive to rational discussion.

So far I’ve been discussing implicit metaphors, but explicit metaphors can also lead us astray without us realizing it. We use one thing to metaphorically stand in for another because they share some important property, but then we assume that additional properties of the first thing must also be shared by the second thing. For example, here’s a scientist explaining why complex organisms were traditionally assumed to be more vulnerable to genetic mutations, compared to simpler organisms: “Think of a hammer and a microscope… One is complex, one is simple. If you change the length of an arbitrary component of the system by an inch, for example, you’re more likely to break the microscope than the hammer.”

That’s true, but the vulnerable complexity of a microscope isn’t the only kind of complexity. Some systems become more robust to failure as they become more complex, because of the redundancies that accrue — if one part fails, there are others to compensate. Power grids, for example, are built with more power lines than strictly necessary, so that if one line breaks or becomes overloaded, the power gets rerouted through other lines. Vulnerability isn’t a function of complexity per se, but of how little redundancy a system has. And just because an organism and a microscope are both complex, doesn’t mean the organism shares the microscope’s low redundancy.

Abstinence-only education is a serial abuser of metaphors. There’s one particularly unlovely classroom demonstration in which the teacher hands out candies to her students with the instructions to chew on them for a minute, then spit them back out and rewrap them. She then collects the rewrapped candies in a box and asks a student if he would like to pick one out and eat it. Disgusted, of course, he declines. The message is clear: no one wants “candy” that’s already been tasted by someone else.

In this case, there’s a similarity between the already-chewed candy and a woman who has had previous lovers: both have already been enjoyed by someone else. It’s evident why the act of enjoying a piece of candy diminishes its value to other people. But it’s not evident why the act of sexually enjoying a woman diminishes her value to other people, and no argument is given. The metaphor simply encourages students to extrapolate that property unquestioningly from candy to women.

I came across a great example of misleading metaphors recently via Julian Sanchez, who was complaining about the way policy discussions are often framed in terms of balancing a scale. People will talk about “striking a balance” between goods like innovation and stability, efficiency and equality, or privacy and security. But the image of two goods on opposite ends of a balance implies that you can’t get more of one without giving up the same amount of the other, and that’s often not true. “In my own area of study, the familiar trope of ‘balancing privacy and security’ is a source of constant frustration to privacy advocates,” Sanchez says, “because while there are clearly sometimes trade-offs between the two, it often seems that the zero-sum rhetoric of ‘balancing’ leads people to view them as always in conflict.”

Sometimes, the problem with a metaphor is simply that it’s taken too literally. More than once in a discussion about the ethics of eating animals, someone has told me, “Plants want to live just like animals do, so how can vegans justify eating plants?” The motivation for this point is obvious: if you’re going to be causing suffering no matter what you eat, then you might as well just eat whatever you want. I thought it was risible when I first heard it, but astonishingly, this argument has appeared in the New York Times twice in recent memory — once in December 2009 (“Sorry Vegans: Brussels Sprouts Like to Live, Too”) and then again in March 2011 (“No Face, but Plants Like Life Too”). The articles employ a liberal amount of metaphorical language in describing plants: They “recognize” different wavelengths of light, they “talk” and “listen” to chemical signals, and a plant torn from the ground “struggles to save itself.”

It’s evocative language, but it doesn’t change the fact that plants lack brains and are therefore not capable of actually, non-metaphorically wanting anything. In fact, we use the same sort of language to talk about inanimate objects. Water “wants” to flow downhill. Nature “abhors” a vacuum. A computer “reads” a file and stores it in its “memory.” Since our ancestors’ genetic fitness depended on their sensitivity to each other’s mental states, it feels very natural for us to speak about plants or inanimate objects as if they were agents with desires, wills, and intentions. That’s the kind of thinking we’re built for.

In fact, even the phrase “built for” relies on the implication of a conscious agent doing the building, rather than the unconscious process of evolution. Metaphors are so deeply embedded in our language that it would be difficult, if not impossible, to think without them. And there’s nothing wrong with that, as long as we keep a firm grasp — metaphorically speaking — on what they really mean.

Sarcasm, Hidden Meanings, and Politeness

So much of communication is not about what we say directly, but about the implications of how we choose to convey the information. Most of my day job revolves around crafting a sentence’s literal content so that the audience/readers will most likely understand my intended, implied message.

What fascinates me is how easily we can understand a person’s intended message from even drastically different literal content.

Take the sentence “Your dog is very happy right now.” The literal meaning is obvious: the dog is happy! But what if it comes right after you’ve asked your friend, “What happened to my roast beef sandwich?” Suddenly, the intended message changes: the treacherous dog ate your sandwich! We’re able to draw the correct implication, but how?

