April 27, 2011
In this week’s video I discuss my new favorite word — “Granfalloon” — and how identifying yourself with a particular group can distort your thinking.
April 6, 2011
(cross-posted at Rationally Speaking)
For an organ that evolved for practical tasks like avoiding predators, finding food, and navigating social hierarchies, the human brain has turned out to be surprisingly good at abstract reasoning. Who among our Pleistocene ancestors could have dreamed that we would one day be using our brains not to get an apple to fall from a tree, but to figure out what makes the apple fall?
In part, that’s thanks to our capacity for metaphorical thinking. We instinctively graft abstract concepts like “time,” “theories,” and “humor” onto more concrete concepts that are easier to visualize. For example, we talk about time as if it were a physical space we’re traveling through (“We’re approaching the end of the year”), a moving entity (“Time flies”) or as a quantity of some physical good (“We’re running out of time”). Theories get visualized as structures — we talk about building a case, about supporting evidence, and about the foundations of a theory. And one of my favorite metaphors is the one that conceives of humor in terms of physical violence. A funny person “kills” us or “slays” us, witty humor is “sharp,” and what’s the name for the last line of a joke? The “punch” line.
Interestingly, a lot of recent research suggests that these metaphors operate below the level of conscious thought. In one study, participants who were asked to recall a past event leaned slightly backwards, while participants who were asked to anticipate a future event leaned slightly forwards. Other studies have shown that our metaphorical use of temperature to describe people’s demeanors (as in, “He greeted me warmly,” or “He gave me the cold shoulder”) is so deep-seated that we actually conflate the two. When people are asked to recall a time when they were rejected by their peers, and then asked to estimate the temperature of the room they’re sitting in, their average estimate is several degrees colder than that of people who were asked to recall being welcomed by their peers. And in one study that asked participants to read the dossier of an imaginary job applicant and then rate his or her personality, participants who had just been holding a hot object rated the imaginary applicant as being friendlier, compared to participants who had just been holding a cold object.
Another classic example is the “morality is cleanliness” metaphor. We talk about people having a clean record or a tarnished one, about dirty dealings and coming clean. And of course, religions are infused with this metaphor — think of baptism, the washing away of sin. One clever study published in Science in 2006 showed how deep-seated this metaphor is by dividing participants into two groups: those in the first group were asked to reflect on something virtuous they’d done in the past, and those in the second group were asked to reflect on a past moral transgression. Afterwards, each participant was offered a token gift of either a pencil or a package of antiseptic wipes. The result? Those who had been dwelling on their past wrongdoing were twice as likely to ask for the antiseptic wipes.
Associating the future with the forward direction and the past with the backwards direction seems pretty harmless. But cases like “morality equals cleanliness” start to suggest how dangerous metaphorical thinking can be. If people conflate dirtiness with immorality, then the feeling of “Ugh, that’s disgusting” becomes synonymous with the judgment, “That’s immoral.” Which is likely a reason why so many people insist that homosexuality is wrong, even though they can’t come up with any explanation of why it’s harmful — any non-contrived explanation, at least. As the research of cognitive psychologist Jonathan Haidt has shown, people asked to defend their purity-based moral judgments reach for logical explanations, but if they’re forced to admit that their explanation has holes, they’ll grope for an alternative one, rather than retracting their initial moral judgment. Logic is merely a fig leaf; disgust is doing all the work.
Although I haven’t seen any studies on it yet, I’m willing to wager that researchers could demonstrate repercussions of another common metaphor: the “argument is war” metaphor, which manifests in the way we talk about “attacking” an idea, “shooting down” arguments, and “defending” a position. Thinking of arguments as battles comes with all sorts of unhelpful baggage. It’s zero-sum, meaning that one person’s gain is necessarily the other’s loss. That precludes any view of the argument as a collaborative effort to find the truth. The war metaphor also primes us emotionally, stimulating pride, aggression, and the desire to dominate — none of which are conducive to rational discussion.
So far I’ve been discussing implicit metaphors, but explicit metaphors can also lead us astray without us realizing it. We use one thing to metaphorically stand in for another because they share some important property, but then we assume that additional properties of the first thing must also be shared by the second thing. For example, here’s a scientist explaining why complex organisms were traditionally assumed to be more vulnerable to genetic mutations, compared to simpler organisms: “Think of a hammer and a microscope… One is complex, one is simple. If you change the length of an arbitrary component of the system by an inch, for example, you’re more likely to break the microscope than the hammer.”
That’s true, but the vulnerable complexity of a microscope isn’t the only kind of complexity. Some systems become more robust to failure as they become more complex, because of the redundancies that accrue — if one part fails, there are others to compensate. Power grids, for example, are built with more power lines than strictly necessary, so that if one line breaks or becomes overloaded, the power gets rerouted through other lines. Vulnerability isn’t a function of complexity per se, but of redundancy. And just because an organism and a microscope are both complex, doesn’t mean the organism shares the microscope’s low redundancy.
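The redundancy point can be made numerically (this toy model is my illustration, not the original author’s). If a system fails only when all of its k redundant parts fail, each independently with probability p, its failure probability is p^k, which shrinks as parts are added; a chain that fails when any of its n parts fails has failure probability 1 − (1 − p)^n, which grows with n. A minimal sketch:

```python
# Toy model (illustrative only): contrast a redundant system, which
# fails only if ALL parts fail, with a fragile chain, which fails if
# ANY part fails. Each part is assumed to fail independently with
# probability p.

def redundant_failure(p: float, k: int) -> float:
    """Failure probability of k redundant parts (power-grid style)."""
    return p ** k

def chain_failure(p: float, n: int) -> float:
    """Failure probability of n parts in series (microscope style)."""
    return 1 - (1 - p) ** n

# Adding parts makes the redundant system MORE robust...
print(redundant_failure(0.1, 1))  # 0.1
print(redundant_failure(0.1, 3))  # ~0.001

# ...but makes the fragile chain LESS robust.
print(chain_failure(0.1, 1))      # ~0.1
print(chain_failure(0.1, 3))      # ~0.271
```

Real grids are messier than this, of course; the sketch only shows that what drives vulnerability is how the parts are wired together, not the part count itself.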
Abstinence-only education is a serial abuser of metaphors. There’s one particularly unlovely classroom demonstration in which the teacher hands out candies to her students with the instructions to chew on them for a minute, then spit them back out and rewrap them. She then collects the rewrapped candies in a box and asks a student if he would like to pick one out and eat it. Disgusted, of course, he declines. The message is clear: no one wants “candy” that’s already been tasted by someone else.
In this case, there’s a similarity between the already-chewed candy and a woman who has had previous lovers: both have already been enjoyed by someone else. It’s evident why the act of enjoying a piece of candy diminishes its value to other people. But it’s not evident why the act of sexually enjoying a woman diminishes her value to other people, and no argument is given. The metaphor simply encourages students to extrapolate that property unquestioningly from candy to women.
I came across a great example of misleading metaphors recently via Julian Sanchez, who was complaining about the way policy discussions are often framed in terms of balancing a scale. People will talk about “striking a balance” between goods like innovation and stability, efficiency and equality, or privacy and security. But the image of two goods on opposite ends of a balance implies that you can’t get more of one without giving up the same amount of the other, and that’s often not true. “In my own area of study, the familiar trope of ‘balancing privacy and security’ is a source of constant frustration to privacy advocates,” Sanchez says, “because while there are clearly sometimes trade-offs between the two, it often seems that the zero-sum rhetoric of ‘balancing’ leads people to view them as always in conflict.”
Sometimes, the problem with a metaphor is simply that it’s taken too literally. More than once in a discussion about the ethics of eating animals, someone has told me, “Plants want to live just like animals do, so how can vegans justify eating plants?” The motivation for this point is obvious: if you’re going to be causing suffering no matter what you eat, then you might as well just eat whatever you want. I thought it was risible when I first heard it, but astonishingly, this argument has appeared in the New York Times twice in recent memory — once in December 2009 (“Sorry Vegans: Brussels Sprouts Like to Live, Too”) and then again in March 2011 (“No Face, but Plants Like Life Too”). The articles employ a liberal amount of metaphorical language in describing plants: They “recognize” different wavelengths of light, they “talk” and “listen” to chemical signals, and a plant torn from the ground “struggles to save itself.”
It’s evocative language, but it doesn’t change the fact that plants lack brains and are therefore not capable of actually, non-metaphorically wanting anything. In fact, we use the same sort of language to talk about inanimate objects. Water “wants” to flow downhill. Nature “abhors” a vacuum. A computer “reads” a file and stores it in its “memory.” Since our ancestors’ genetic fitness depended on their sensitivity to each other’s mental states, it feels very natural for us to speak about plants or inanimate objects as if they were agents with desires, wills, and intentions. That’s the kind of thinking we’re built for.
In fact, even the phrase “built for” relies on the implication of a conscious agent doing the building, rather than the unconscious process of evolution. Metaphors are so deeply embedded in our language that it would be difficult, if not impossible, to think without them. And there’s nothing wrong with that, as long as we keep a firm grasp — metaphorically speaking — on what they really mean.
January 9, 2011
Historians’ Fallacies has a bunch of good examples of hindsight bias in the writing of history. Historians often make events seem significant because they turned out to play an important role in the course of history, even though those events went largely unnoticed at the time. Art historian Bernard Berenson summed up this fallacious way of thinking nicely: “Significant events are those events that have contributed to making us what we are today.”
For example, even though historians often describe the buildup to WWI as a series of “mounting tensions” and “escalating crises,” the war actually came as much more of a surprise than we tend to assume. As Nassim Nicholas Taleb recounts in The Black Swan, historian Niall Ferguson demonstrated this cleverly by examining the prices of imperial bonds: bond prices normally decline if investors anticipate a war, because wars cause deficits, but they show no such decline in the months before WWI.
You can also sometimes see the historian’s fallacy in the way historians write about a historical figure’s behavior and motivations as if he knew the role he would play in history. (For example, writing about John Adams as if he thought of himself as a Founding Father, and interpreting his speeches and letters as if he knew how the Republic would develop.) But I kind of like the way some historians have found to compensate: the “fog-of-war” technique, in which they give their readers only as much information about the unfolding situation as the historical figure himself had at the time.
January 9, 2011
A nice tip from Historians’ Fallacies: Watch out for pseudo-evidence, facts that are used to support a claim but could just as easily have supported that claim’s inverse. The book gives the example of one historian who claimed that the early American colonists regularly threw trash on their streets. How do we know this? Well, the historian argues: New Amsterdam passed a law against littering in 1657, which was enforced on at least one person; and the city instituted weekly trash removal in 1670.
Notes Fischer: “Each of these impressionistic snippets of pseudo-factual information is consistent with a thesis that (1) the streets of New Amsterdam were knee-deep in trash, or (2) the streets of New Amsterdam were kept spotlessly clean by the tidy Dutch inhabitants, by means of laws which were enforced and by regular trash removal; or (3) any statement between these two extremes.”