Review: The Book of Mormon

(Re-posted with permission from my article in Issue 56 of The Philosopher’s Magazine)

Even if you’ve never watched a single episode of South Park, you’re probably aware that the show’s creators, Trey Parker and Matt Stone, love nothing more than a good bout of sacred cow tipping. Show me an ideology, political, religious or otherwise, and I’ll show you an episode of South Park that lampoons it with the show’s trademark blend of incisive satire and potty humour. So it was surprising that South Park’s terrible twosome wound up creating a smash-hit Broadway musical, which they have described in interviews as pro-faith.

Well, the “smash-hit” part isn’t surprising. The Book of Mormon pulls off the impressive trick of winking at the clichés of musical theatre and embracing them at the same time. (After all, the clichés are clichés for a reason – they work.) The story follows two young Mormon men paired together for their mission to Uganda: Kevin, who’s used to being the golden boy and needs to learn that everything’s not always about him, and Arnold, a hapless schmuck who needs to learn some self-confidence. They’re an odd couple and, like all odd couples thrown together under unusual circumstances, they’re going to have to learn to get along. There’s also a sweet Ugandan ingénue, a villainous warlord threatening her village, and a whole lot of really catchy songs. It’s no wonder the musical garnered nine Tony awards this year, including Best Musical, and that it’s been selling out its shows since it opened in previews in February.

But to hear Parker and Stone refer to The Book of Mormon as “pro-faith” was surprising, especially given how often they poke fun at Mormonism. Mormon beliefs can seem so ridiculous to outsiders, in fact, that Parker and Stone wisely realise they don’t need to do much active mocking – instead, they simply step back and let the scripture speak for itself. “I believe,” one missionary warbles in a climactic number reaffirming his commitment to his faith, “that God lives on a planet called ‘Kolob’! And I believe that in 1978, God changed his mind about black people!” With raw material like this, parody is both unnecessary and impossible.

And it’s not just Mormonism that gets skewered. It’s also the self-images of all believers who like to see themselves, and their motivations, as more saintly than they really are. Against a backdrop of war, poverty and disease, one missionary wonders, “God, why do you let bad things happen?” and then adds what is, for many people, the true concern: “More to the point, why do you let bad things happen to me?” There’s only one thing Parker and Stone enjoy sinking their talons into more than absurdity, and that’s hypocrisy.

So in what sense is The Book of Mormon “pro-faith”? Well, it’s affectionate in its portrayal of Mormons as people, most of whom come off as well-meaning, if goofy and often naive. Parker and Stone have made no secret of the fact that they find Mormons just too gosh-darned nice to dislike. But what they’re mainly referring to when they call their musical “pro-faith” is the message it sends the audience home with: that religion can be a powerful and inspiring force for good, as long as you don’t interpret scripture too strictly.

By the end of The Book of Mormon, Africans and missionaries alike are united in a big happy posse that preaches love, joy, hope and making the world a better place. Having learned by now that it’s more important to help people than to rigidly adhere to dogma, Kevin sings, “We are still Latter Day Saints, all of us. Even if we change some things, or we break the rules, or we have complete doubt that God exists. We can still all work together and make this our paradise planet.”

That’s an appealing sentiment, especially to the sort of theatregoer who prides himself on being progressive and tolerant. It means we can promote all the values we cherish – happiness, freedom, human rights and so on – without ever having to take an unpopular anti-religion stand. But is it plausible? How, exactly, can religion make the world a better place?

I don’t know and, apparently, neither does The Book of Mormon. The central confusion you’ll notice in the musical is that it keeps conflating two very different kinds of “faith”. One could be called “figurative faith”, the warm and fuzzy kind that emerges at the end of the show, which is explicitly about bettering the world but seems to be faith in name only, as it doesn’t involve any actual belief in anything. “What happens when we’re dead? Who cares! We shouldn’t think that far ahead. The only latter day that matters is tomorrow,” the villagers sing. Once you strip away God, and an afterlife, and the requirement of belief in particular dogma, it’s not clear that what’s left bears any resemblance to religion anymore. With its progressive values and its emphasis on the here-and-now rather than the hereafter, it’s basically just humanism.

The other kind of faith in The Book of Mormon is literal faith, but for the most part, it doesn’t actually help anyone. Ugandan sweetheart Nabulungi believes in salvation in earnest – she’s under the impression that becoming Mormon means she’s going to be transported out of her miserable life to a paradise called “Salt Lake City”, which she imagines must have huts with gold-thatched roofs and, as she sings rapturously, “a Red Cross on every corner with all the flour you can eat!” But she ends up crushed when she eventually learns that, no, she doesn’t get to leave Uganda after all. (“Of course, Salt Lake City’s only a metaphor,” her fellow tribe members inform her, apparently in figurative faith mode at that point.)

To be fair, there is one example of the power of literal faith in The Book of Mormon. When a villager announces his plans to circumcise his own daughter, and another is about to rape an infant in an attempt to cure himself of AIDS, Arnold manages to stop them by inventing some new scripture for the occasion. “And the Lord said, ‘If you lay with an infant, you shall burn in the fiery pits of Mordor,’” he “reads” from the Bible. (Being a science fiction and fantasy nerd, and having slept through most of Sunday school, Arnold falls back on what he knows.) So I suppose that counts as a point in favour of faith’s power to help the world, albeit conditional on the bleak premise that the only way to get people to stop raping babies and mutilating women is to threaten them with Hell … or Mordor.

Of course, the fact that The Book of Mormon’s views on faith are less than fully coherent doesn’t detract much from the pleasures of its tart-tongued satire, story, and songs. There are just a handful of moments that might raise a philosopher’s eyebrow, such as when everyone sings, in the exuberant final number, “So if you’re sad, put your hands together and pray, that tomorrow’s gonna be a Latter Day. And then it probably will be a Latter Day!” It almost feels churlish to ask “Wait, how does that work?” when everyone onstage is having such a good time singing about joy and peace and brotherhood; nevertheless, one does wonder. Maybe that will be covered in the sequel.

How rationality can make your life more awesome

(Cross-posted at Rationally Speaking)

Sheer intellectual curiosity was what first drew me to rationality (by which I mean, essentially, the study of how to view the world as accurately as possible). I still enjoy rationality as an end in itself, but it didn’t take me long to realize that it’s also a powerful tool for achieving pretty much anything else you care about. Below, a survey of some of the ways that rationality can make your life more awesome:

Rationality alerts you when you have a false belief that’s making you worse off.

You’ve undoubtedly got beliefs about yourself – about what kind of job would be fulfilling for you, for example, or about what kind of person would be a good match for you. You’ve also got beliefs about the world – say, about what it’s like to be rich, or about “what men want” or “what women want.” And you’ve probably internalized some fundamental maxims, such as: When it’s true love, you’ll know. You should always follow your dreams. Natural things are better. Promiscuity reduces your worth as a person.

Those beliefs shape your decisions about your career, what to do when you’re sick, what kind of people you pursue romantically and how you pursue them, and how much effort to put into making yourself richer, more attractive, more skilled (and skilled in what?), more accommodating, more aggressive, and so on.

But where did these beliefs come from? The startling truth is that many of our beliefs became lodged in our psyches rather haphazardly: we picked them up from books, TV, or movies, or from things we heard people say, or perhaps we generalized from one or two real-life examples.

Rationality trains you to notice your beliefs, many of which you may not even be consciously aware of, and ask yourself: where did those beliefs come from, and do I have good reason to believe they’re accurate? How would I know if they’re false? Have I considered any other, alternative hypotheses?

Rationality helps you get the information you need.

Sometimes you need to figure out the answer to a question in order to make an important decision about, say, your health, or your career, or the causes that matter to you. Studying rationality reveals that some ways of investigating those questions are much more likely to yield the truth than others. Just a few examples:

“How should I run my business?” If you’re looking to launch or manage a company, you’ll have a huge leg up over your competition if you’re able to rationally determine how well your product works, or whether it meets a need, or what marketing strategies are effective.

“What career should I go into?” Before committing yourself to a career path, you’ll probably want to learn about the experiences of people working in that field. But a rationalist also knows to ask herself, “Is my sample biased?” If you’re focused on a few famous success stories from the field, that doesn’t tell you very much about what a typical job is like, or what your odds are of making it in that field.

It’s also an unfortunate truth that not every field uses reliable methods, and so not every field produces true or useful work. If that matters to you, you’ll need the tools of rationality to evaluate the fields you’re considering working in. Fields whose methods are controversial include psychotherapy, nutrition science, economics, sociology, consulting, string theory, and alternative medicine.

“How can I help the world?” Many people invest huge amounts of money, time, and effort in causes they care about. But if you want to ensure that your investment makes a difference, you need to be able to evaluate the relevant evidence. How serious of a problem is, say, climate change, or animal welfare, or globalization? How effective is lobbying, or marching, or boycotting? How far do your contributions go at charity X versus charity Y?

Rationality shows you how to evaluate advice.

Learning about rationality, and how widespread irrationality is, sparks an important realization: You can’t assume other people have good reasons for the things they believe. And that means you need to know how to evaluate other people’s opinions, not just based on how plausible their opinions seem, but based on the reliability of the methods they used to form those opinions.

So when you get business advice, you need to ask yourself: What evidence does she have for that advice, and are her circumstances similar enough to mine? The same is true when a friend swears by some particular remedy for acne, or migraines, or cancer. Is he repeating a recommendation made by multiple doctors? Or did he try it once and get better? What kind of evidence is reliable?

In many cases, people can’t articulate exactly how they’ve arrived at a particular belief; it’s just the product of various experiences they’ve had and things they’ve heard or read. But once you’ve studied rationality, you’ll recognize the signs of people who are more likely to have accurate beliefs: People who adjust their level of confidence to the evidence for a claim; people who actually change their minds when presented with new evidence; people who seem interested in getting the right answer rather than in defending their own egos.

Rationality saves you from bad decisions.

Knowing about the heuristics your brain uses and how they can go wrong means you can escape some very common, and often very serious, decision-making traps.

For example, people often stick with their original career path or business plan for years after the evidence has made clear that it was a mistake, because they don’t want their previous investment to be wasted. That’s thanks to the sunk cost fallacy. Relatedly, people often allow cognitive dissonance to convince them that things aren’t so bad, because the prospect of changing course is too upsetting.

And in many major life decisions, such as choosing a career, people envision one way things could play out (“I’m going to run my own lab, and live in a big city…”) – but they don’t spend much time thinking about how probable that outcome is, or what the other probable outcomes are. That’s the narrative fallacy at work: scenarios imagined in vivid detail feel more plausible, regardless of how probable they actually are.

Rationality trains you to step back from your emotions so that they don’t cloud your judgment.

Depression, anxiety, anger, envy, and other unpleasant and self-destructive emotions tend to be fueled by what cognitive therapy calls “cognitive distortions,” irrationalities in your thinking such as jumping to conclusions based on limited evidence; focusing selectively on negatives; all-or-nothing thinking; and blaming yourself, or someone else, without reason.

Rationality breaks your habit of automatically trusting your instinctive, emotional judgments, encouraging you instead to notice the beliefs underlying your emotions and ask yourself whether those beliefs are justified.

It also trains you to notice when your beliefs about the world are being colored by what you want, or don’t want, to be true. Beliefs about your own abilities, about the motives of other people, about the likely consequences of your behavior, about what happens after you die, can be emotionally fraught. But a solid training in rationality keeps you from flinching away from the truth – about your situation, or yourself – when learning the truth can help you change it.

What’s so special about living longer?

[Photo caption: Atheist death panel: Red America’s suspicions confirmed.]

After reading about the death panel we held at Skepticon IV last week, a very clever philosopher friend of mine named Henry Shevlin wrote to me with a challenge to the transhumanist perspective. The transhumanist argument, which Eliezer made eloquently in the panel, is that death is a terrible thing that we should be striving to postpone for as long as possible.
Henry asks:

“Is death a tragedy because it involves a possible loss of utility, or because there’s some special harm in the annihilation of the individual? So consider two scenarios… Earth 1B and Earth 2B. Both of them have 100 million inhabitants at any one time. But Earth 1B has a very high life expectancy and a very low birth rate, while Earth 2B has a lower life expectancy and a very high birth rate. Otherwise, though, the two worlds are very similar. Which world is morally superior, by which I mean, generates more utils?”

Good question. Why, exactly, is prolonging existing lives better than creating new lives?
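
To see why the question bites, it helps to notice that a pure util count can tie the two worlds exactly. Here’s a minimal back-of-the-envelope sketch; the specific numbers, and the assumption that every person-year yields the same utils, are mine for illustration, not Henry’s:

    # Both worlds hold the same population at any moment; they differ only
    # in how that pool of person-years is divided into individual lives.
    POPULATION = 100_000_000      # inhabitants at any one time, both worlds
    WINDOW_YEARS = 100            # arbitrary window over which to total utility
    UTILS_PER_PERSON_YEAR = 1     # illustrative assumption: all years count equally

    lifespan_1b, lifespan_2b = 80, 20   # hypothetical life expectancies
    # With a constant population, total person-years over the window are
    # POPULATION * WINDOW_YEARS regardless of lifespan.
    lives_1b = POPULATION * WINDOW_YEARS // lifespan_1b   # fewer, longer lives
    lives_2b = POPULATION * WINDOW_YEARS // lifespan_2b   # more, shorter lives

    utils_1b = lives_1b * lifespan_1b * UTILS_PER_PERSON_YEAR
    utils_2b = lives_2b * lifespan_2b * UTILS_PER_PERSON_YEAR
    print(utils_1b == utils_2b)   # True: a pure util count can't tell them apart

If every person-year is worth the same, chopping the same pool of person-years into long lives or short ones leaves the total untouched, so a straightforward util count has no basis for preferring Earth 1B.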

Let’s start with Henry’s Option 1 — that a person’s death is a tragedy because of the loss of the utility that person would have had, if he hadn’t died. Starting with this premise, can we justify our intuition that it’s better to sustain a pre-existing life than to create a new one?

One possible tack is to say that we can only compare utilities of possible outcomes for currently existing people — so the utility of adding a new, happy person to this world is undefined (and, being undefined, it can’t compensate for the utility lost from an existing person’s death). Sounds reasonable, perhaps. But that also implies that the utility of adding a new, miserable person to this world is undefined. That doesn’t sound right! I definitely want a moral theory which says that it’s bad to create beings whose lives are sheer agony.

You might also be tempted to argue that utility’s not fungible between people. In other words, my loss of utility from dying can’t be compensated for by the creation of new utility somewhere else in the world. But that renders utilitarianism completely useless! If utility’s not fungible, then you can’t say that it’s good for me to pay one penny to save you from a lifetime of torture.

Or you could just stray from utilitarianism in this case, and claim that the loss of a life is bad not just because of the loss of utility it causes. That’s Henry’s Option 2 — that death is a tragedy because there’s some special harm in the annihilation of the individual. You could then argue that the harm caused by the death of an existing person vastly outweighs the good caused by creating a new person. I’m uncomfortable with this idea, partly because there doesn’t seem to be any way to quantify the value of a life if you’re not willing to stick to the measuring system of utils. But I’m also uncomfortable with it because it seems to imply that it’s always bad to create new people, since, after all, the badness of their deaths is going to outweigh the good of their lives.

ETA: Of course, you could also argue that you care more about the utils experienced by your friends and family than about the utils that would be experienced by new people. That’s probably true, for most people, and understandably so. But it doesn’t resolve the question of why you should prefer that an unknown stranger’s life be prolonged than that a new life be created.

The Straw Vulcan: Hollywood’s illogical approach to logical decisionmaking

I gave a talk at Skepticon IV last weekend about Vulcans and why they’re a terrible example of rationality. I go through five principles of Straw Vulcan Rationality™, give examples from Star Trek and from real life, and explain why they’re mistaken:

  1. Being rational means expecting everyone else to be rational too.
  2. Being rational means you should never make a decision until you have all the information.
  3. Being rational means never relying on intuition.
  4. Being rational means eschewing emotion.
  5. Being rational means valuing only quantifiable things — like money, productivity, or efficiency.

In retrospect, I would’ve streamlined the presentation more, but I’m happy with the content — I think it’s an important and under-appreciated topic. The main downside was just that everyone wanted to talk to me afterwards, not about rationality, but about Star Trek. I don’t know the answers to your obscure trivia questions, Trekkies!


UPDATE: I’m adding my diagrams of the Straw Vulcan model of ideal decisionmaking, and my proposed revisions to it, since those slides don’t appear in the video:

[Diagram: The Straw Vulcan view of the relationship between rationality and emotion.]

[Diagram: After my revisions.]

RS #48: Philosophical Counseling

Can philosophy be a form of therapy? On the latest episode of Rationally Speaking, we interview Lou Marinoff, a philosopher who founded the field of “philosophical counseling,” in which people pay philosophers to help them deal with their personal problems using philosophy. For example, one of Lou’s clients wanted advice on whether to quit her finance job to pursue a personal goal; another sought help deciding how to balance his son’s desire to go to Disneyland with his own fear of spoiling his children.

As you can hear in the interview, I’m interested but I’ve got major reservations. I certainly think that philosophy can improve how you live your life — I’ve got some great examples of that from personal experience. But I’m skeptical of Lou’s project for two related reasons: first, because I think most problems in people’s lives are best addressed by a combination of psychological science and common sense. They require a sophisticated understanding of how our decision-making algorithms go wrong — for example, why we make decisions that we know are bad for us, how we end up with distorted views of our situations and of our own strengths and weaknesses, and so on. Those are empirical questions, and philosophy’s not an empirical field, so relying on philosophy to solve people’s problems is going to miss a large part of the picture.

The other problem is that it wasn’t at all clear to me how philosophical counselors choose which philosophy to cite. For any viewpoint in the literature, you can pretty reliably find an opposing one. In the case of the father afraid of spoiling his kid, Lou cited Aristotle to argue for an “all things in moderation” policy. But, I pointed out, he could just as easily have cited Stoic philosophers arguing that happiness lies in relinquishing desires. So if you can pick and choose any philosophical advice you want, then aren’t you really just giving your client your own opinion about his problem, couched in the words of a prestigious philosopher?

Hear more at Rationally Speaking Episode 48, “Philosophical Counseling.”

What do philosophers think about intuition?

Earlier this year I complained, on Rationally Speaking, about the fact that so many philosophers think it’s sufficient to back up their arguments by citing “intuition.” It’s a tricky term to pin down, but generally philosophers cite intuition when they think something is “clearly true” but can’t demonstrate it with logic or evidence. So, for example, philosophers of ethics will often claim that things are “good” or “bad” by citing their intuition. And philosophers of mind will cite their intuitions to argue that certain things would or wouldn’t be conscious (for example, David Chalmers relies on intuition to argue for the theoretical possibility of “philosophical zombies,” creatures that would act and respond exactly like conscious human beings, but which wouldn’t be conscious).

I cited many examples, not only of philosophers using intuitions as evidence, but of philosophers acknowledging that appeals to intuition are ubiquitous in the field. (“Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or the falsifiers of theories,” wrote one philosopher.) That’s worrisome, to me, because the whole point of philosophy is allegedly to figure out whether our intuitive judgments make sense. It’s also worrisome to me because intuitions vary sharply from person to person; for example, I don’t agree at all with G. E. Moore’s argument that it is intuitively obvious that it’s “better” to have a planet full of sunsets and waterfalls than one with filth, even if no one ever gets to see that planet. (He may prefer a universe that contains Planet Waterfall to one that contains Planet Filthy, but I don’t think that makes the former objectively “better.”)

In the comment thread under his response-post, Massimo objected that intuitions are not, in fact, widespread in philosophy. “Julia, a list of cherry picked citations an argument doesn’t make,” he wrote, and he asked me if I had randomly polled philosophers. I hadn’t, of course.

But I recently came across two people who did. Kuntz & Kuntz’s “Surveying Philosophers About Philosophical Intuition,” from the March issue of the Review of Philosophy and Psychology, surveyed 282 academic philosophers and found that 51% of them thought that intuitions are “useful to justification in philosophical methods.”

Because the term “intuition” is so nebulous, the researchers also presented their survey respondents with a list of some of the more common ways of defining intuition, and asked them to rank how apt they thought the definitions were. The top two “most apt” definitions of intuition were the following:

  1. “Judgment that is not made on the basis of some kind of observable and explicit reasoning process”
  2. “An intellectual happening whereby it seems that something is the case without arising from reasoning, or sensorial perceiving, or remembering.”

The survey also shed light on one reason why Massimo, a philosopher of science, might have underestimated the prevalence of appeals to intuition in philosophy as a whole: “In regard to the usefulness of intuitions to justification, our results also revealed that philosophers of science expressed significantly lower agreement than philosophers doing metaphysics, epistemology, ethics, and philosophy of mind,” Kuntz and Kuntz wrote. That squares with my experience, too — most of the philosophy of science I’ve read has been grounded in logic, math, and evidence.

Another important side point the researchers make is that there’s more than one way to use your intuitions. Philosophers certainly do use them as justification for claims, but they also use intuitions to generate claims, which they then justify using more rigorous methods like logic and evidence. 83% of survey respondents agreed that intuitions are useful in that latter way, and I agree too — I have no problem with people using intuition to generate possible ideas; I just have a problem with people saying “This feels intuitively true to me, so it must be true.”

A Sleeping Beauty paradox

Imagine that one Sunday afternoon, Sleeping Beauty is taking part in a mysterious science experiment. The experimenter tells her:

“I’m going to put you to sleep tonight, and wake you up on Monday. Then, out of your sight, I’m going to flip a fair coin. If it lands Heads, I will send you home. If it lands Tails, I’ll put you back to sleep and wake you up again on Tuesday, and then send you home. But I will also, if the coin lands Tails, administer a drug to you while you’re sleeping that will erase your memory of waking up on Monday.”

So when she wakes up, she doesn’t know what day it is, but she does know that the possibilities are:

  • It’s Monday, and the coin will land either Heads or Tails.
  • It’s Tuesday, and the coin landed Tails.

We can rewrite the possibilities as:

  • Heads, Monday
  • Tails, Monday
  • Tails, Tuesday

I’d argue that since it’s a fair coin, you should place 1/2 probability on the coin being Heads and 1/2 on the coin being Tails. So the probability on (Heads, Monday) should be 1/2. I’d also argue that since Tails means she wakes up once on Monday and once on Tuesday, and since those two wakings are indistinguishable from each other, you should split the remaining 1/2 probability evenly between (Tails, Monday) and (Tails, Tuesday). So you end up with:

  • Heads, Monday (P = 1/2)
  • Tails, Monday (P = 1/4)
  • Tails, Tuesday (P = 1/4)

So, is that the answer? It seems indisputable, right? Not so fast. There’s something troubling about this result. To see what it is, imagine that Beauty is told, upon waking, that it’s Monday. Given that information, what probability should she assign to the coin landing Heads? Well, if you look at the probabilities we’ve assigned to the three scenarios, you’ll see that conditional on it being Monday, Heads is twice as likely as Tails: P(Heads | Monday) = (1/2) / (1/2 + 1/4) = 2/3, versus 1/3 for Tails. And why is that so troubling? Because the coin hasn’t been flipped yet. How can Beauty claim that a fair coin is twice as likely to come up Heads as Tails?
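
If you want to experiment before answering, here is a minimal simulation sketch (my own addition, not part of the original puzzle). One caveat: it counts frequencies over awakenings, and whether per-awakening frequencies are the right way to model Beauty’s credence is itself part of what’s in dispute.

    import random

    def awakenings(trials=100_000):
        # Simulate the experiment; return one (coin, day) record per awakening.
        records = []
        for _ in range(trials):
            coin = random.choice(["Heads", "Tails"])
            records.append((coin, "Monday"))       # Beauty always wakes on Monday
            if coin == "Tails":
                records.append((coin, "Tuesday"))  # Tails adds a Tuesday waking
        return records

    records = awakenings()
    for combo in [("Heads", "Monday"), ("Tails", "Monday"), ("Tails", "Tuesday")]:
        print(combo, round(records.count(combo) / len(records), 3))

    # Frequency of Heads among Monday awakenings only:
    mondays = [r for r in records if r[1] == "Monday"]
    heads_given_monday = sum(r[0] == "Heads" for r in mondays) / len(mondays)
    print("freq(Heads | Monday) ≈", round(heads_given_monday, 3))

Comparing the frequencies it prints against the 1/2, 1/4, 1/4 assignment above is a good way to pressure-test the reasoning.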

Can you figure out what’s wrong with the reasoning in this post?
