You’re such an essentialist!

My latest video blog is about essentialism, and why it’s damaging to your rationality — and your happiness.

My Little Pony: Reality is Magic!

(Cross-posted at 3 Quarks Daily)

You probably won’t be very surprised to hear that someone decided to reboot the classic ’80s My Little Pony cartoon based on a line of popular pony toys. After all, sequels and shout-outs to familiar brands have become the foundation of the entertainment industry. The new ‘n improved cartoon, called My Little Pony: Friendship is Magic, follows a nerdy intellectual pony named Twilight Sparkle, who learns about the magic of friendship through her adventures with the other ponies in Ponyville.

But you might be surprised to learn that My Little Pony: Friendship is Magic’s biggest accolades have come not from its target audience of little girls and their families, but from a fervent adult fanbase. I first heard of My Little Pony: Friendship is Magic from one of my favorite sources of intelligent pop culture criticism, The Onion’s AV Club, which gave the show an enthusiastic review last year. (I had my suspicions at first that the AV Club’s enthusiasm was meant to be ironic, but they insisted that the show wore down their defenses, and that it was “legitimately entertaining and lots of fun.” So either their appreciation of My Little Pony: Friendship is Magic is genuine, or irony has gotten way more poker-faced than I realized.)

And you might be even more taken aback to learn that many, if not most, of those adult My Little Pony: Friendship is Magic fans are men and that they’ve even coined a name for themselves: “Bronies.” At least, I was taken aback. In fact, my curiosity was sufficiently piqued that I contacted Purple Tinker, the person in charge of organizing the bronies’ seasonal convention in New York City. Purple Tinker was friendly and helpful, saying that he had read about my work in the skeptic/rationalist communities, and commended me as only a brony could: “Bravo – that’s very Twilight Sparkle of you!”

But when I finally sat down and watched the show, I realized that while Purple Tinker may be skeptic-friendly, the show he loves is not. The episode I watched, “Feeling Pinkie Keen,” centers on a pony named Pinkie Pie, who interprets the twitches in her tail and the itches on her flank as omens of some impending catastrophe, big or small. “Something’s going to fall!” Pinkie Pie shrieks, a few beats before Twilight Sparkle accidentally stumbles into a ditch. The other ponies accept her premonitions unquestioningly, but empirically-minded Twilight Sparkle is certain that Pinkie Pie’s successes are either a hoax or a coincidence. She’s determined to get to the bottom of the matter, shadowing Pinkie Pie in secret to observe whether the premonitions disappear when there’s no appreciative audience around, and hooking Pinkie Pie up to what appears to be a makeshift MRI machine which Twilight Sparkle apparently has lying around her house, to see whether the premonitions are accompanied by any unusual brain activity.

Meanwhile, Twilight Sparkle is being more than a little snotty about how sure she is that she’s right, and how she just can’t wait to see the look on Pinkie Pie’s face when Pinkie Pie gets proven wrong. Which, of course, is intended to make it all the more enjoyable to the audience when — spoiler alert! — Twilight Sparkle’s investigations yield no answers, and Pinkie Pie’s premonitions just keep coming true. Finally, Twilight Sparkle admits defeat: “I’ve learned that there are some things you just can’t explain. But that doesn’t mean they’re not true. You just have to choose to believe.”

Nooo, Twilight Sparkle, no! You are a disgrace to empirical ponies everywhere. And I’m not saying that because Twilight Sparkle “gave in” and concluded that Pinkie Pie’s premonitions were real. After all, sometimes it is reasonable to conclude that an amazing new phenomenon is more likely to be real than a hoax, or a coincidence, or an exaggeration, etc. It depends on the strength of the evidence. Rather, I’m objecting to the fact that Twilight Sparkle seems to think that because she was unable to figure out how the premonitions worked, science itself has failed.

Twilight Sparkle is an example of a Straw Vulcan, a character who supposedly represents the height of rationality and logic, but who ends up looking like a fool compared to other, less rational characters. That’s because the Straw Vulcan brand of rationality isn’t real rationality. It’s a gimpy caricature, crafted that way either because the writers want to make rationality look bad, or because they genuinely think that’s what rationality looks like. In a talk I gave at this year’s Skepticon IV conference, I described some characteristic traits of a Straw Vulcan, such as an inability to enjoy life or feel emotions, and an unwillingness to make any decisions without all the information. Now I can add another trait to my list, thanks to Twilight Sparkle: the attitude that if we can’t figure out the explanation, then there isn’t one.

Do you think it’s possible that anyone missed the anti-inquiry message?  Hard to imagine, given the fact that the skeptical pony seems mainly motivated by a desire to prove other people wrong and gloat in their faces, and given her newly-humbled admission that “sometimes you have to just choose to believe.” But just in case there was anyone in the audience who didn’t get it yet, the writers also included a scene in which Twilight Sparkle is only able to escape from a monster by jumping across a chasm – and she’s scared, but the other ponies urge her on by crying out, “Twilight Sparkle, take a leap of faith!”

And yes, of course, My Little Pony: Friendship is Magic is “just” a kids’ cartoon, and I can understand why people might be tempted to roll their eyes at me for taking its message seriously. I don’t know to what extent children internalize the messages of the movies, TV, books, and other media they consume. But I do know that there are plenty of messages that we, as a society, would rightfully object to if we found them in a kids’ cartoon – imagine if one of the ponies played dumb to win the favors of a boy-pony and then they both lived happily ever after. Or if an episode ended with Twilight Sparkle chirping, “I’ve learned you should always do whatever it takes to impress the cool ponies!” So why aren’t we just as intolerant of a show that tells kids: “You can either be an obnoxious skeptic, or you can stop asking questions and just have faith”?

How rationality can make your life more awesome

(Cross-posted at Rationally Speaking)

Sheer intellectual curiosity was what first drew me to rationality (by which I mean, essentially, the study of how to view the world as accurately as possible). I still enjoy rationality as an end in itself, but it didn’t take me long to realize that it’s also a powerful tool for achieving pretty much anything else you care about. Below, a survey of some of the ways that rationality can make your life more awesome:

Rationality alerts you when you have a false belief that’s making you worse off.

You’ve undoubtedly got beliefs about yourself – about what kind of job would be fulfilling for you, for example, or about what kind of person would be a good match for you. You’ve also got beliefs about the world – say, about what it’s like to be rich, or about “what men want” or “what women want.” And you’ve probably internalized some fundamental maxims, such as: When it’s true love, you’ll know. You should always follow your dreams. Natural things are better. Promiscuity reduces your worth as a person.

Those beliefs shape your decisions about your career, what to do when you’re sick, what kind of people you decide to pursue romantically and how you pursue them, how much effort you should be putting into making yourself richer, or more attractive, or more skilled (and skilled in what?), more accommodating, more aggressive, and so on.

But where did these beliefs come from? The startling truth is that many of our beliefs became lodged in our psyches rather haphazardly. We’ve read them, or heard them, or picked them up from books or TV or movies, or perhaps we generalized from one or two real-life examples.

Rationality trains you to notice your beliefs, many of which you may not even be consciously aware of, and ask yourself: where did those beliefs come from, and do I have good reason to believe they’re accurate? How would I know if they’re false? Have I considered any other, alternative hypotheses?

Rationality helps you get the information you need.

Sometimes you need to figure out the answer to a question in order to make an important decision about, say, your health, or your career, or the causes that matter to you. Studying rationality reveals that some ways of investigating those questions are much more likely to yield the truth than others. Just a few examples:

“How should I run my business?” If you’re looking to launch or manage a company, you’ll have a huge leg up over your competition if you’re able to rationally determine how well your product works, or whether it meets a need, or what marketing strategies are effective.

“What career should I go into?” Before committing yourself to a career path, you’ll probably want to learn about the experiences of people working in that field. But a rationalist also knows to ask herself, “Is my sample biased?” If you’re focused on a few famous success stories from the field, that doesn’t tell you very much about what a typical job is like, or what your odds are of making it in that field.

It’s also an unfortunate truth that not every field uses reliable methods, and so not every field produces true or useful work. If that matters to you, you’ll need the tools of rationality to evaluate the fields you’re considering working in. Fields whose methods are controversial include psychotherapy, nutrition science, economics, sociology, consulting, string theory, and alternative medicine.

“How can I help the world?” Many people invest huge amounts of money, time, and effort in causes they care about. But if you want to ensure that your investment makes a difference, you need to be able to evaluate the relevant evidence. How serious of a problem is, say, climate change, or animal welfare, or globalization? How effective is lobbying, or marching, or boycotting? How far do your contributions go at charity X versus charity Y?

Rationality shows you how to evaluate advice.

Learning about rationality, and how widespread irrationality is, sparks an important realization: You can’t assume other people have good reasons for the things they believe. And that means you need to know how to evaluate other people’s opinions, not just based on how plausible their opinions seem, but based on the reliability of the methods they used to form those opinions.

So when you get business advice, you need to ask yourself: What evidence does she have for that advice, and are her circumstances similar enough to mine for it to apply? The same is true when a friend swears by some particular remedy for acne, or migraines, or cancer. Is he repeating a recommendation made by multiple doctors? Or did he try it once and get better? What kind of evidence is reliable?

In many cases, people can’t articulate exactly how they’ve arrived at a particular belief; it’s just the product of various experiences they’ve had and things they’ve heard or read. But once you’ve studied rationality, you’ll recognize the signs of people who are more likely to have accurate beliefs: People who adjust their level of confidence to the evidence for a claim; people who actually change their minds when presented with new evidence; people who seem interested in getting the right answer rather than in defending their own egos.

Rationality saves you from bad decisions.

Knowing about the heuristics your brain uses and how they can go wrong means you can escape some very common, and often very serious, decision-making traps.

For example, people often stick with their original career path or business plan for years after the evidence has made clear that it was a mistake, because they don’t want their previous investment to be wasted. That’s thanks to the sunk cost fallacy. Relatedly, people often allow cognitive dissonance to convince them that things aren’t so bad, because the prospect of changing course is too upsetting.

And in many major life decisions, such as choosing a career, people envision one way things could play out (“I’m going to run my own lab, and live in a big city…”) – but they don’t spend much time thinking about how probable that outcome is, or what the other probable outcomes are. The narrative fallacy is that situations imagined in high detail seem more plausible, regardless of how probable they actually are.

Rationality trains you to step back from your emotions so that they don’t cloud your judgment.

Depression, anxiety, anger, envy, and other unpleasant and self-destructive emotions tend to be fueled by what cognitive therapy calls “cognitive distortions,” irrationalities in your thinking such as jumping to conclusions based on limited evidence; focusing selectively on negatives; all-or-nothing thinking; and blaming yourself, or someone else, without reason.

Rationality breaks your habit of automatically trusting your instinctive, emotional judgments, encouraging you instead to notice the beliefs underlying your emotions and ask yourself whether those beliefs are justified.

It also trains you to notice when your beliefs about the world are being colored by what you want, or don’t want, to be true. Beliefs about your own abilities, about the motives of other people, about the likely consequences of your behavior, about what happens after you die, can be emotionally fraught. But a solid training in rationality keeps you from flinching away from the truth – about your situation, or yourself — when learning the truth can help you change it.

The Straw Vulcan: Hollywood’s illogical approach to logical decisionmaking

I gave a talk at Skepticon IV last weekend about Vulcans and why they’re a terrible example of rationality. I go through five principles of Straw Vulcan Rationality(TM), give examples from Star Trek and from real life, and explain why they’re mistaken:

  1. Being rational means expecting everyone else to be rational too.
  2. Being rational means you should never make a decision until you have all the information.
  3. Being rational means never relying on intuition.
  4. Being rational means eschewing emotion.
  5. Being rational means valuing only quantifiable things — like money, productivity, or efficiency.

In retrospect, I would’ve streamlined the presentation more, but I’m happy with the content —  I think it’s an important and under-appreciated topic. The main downside was just that everyone wanted to talk to me afterwards, not about rationality, but about Star Trek. I don’t know the answer to your obscure trivia questions, Trekkies!

 

UPDATE: I’m adding my diagrams of the Straw Vulcan model of ideal decisionmaking, and my proposed revisions to it, since those slides don’t appear in the video:

The Straw Vulcan view of the relationship between rationality and emotion.

After my revisions.

How Should Rationalists Approach Death?

“How Should Rationalists Approach Death?” That’s the title of the panel I’m moderating this weekend at Skepticon, and I couldn’t be more excited. It’s a big topic – we won’t figure it all out in an hour, but I know we’ll get people to think. Do common beliefs about death make sense? How can we find comfort about our mortality? Should we try to find comfort about death? What should society be doing about death?

I managed to get 4 fantastic panelists, all of whom I respect and admire:

  • Greta Christina is an author, blogger, and speaker extraordinaire. Her writing has appeared in multiple magazines and newspapers, including Ms., Penthouse, Chicago Sun-Times, On Our Backs, and Skeptical Inquirer. I’ve been thrilled to see her becoming a well-known and respected voice in the secular community. She delivered the keynote address at the Secular Student Alliance’s 2010 Conference, and has been on speaking tours around the country.
  • James Croft is a candidate for an Ed.D. at Harvard and works with the Humanist Chaplaincy at Harvard. I had the pleasure of meeting James two years ago at an American Humanist Association conference, where we talked and argued for hours. Eloquent, gracious, and sharp, he’s a great model of intellectual engagement. He’s able to disagree agreeably, but also change his mind when the occasion calls for it.
  • Eliezer Yudkowsky co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI), where he works as a full-time Research Fellow. He’s written must-read essays on Bayes’ Theorem and human rationality as well as great works of fiction. Have you heard me rave about Harry Potter and the Methods of Rationality? That’s him. His writings, especially on the community blog LessWrong, have influenced my thinking quite a bit.
  • And some lady named Julia Galef, who apparently writes a pretty cool blog with her brother, Jesse.

To give you a taste of what to expect, I chose two passages about finding hope in death – one from Greta, the other from Eliezer.

Greta:

But we can find ways to frame reality — including the reality of death — that make it easier to deal with. We can find ways to frame reality that do not ignore or deny it and that still give us comfort and solace, meaning and hope. And we can offer these ways of framing reality to people who are considering atheism but have been taught to see it as inevitably frightening, empty, and hopeless.

And I’m genuinely puzzled by atheists who are trying to undercut that.

Eliezer:

I wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?

If you’re coming to Skepticon – and you should, it’s free! – you need to be there for this panel.

The Penrose Triangle of Beliefs

For a long time, I didn’t think it was truly possible to believe contradictory things at the same time. In retrospect, the model I was using of belief-formation was roughly: when people decide that they believe some new claim, it’s because they’ve compared it to their pre-existing beliefs and found no contradictions. Of course, that may be how an ideal-reasoning Artificial Intelligence would build up its set of beliefs*, but we’re not ideal reasoners. It’s not at all difficult to find contradictions in most anyone’s belief set. Just for example,

  1. “The Bible is the word of God.”
  2. “The Bible says you’ll go to hell if you don’t accept Jesus.”
  3. “My atheist friends aren’t going to hell, they’re good people.”

Or – to take an example I’ve witnessed many times:

  1. “The reason it’s not okay to have sex with animals is because they can’t consent to it.”
  2. “Animals can’t consent to being killed and eaten.”
  3. “It’s fine to kill and eat animals.”

Anyway, despite the fact that I had overwhelming evidence demonstrating that, yes, people are quite capable of believing contradictory things, I was having a hard time understanding how they did it. Until I took another look at an old optical illusion called the Penrose Triangle.

The Penrose Triangle

If you look at the whole triangle at once, you can’t see it in enough detail to notice its impossibility. All you can do, at that zoomed-out level, is get a sense of whether it looks, roughly, like a plausible object. And it does.

Alternatively, you can look closely at one part of the picture at a time. Then you can actually check the details of the picture to make sure they make sense, rather than relying on the vague “feels plausible” kind of examination you did at the zoomed-out holistic level. But the catch is that in order to scrutinize the picture in detail, you have to zoom in to one subset of the picture at a time – and each corner of the triangle, on its own, is perfectly consistent.

And I think that the Penrose triangle is an apt visual metaphor for what contradictory beliefs must look like in our heads. We don’t notice the contradictions in our beliefs because we either “examine” our beliefs at the zoomed-out level (e.g., asking ourselves, “Do my beliefs about God make sense?” and concluding “Yes” because no obvious contradictions jump out at us in response to that query)… or we examine our beliefs in detail, but only a couple at a time. And so we never notice that our beliefs form an impossible object.

*(Well, to be more precise, you’d probably want an ideal-reasoning AI to assign degrees of belief, or credence, to claims, rather than a binary “believe”/”disbelieve.” So then the way to avoid contradictions would be to prohibit your AI from assigning credence in a way that violated the laws of probability. So, for example, it would never assign 99% probability to A being true and 99% probability to B being true, but only 5% probability to “A and B being true.”)
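To make that constraint concrete, here is a minimal sketch (my own illustration, not anything from the post) of a coherence check for a single conjunction, using the bounds that the laws of probability place on the credence in “A and B”:

```python
# A minimal sketch of the coherence constraint described in the footnote:
# credences for A, B, and "A and B" must respect the bounds implied by
# the laws of probability.

def conjunction_is_coherent(p_a: float, p_b: float, p_a_and_b: float) -> bool:
    """Return True if the three credences can coexist in some probability distribution."""
    lower = max(0.0, p_a + p_b - 1.0)   # A and B can't overlap less than this
    upper = min(p_a, p_b)               # the conjunction can't exceed either conjunct
    return lower <= p_a_and_b <= upper

# The footnote's example: 99% on A, 99% on B, but only 5% on "A and B".
print(conjunction_is_coherent(0.99, 0.99, 0.05))  # False -- incoherent
print(conjunction_is_coherent(0.99, 0.99, 0.98))  # True  -- allowed
```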

Calibrating our Confidence


It’s one thing to know how confident we are in our beliefs; it’s another to know how confident we should be. Sure, the de Finetti’s Game thought experiment gives us a way to put a number on our confidence – quantifying how likely we feel we are to be right. But we still need to learn to calibrate that sense of confidence against actual results.  Are we appropriately confident?
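As a refresher, here is a rough sketch of that elicitation idea as I understand it (the function and numbers below are my own toy setup, not anything from the earlier post): you compare betting on your belief against betting on a lottery with a known chance of paying out, and search for the probability at which you become indifferent.

```python
# A rough sketch of an equivalent-lottery elicitation.
# prefers_belief_over_lottery(p) should return True if the person would rather
# bet on their belief than on a lottery that pays out with probability p.

def elicit_credence(prefers_belief_over_lottery, tolerance: float = 0.01) -> float:
    """Binary-search for the lottery probability at which the person is indifferent."""
    low, high = 0.0, 1.0
    while high - low > tolerance:
        mid = (low + high) / 2
        if prefers_belief_over_lottery(mid):
            low = mid    # belief still preferred: credence is above mid
        else:
            high = mid   # lottery preferred: credence is below mid
    return (low + high) / 2

# Example: someone whose true credence is about 0.7.
print(round(elicit_credence(lambda p: p < 0.7), 2))  # ~0.7
```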

Taken at face value, if we express 90% confidence 100 times, we expect to be proven wrong an average of 10 times. But very few people take the time to see whether that’s the case. We can’t trust our memories on this, as we’re probably more likely to remember our accurate predictions and forget all the offhand predictions that fell flat. If we want to get an accurate sense of how well we’ve calibrated our confidence, we need a better way to track it.

Well, here’s a way: PredictionBook.com. While working on my last post, I stumbled on this nifty project. Its homepage features the words “How Sure Are You?” and “Find out just how sure you should be, and get better at being only as sure as the facts justify.” Sounds perfect, right?

It allows you to enter your prediction, how confident you are, and when the answer will be known.  When the time comes, you record whether or not you were right and it tracks your aggregate stats.  Your predictions can be private or public – if they’re public, other people can weigh in with their own confidence levels and see how accurate you’ve been.

(This site isn’t new to rationalists: Eliezer and the LessWrong community noticed it a couple years ago, and LessWronger Gwern has been using it to – among other things – track Intrade predictions.)

Since I don’t know who’s using the site and how, I don’t know how seriously to take the following numbers. So take this chart with a heaping dose of salt. But I’m not surprised that the confidences entered are higher than the likelihood of being right:

Predicted Certainty    50%    60%    70%    80%    90%    100%    Total
Actual Certainty       37%    52%    58%    70%    79%     81%
Sample Size            350    544    561    558    709     219     2941
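For the curious, here is a minimal sketch (my own, assuming a simple list of records rather than PredictionBook’s actual data format) of how a calibration table like the one above gets computed: group the recorded predictions by stated confidence, then compare each stated confidence with the fraction of those predictions that actually came true.

```python
from collections import defaultdict

# Each record: (stated confidence, whether the prediction came true).
# These are made-up example entries, not real PredictionBook data.
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False),
    (1.0, True), (1.0, False),
]

buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    accuracy = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}: right {accuracy:.0%} of the time (n={len(outcomes)})")
```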

Sometimes the miscalibration matters more than others. In Mistakes Were Made (but not by me), Tavris and Aronson describe the overconfidence police interrogators feel about their ability to discern honest denials from false ones. In one study, researchers selected videos of police officers interviewing suspects who were denying a crime – some innocent and some guilty.

Kassin and Fong asked forty-four professional detectives in Florida and Ontario, Canada, to watch the tapes. These professionals averaged nearly fourteen years of experience each, and two-thirds had had special training, many in the Reid Technique. Like the students [in a similar study], they did no better than chance, yet they were convinced that their accuracy rate was close to 100 percent. Their experience and training did not improve their performance. Their experience and training simply increased their belief that it did.

As a result, more people are falsely imprisoned as prosecutors steadfastly pursue convictions for people they’re sure are guilty. This is a case in which poor calibration does real harm.

Of course, it’s often a more benign issue. Since finding PredictionBook, I see everything as a prediction to be measured. A coworker and I were just discussing plans to have a group dinner, and had the following conversation (almost word for word):

Her: “How do you feel about squash?”
Me: “I’m uncertain about squash…”
Her: “What about sauteed in butter and garlic?”
Me: “That has potential. My estimation of liking it just went up slightly.”
*Runs off to enter prediction*

I’ve already started making predictions in hopes that tracking my calibration errors will help me correct them. I wish PredictionBook had tags – it would be fascinating (and helpful!) to know that I’m particularly prone to misjudge whether I’ll like foods or that I’m especially well-calibrated at predicting the winners of sports games.

And yes, I will be using PredictionBook on football this season. Every week I’ll try to predict the winners and losers, and see whether my confidence is well-placed. Honestly, I expect to see some homer-bias and have too much confidence in the Ravens.  Isn’t exposing irrationality fun?
