Why Decision Theory Tells You to Eat ALL the Cupcakes

Imagine that you have a big task coming up that requires an unknown amount of willpower – you might have enough willpower to finish, you might not. You’re gearing up to start when suddenly you see a delicious-looking cupcake on the table. Do you indulge in eating it? According to psychology research and decision-theory models, the answer isn’t simple.

If you resist the temptation to eat the cupcake, current research indicates that you’ve depleted your stores of willpower (psychologists call it ego depletion), which makes you less likely to have the willpower to finish your big task. So maybe you should save your willpower for the big task ahead and eat the cupcake!

…But if you’re convinced already, hold on a second. How easily you give in to temptation gives evidence about your underlying strength of will. After all, someone with weak willpower will find the reasons to indulge more persuasive. If you end up succumbing to the temptation, it’s evidence that you’re a person with weaker willpower, and are thus less likely to finish your big task.

How can eating the cupcake cause you to be more likely to succeed while also giving evidence that you’re more likely to fail?

Conflicting Decision Theory Models

The strangeness lies in the difference between two conflicting models of how to make decisions. Luke Muehlhauser describes them well in his Decision Theory FAQ:

This is not a “merely verbal” dispute (Chalmers 2011). Decision theorists have offered different algorithms for making a choice, and they have different outcomes. Translated into English, the [second] algorithm (evidential decision theory or EDT) says “Take actions such that you would be glad to receive the news that you had taken them.” The [first] algorithm (causal decision theory or CDT) says “Take actions which you expect to have a positive effect on the world.”

The crux of the matter is how to handle the fact that we don’t know how much underlying willpower we started with.

Causal Decision Theory asks, “How can you cause yourself to have the most willpower?”

It focuses on the fact that, in any state, spending willpower resisting the cupcake causes ego depletion. Because of that, it says our underlying amount of willpower is irrelevant to the decision. The recommendation stays the same regardless: eat the cupcake.

Evidential Decision Theory asks, “What will give evidence that you’re likely to have a lot of willpower?”

We don’t know whether we’re starting with strong or weak will, but our actions can reveal that one state or another is more likely. It’s not that we can change the past – Evidential Decision Theory doesn’t look for that causal link – but our choice indicates which possible version of the past we came from.

Yes, seeing someone undergo ego depletion would be evidence that they lost a bit of willpower.  But watching them resist the cupcake would probably be much stronger evidence that they have plenty to spare.  So you would rather “receive news” that you had resisted the cupcake.
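The tension between the two algorithms can be made concrete with a toy model. All the numbers below (the prior on being strong-willed, each type’s chance of finishing the task, the depletion cost, and how often each type resists) are invented purely for illustration; the post doesn’t supply any.

```python
# Toy model of the cupcake dilemma. Two hidden "willpower types",
# each with a chance of finishing the big task; resisting the cupcake
# costs a fixed amount of that chance (ego depletion).

p_strong = 0.5                        # prior P(strong-willed)
succ = {"strong": 0.8, "weak": 0.4}   # P(finish task | type), pre-depletion
depletion = 0.2                       # success-chance cost of resisting

# How likely each type is to resist the cupcake (used only by EDT):
p_resist = {"strong": 0.9, "weak": 0.3}

def cdt(action):
    """CDT: intervene on the action; the hidden type keeps its prior."""
    cost = depletion if action == "resist" else 0.0
    return sum(p * max(succ[t] - cost, 0.0)
               for t, p in [("strong", p_strong), ("weak", 1 - p_strong)])

def edt(action):
    """EDT: condition on the action as evidence about the hidden type."""
    prior = {"strong": p_strong, "weak": 1 - p_strong}
    like = {t: (p_resist[t] if action == "resist" else 1 - p_resist[t])
            for t in prior}
    z = sum(like[t] * prior[t] for t in prior)
    post = {t: like[t] * prior[t] / z for t in prior}  # Bayes update
    cost = depletion if action == "resist" else 0.0
    return sum(post[t] * max(succ[t] - cost, 0.0) for t in post)

for a in ("eat", "resist"):
    print(f"{a:6s}  CDT={cdt(a):.3f}  EDT={edt(a):.3f}")
```

With these particular numbers, CDT prefers eating (the depletion cost is a pure loss from its point of view), while EDT prefers resisting (resisting is strong evidence of being the strong-willed type), reproducing the conflict in the post.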

A Third Option

Each of these models has strengths and weaknesses, and a number of thought experiments – especially the famous Newcomb’s Paradox – have sparked ongoing discussions and disagreements about which decision-theory model is best.

One attempt to improve on standard models is Timeless Decision Theory, a method devised by Eliezer Yudkowsky of the Machine Intelligence Research Institute.  Alex Altair recently wrote up an overview, stating in the paper’s abstract:

When formulated using Bayesian networks, two standard decision algorithms (Evidential Decision Theory and Causal Decision Theory) can be shown to fail systematically when faced with aspects of the prisoner’s dilemma and so-called “Newcomblike” problems. We describe a new form of decision algorithm, called Timeless Decision Theory, which consistently wins on these problems.

It sounds promising, and I can’t wait to read it.

But Back to the Cupcakes

For our particular cupcake dilemma, there’s a way out:

Precommit. You need to promise – right now! – to always eat the cupcake when it’s presented to you. That way you don’t spend any willpower on resisting temptation, but your indulgence doesn’t give any evidence of a weak underlying will.

And that, ladies and gentlemen, is my new favorite excuse for why I ate all the cupcakes.

RS episode #54: The “isms” episode

In RS #54 — dubbed “The isms episode” — Massimo and I ask, “Is the fundamental nature of the world knowable by science alone?”, looking at the issue through the lenses of a series of related philosophical positions: determinism, reductionism, physicalism, and naturalism. All of those “isms” take a stance on the question of whether there are objectively “correct” ways to interpret scientific facts — like physical laws, or causality — and if so, how do we decide what the correct interpretation is? Along the way, we debate the nature of emergent properties, whether math is discovered or invented, and whether it’s even logically possible for “supernatural” things to exist.

http://www.rationallyspeakingpodcast.org/show/rs54-the-isms-episode.html

 

RS #48: Philosophical Counseling

Can philosophy be a form of therapy? On the latest episode of Rationally Speaking, we interview Lou Marinoff, a philosopher who founded the field of “philosophical counseling,” in which people pay philosophers to help them deal with their own personal problems using philosophy. For example, one of Lou’s clients wanted advice on whether to quit her finance job to pursue a personal goal; another sought help deciding how to balance his son’s desire to go to Disneyland with his own fear of spoiling his children.

As you can hear in the interview, I’m interested but I’ve got major reservations. I certainly think that philosophy can improve how you live your life — I’ve got some great examples of that from personal experience. But I’m skeptical of Lou’s project for two related reasons: first, because I think most problems in people’s lives are best addressed by a combination of psychological science and common sense. They require a sophisticated understanding of how our decision-making algorithms go wrong — for example, why we make decisions that we know are bad for us, how we end up with distorted views of our situations and of our own strengths and weaknesses, and so on. Those are empirical questions, and philosophy’s not an empirical field, so relying on philosophy to solve people’s problems is going to miss a large part of the picture.

The other problem is that it wasn’t at all clear to me how philosophical counselors choose which philosophy to cite. For any viewpoint in the literature, you can pretty reliably find an opposing one. In the case of the father afraid of spoiling his kid, Lou cited Aristotle to argue for an “all things in moderation” policy. But, I pointed out, he could just as easily have cited Stoic philosophers arguing that happiness lies in relinquishing desires.  So if you can pick and choose any philosophical advice you want, then aren’t you really just giving your client your own opinion about his problem, and just couching your advice in the words of a prestigious philosopher?

Hear more at Rationally Speaking Episode 48, “Philosophical Counseling.”

What do philosophers think about intuition?

Earlier this year I complained, on Rationally Speaking, about the fact that so many philosophers think it’s sufficient to back up their arguments by citing “intuition.” It’s a tricky term to pin down, but generally philosophers cite intuition when they think something is “clearly true” but can’t demonstrate it with logic or evidence. So, for example, philosophers of ethics will often claim that things are “good” or “bad” by citing their intuition. And philosophers of mind will cite their intuitions to argue that certain things would or wouldn’t be conscious (for example, David Chalmers relies on intuition to argue for the theoretical possibility of “philosophical zombies,” creatures that would act and respond exactly like conscious human beings, but which wouldn’t be conscious).

I cited many examples, not only of philosophers using intuitions as evidence, but of philosophers acknowledging that appeals to intuition are ubiquitous in the field. (“Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or the falsifiers of theories,” wrote one philosopher.) That’s worrisome, to me, because the whole point of philosophy is allegedly to figure out whether our intuitive judgments make sense. It’s also worrisome to me because intuitions vary sharply from person to person; for example, I don’t agree at all with G. E. Moore’s argument that it is intuitively obvious that it’s “better” to have a planet full of sunsets and waterfalls than one with filth, even if no one ever gets to see that planet. (He may prefer a universe that contains Planet Waterfall to one that contains Planet Filthy, but I don’t think that makes the former objectively “better.”)

In the comment thread under his response-post, Massimo objected that intuitions are not, in fact, widespread in philosophy. “Julia, a list of cherry picked citations an argument doesn’t make,” he wrote, and he asked me if I had randomly polled philosophers. I hadn’t, of course.

But I recently came across two people who did. Kuntz & Kuntz’s “Surveying Philosophers About Philosophical Intuition,” from the March issue of the Review of Philosophy and Psychology, surveyed 282 academic philosophers and found that 51% of them thought that intuitions are “useful to justification in philosophical methods.”

Because the term “intuition” is so nebulous, the researchers also presented their survey respondents with a list of some of the more common ways of defining intuition, and asked them to rank how apt they thought the definitions were. The top two “most apt” definitions of intuition were the following:

  1. “Judgment that is not made on the basis of some kind of observable and explicit reasoning process”
  2. “An intellectual happening whereby it seems that something is the case without arising from reasoning, or sensorial perceiving, or remembering.”

The survey also shed light on one reason why Massimo, a philosopher of science, might have underestimated the prevalence of appeals to intuition in philosophy as a whole: “In regard to the usefulness of intuitions to justification, our results also revealed that philosophers of science expressed significantly lower agreement than philosophers doing metaphysics, epistemology, ethics, and philosophy of mind,” Kuntz and Kuntz wrote. That squares with my experience, too — most of the philosophy of science I’ve read has been grounded in logic, math, and evidence.

Another important side point the researchers make is that there’s more than one way to use your intuitions. Philosophers certainly do use them as justification for claims, but they also use intuitions to generate claims which they then justify using more rigorous methods like logic and evidence. 83% of survey respondents agreed that intuitions are useful in that latter way, and I agree too — I have no problem with people using intuition to generate possible ideas, I just have a problem with people saying “This feels intuitively true to me, so it must be true.”

A Sleeping Beauty paradox

Imagine that one Sunday afternoon, Sleeping Beauty is taking part in a mysterious science experiment. The experimenter tells her:

“I’m going to put you to sleep tonight, and wake you up on Monday. Then, out of your sight, I’m going to flip a fair coin. If it lands Heads, I will send you home. If it lands Tails, I’ll put you back to sleep and wake you up again on Tuesday, and then send you home. But I will also, if the coin lands Tails, administer a drug to you while you’re sleeping that will erase your memory of waking up on Monday.”

So when she wakes up, she doesn’t know what day it is, but she does know that the possibilities are:

  • It’s Monday, and the coin will land either Heads or Tails.
  • It’s Tuesday, and the coin landed Tails.

We can rewrite the possibilities as:

  • Heads, Monday
  • Tails, Monday
  • Tails, Tuesday

I’d argue that since it’s a fair coin, you should place 1/2 probability on the coin being Heads and 1/2 on the coin being Tails. So the probability on (Heads, Monday) should be 1/2. I’d also argue that since Tails means she wakes up once on Monday and once on Tuesday, and since those two wakings are indistinguishable from each other, you should split the remaining 1/2 probability evenly between (Tails, Monday) and (Tails, Tuesday). So you end up with:

  • Heads, Monday  (P = 1/2)
  • Tails, Monday (P = 1/4)
  • Tails, Tuesday  (P = 1/4)

So, is that the answer? It seems indisputable, right? Not so fast. There’s something troubling about this result. To see what it is, imagine that Beauty is told, upon waking, that it’s Monday. Given that information, what probability should she assign to the coin landing Heads? Well, if you look at the probabilities we’ve assigned to the three scenarios, you’ll see that conditional on it being Monday, Heads is twice as likely as Tails. And why is that so troubling? Because the coin hasn’t been flipped yet. How can Beauty claim that a fair coin is twice as likely to come up Heads as Tails?

Can you figure out what’s wrong with the reasoning in this post?
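One way to probe the assignment empirically is to simulate the experiment many times and count awakenings, with the caveat that whether “fraction of awakenings” is even the right notion of probability here is part of what’s in dispute. A minimal sketch:

```python
import random

random.seed(0)

# Count how often each (coin, day) situation occurs among Beauty's awakenings.
counts = {("Heads", "Monday"): 0, ("Tails", "Monday"): 0, ("Tails", "Tuesday"): 0}

for _ in range(100_000):
    coin = random.choice(["Heads", "Tails"])
    counts[(coin, "Monday")] += 1       # she is always woken on Monday
    if coin == "Tails":
        counts[(coin, "Tuesday")] += 1  # on Tails, she is also woken on Tuesday

total = sum(counts.values())
for situation, n in counts.items():
    print(situation, round(n / total, 3))
```

Counted this way, each of the three awakening-situations occurs about 1/3 of the time, not the 1/2, 1/4, 1/4 assigned above; and conditional on it being Monday, Heads and Tails come out equally likely. Whether that counting procedure is the right way to assign Beauty’s credences is left as part of the puzzle.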

Spinoza, Godel, and Theories of Everything

On the latest episode of Rationally Speaking, Massimo and I have an entertaining discussion with Rebecca Goldstein, philosopher, author, and recipient of the prestigious MacArthur “genius” grant. There’s a pleasing symmetry to her published oeuvre. Her nonfiction books, about people like philosopher Baruch Spinoza and mathematician Kurt Godel, have the aesthetic sensibilities of novels, while her novels (most recently, “36 Arguments for the Existence of God: A Work of Fiction”) have the kind of weighty philosophical discussions one typically finds in non-fiction.

It’s a wide-ranging and fun conversation. My main complaint concerns her treatment of Spinoza. Basically, people say he “believed God was nature.” That always made me roll my eyes, because it’s not making a claim about the world; it’s merely redefining the word “God” to mean “nature,” for no good reason. I voice this complaint to Rebecca during the show and she defends Spinoza; you can judge her response for yourself, but I found it weak — it sounded like she was just pointing out some dubious similarities between nature and the typical conception of God.

Nevertheless! It’s certainly worth a listen:

http://www.rationallyspeakingpodcast.org/show/rs45-rebecca-newberger-goldstein-on-spinoza-goedl-and-theori.html

Reflections on pain, from the burn unit

Deep frying: even more hazardous to your health than I realized.

Yesterday marked the end of my 18-day stay in New York Presbyterian Hospital’s burn unit, where I landed after accidentally overturning a pot of hot cooking oil onto myself. I ended up with second- and third-degree burns over much of my legs, but after skin graft surgery and some physical therapy, I can walk again, albeit unsteadily, and I have skin on my legs again, albeit ugly skin.

I learned a lot during my hospital stay. Unfortunately, nearly all of that hard-earned knowledge was in very specific topics – the ideal cocktail of pills, the least-uncomfortable position to sleep in, etc. – which will neither be applicable in other contexts nor interesting to other people. But I did leave with one realization about pain, and how we experience it.

I wasn’t in constant pain for the entire 18 days, by any means, but every day featured at least a few painful experiences, from the minor (frequent shots) to the major (scraping the dead skin off the burns). I tried a handful of methods to deal with it. Deep breathing helped a bit, as did pulling my own hair. One friend suggested I try imagining myself existing at a point halfway across the room; that helped a little, but only because our philosophical argument over whether it was even possible to pull off such a mental stunt briefly distracted me from my throbbing legs.

But the one thing that did seem to dramatically affect my pain level was my belief about what was causing the pain. At one point, I was lying on my side and a nurse was pulling a bandage off of one of my burns; I couldn’t see what she was doing, but it felt like the bandage was sticking to the wound, and it was agonizing. But then she said: “Now, keep in mind, I’m just taking off the edges of the bandage here, so this is all normal skin. It just hurts because it’s like pulling tape off your skin.” And once she said that — once I started picturing tape being pulled off of normal, intact skin rather than an open wound — the pain didn’t bother me nearly as much. It really drove home to me how much of my experience of pain is psychological; if I believe the cause of the pain is something frightening or upsetting, then the pain seems much worse.

And in fact, I’d had a similar thought a few months ago, which I’d then forgotten about until the burn experience called it back to mind. I’d been carrying a heavy shopping bag on my shoulder one day, and the weight of the bag’s straps was cutting into the skin on my shoulder. But I barely noticed it. And then it occurred to me that if I had been experiencing that exact same sensation on my shoulder, in the absence of a shopping bag, it would have seemed quite painful. The fact that I knew the sensation was caused by something mundane and harmless reduced the pain so much it didn’t even register in my mind as a negative experience.

Of course, I probably can’t successfully lie to myself about what’s causing me pain, so there’s a limit to how directly useful this observation can be for managing pain in the future. But it was indirectly useful for me, because it proved to me something I’d heard but never quite believed: that the unpleasantness of pain is substantially (entirely?) psychologically constructed. A bit of subsequent reading led me to some fascinating science that underlines that conclusion – for example, the fact that the physical sensation of pain is processed by one region of the brain while the unpleasantness of that sensation is processed by another region. And the existence of a condition called pain asymbolia, in which people with certain kinds of brain damage say they’re able to feel pain but that they don’t find it the slightest bit unpleasant.

The relationship between pain and unpleasantness is a philosophically interesting one, in fact. Unpleasantness is usually considered to be built into the very definition of pain, so it’s quite confusing to talk about experiencing different levels of unpleasantness from the same level of pain. And it’s even more confusing to talk about experiencing no unpleasantness from pain, as people with pain asymbolia do. The idea feels almost as incoherent as that of being happy but not enjoying it, or doubling a number without making it any bigger.

But observing my own experiences of pain a bit more closely has made it a little easier for me to wrap my mind around the idea. I really did feel, when the nurse informed me that she was pulling the bandage off of intact skin rather than burned skin, like the pain was the same but the unpleasantness was lessened. It’s harder to imagine pain with no unpleasantness, but perhaps my shopping bag example sheds a little light: I felt the sensation of something cutting into my shoulder, but it didn’t bother me. So maybe someone with pain asymbolia would experience a cutting sensation as if they’re just carrying a heavy shopping bag, with no “Warning!” and “This is awful!” alarms going off in their mind.

I’ll have to think more about the relationship between pain and the experience of pain, because it’s still confusing to me, but at least I can feel like I got some new philosophical food for thought out of my 18 days at NY Presbyterian. Not to mention the very practical, un-philosophical lesson: don’t leave your giant pots of oil near the edge of the stove.

(ETA: I completely forgot, while writing this, that Jesse had touched on this very subject last month! Wow, Jesse — in retrospect, that’s an eerily prescient post.)
