# How has Bayes’ Rule changed the way I think?

June 4, 2013

People talk about how Bayes’ Rule is so central to rationality, and I agree. But given that I don’t go around plugging numbers into the equation in my daily life, how does Bayes actually affect my thinking?

A short answer, in my new video below:

(This is basically what the title of this blog was meant to convey — quantifying your uncertainty.)

Great post. Along with QBism this seems to point to the best explanation of the relationship between our beliefs, math and the world. Rather than being Platonic, math/logic are the best description of our beliefs’ relationship to the world. Because of that best possible relationship to the brute facts of existence, math and logic seem to be external forms. But just as QBism reduces the magical role of the observer to the mundane relationship of her views to the world, so too Bayes’ Rule insists that frequentists are claiming an ontological accuracy for probability that only in certain cases maps reality one to one. Even then, probabilities are just descriptions of our beliefs; they just happen to be really well-founded beliefs.

But even Bayes can get off track. He tried to use his rule for apologetics, and Bostrom is happy to do the same with his Simulation Argument. I think the problem arises when you force an irrational prior into the deck by arbitrarily assigning it a probability. There are certain priors, like God and pseudo-gods, that cannot be reevaluated. Can Bayes’ Rule be applied to the Liar’s Paradox?

“Would this evidence be likelier if my prior belief was true – or if something else was true?”

The question suggested by ClearerThinking.org: “How much more likely am I to encounter this evidence if my theory is true, rather than false?”

These two questions would lead the questioner to weigh the evidence differently.

Which is the proper way to interpret new evidence?

Your question is getting at the difference between frequentist and Bayesian statistics. The two questions that you quote are asking for a maximum likelihood estimate (MLE), where you choose the belief corresponding to the highest likelihood of the observation. In her example, that would mean comparing P(accident | good driver) vs P(accident | bad driver), and (if trying to make a binary decision) choosing whether or not she’s a good driver based on which is larger (which would of course be ‘bad driver’). Note that this in no way assumes or uses any prior probabilities/beliefs about whether or not she is a good driver.

This is in contrast to the reasoning and decision making she’s referring to, whereby you also have prior beliefs P(good driver) and P(bad driver), and can then use Bayes’ Rule to combine these with the likelihoods from above, to then compare the a posteriori probabilities, P(good driver | accident) vs P(bad driver | accident). Choosing based on which of these is larger is the maximum a posteriori (MAP) rule.

Note that if the space of hypotheses is discrete, then the MLE and the MAP decisions coincide when the prior is uniform, which in this case would mean P(good driver) = P(bad driver) = 0.5.
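To make the MLE-vs-MAP contrast concrete, here is a minimal sketch in Python using the driver example. The likelihood and prior values are made up purely for illustration; only the comparison logic matters:

```python
# Made-up numbers for illustration: likelihood of observing an accident
# under each hypothesis, and prior beliefs about the driver.
p_accident_given_good = 0.1   # P(accident | good driver)
p_accident_given_bad = 0.4    # P(accident | bad driver)
p_good = 0.9                  # prior P(good driver)
p_bad = 0.1                   # prior P(bad driver)

# MLE: pick the hypothesis under which the observation is most likely.
# This ignores the priors entirely.
mle = "good" if p_accident_given_good > p_accident_given_bad else "bad"

# MAP: weight each likelihood by its prior (Bayes' Rule) and pick the
# hypothesis with the larger posterior. The shared normalizing constant
# P(accident) cancels out of the comparison, so we can skip it.
posterior_good = p_accident_given_good * p_good
posterior_bad = p_accident_given_bad * p_bad
map_estimate = "good" if posterior_good > posterior_bad else "bad"

print(mle)           # -> bad: the accident alone favors 'bad driver'
print(map_estimate)  # -> good: the strong prior on 'good driver' wins
```

With these numbers the two rules disagree: the accident is four times likelier under “bad driver,” but a 0.9 prior on “good driver” is enough to flip the posterior comparison. Setting both priors to 0.5 would make the MAP answer match the MLE, as noted above.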

I won’t rehash the whole 200-year-old frequentist vs Bayesian debate here, except to clarify that there’s nothing controversial about Bayes’ Rule itself; it’s certainly the thing to use if you have all the components. The controversy surrounds whether you should use it when you have to make up the priors somewhat arbitrarily.