There’s another neat way of doing this, using the probability that two random integers are coprime: https://en.wikipedia.org/wiki/Coprime_integers#Probability_of_coprimality

I’m getting:

First trial’s estimate (results[0]): 3.14192111803

95% Confidence Interval: (3.14344, 3.14412)

95% Confidence Interval Size: 0.00068

–––––

Code:

from math import gcd  # fractions.gcd was removed in Python 3.9; math.gcd replaces it
import random

import numpy as np
import scipy.stats as st

total = 0
count = 0
results = []

for trial in range(100):
    for i in range(10000):
        n1 = int(random.random() * 100000)
        n2 = int(random.random() * 100000)
        if gcd(n1, n2) == 1:
            count += 1
        total += 1
    # count/total accumulates across trials; P(coprime) = 6/pi^2, so pi = sqrt(6/p)
    results.append((6 / (count / total)) ** 0.5)

print(results[0])
print(st.t.interval(0.95, len(results) - 1, loc=np.mean(results), scale=st.sem(results)))

You can get 8-core CPUs pretty cheap these days, which gets you to around 1.9B pairs per second, or 112B per minute.

Or faster if you’re willing to vectorize it for AVX2 …

–––––

I don’t understand. Why should the p-value have anything to do with the magnitude of the effect?

–––––

answer = (pA + pD)*(1/3) + pB*(1/3) + pC*(1/3) = 1/3, since the probabilities must sum to one; I believe this would be Ike’s answer. It holds regardless of the individual probabilities, which answers Thomas’s question.

Note that one could relax the assumption of equal priors to get

answer = (pA + pD)*Prior(25) + pB*Prior(50) + pC*(1 - Prior(25) - Prior(50))

This formula is a legitimate answer: it says exactly how the result depends on information not given in the problem.
