21 September 2013

De Méré's Problem

Often cited as the problem that began the theory of probability is the so-called chevalier de Méré's problem. Chevalier de Méré was the nickname of Antoine Gombaud, a 17th-century French writer. He would be quite upset to learn that textbooks today often call him a gambler, because not only is that false, he disliked gamblers in general. In a letter to his friend Mitton, he writes: "But I confess also that on my part I deplore you who are confined to gambling, longing for nothing but luck, without eyes for anything but this artificial world".

De Méré was a sociable man with a significant presence in the royal court. In fact, he was so sociable that he could befriend a nerd like Pascal. This friendship turned out to be beneficial both for them and for the rest of the world. De Méré brought a couple of interesting probability problems to Pascal's attention, and Pascal, working with Fermat, went on to lay the foundations of modern probability theory by solving them. De Méré still gets quite a bit of credit for asking the right questions (life strategy: befriend smart people and ask them hard questions) and having those problems named after him.

There are two problems called de Méré's problems, but neither was truly original to de Méré. In fact, one of them is more commonly known as the problem of points and had been around for at least a couple hundred years by then. The other problem also likely predates de Méré, but there is no other good name for it, so we exclusively call it de Méré's problem.

De Méré's problem goes like this: how many times do you have to throw a pair of dice so that the chance of throwing a double six at least once is greater than one-half?

The problem isn't too difficult for anyone who has studied some probability (showing how much we have progressed since then). The chance of throwing a double six at least once is one minus the chance of never throwing a double six. The chance of the latter on a single throw is 35/36, so if the dice are thrown n times, the chance of at least one double six is 1 - (35/36)^n. At n = 24, this probability is about 0.4914, and at n = 25, it is about 0.5055. So the answer is 25 throws.
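For the curious, here is a quick sanity check of those two numbers, sketched in Python using exact fractions (the function name is mine, not anything standard):

```python
from fractions import Fraction

def p_at_least_one_double_six(n):
    """Chance of at least one double six in n throws of a pair of dice."""
    return 1 - Fraction(35, 36) ** n

print(float(p_at_least_one_double_six(24)))  # about 0.4914
print(float(p_at_least_one_double_six(25)))  # about 0.5055
```

Using Fraction avoids any worry about floating-point error when comparing against exactly one-half.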

So why did this problem confuse de Méré? My lecture notes say he falsely thought that the chance of getting at least one six in four throws is (1/6) times 4, and consequently computed the probability wrong. Well, it turns out he wasn't so stupid. He in fact claims that the chance of rolling at least one six when a die is rolled four times is greater than one-half, and correctly gives the odds as 671 to 625. What really troubled de Méré was that this result doesn't square with the so-called Old Gambler's Rule.
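We can confirm de Méré's 671 to 625 figure ourselves; a quick sketch with exact arithmetic:

```python
from fractions import Fraction

# Chance of at least one six in four rolls of a single die:
# one minus the chance of four consecutive non-sixes.
p = 1 - Fraction(5, 6) ** 4
print(p)            # 671/1296
print(p / (1 - p))  # 671/625, i.e. odds of 671 to 625 in favor
```

So out of the 1296 equally likely outcomes of four rolls, 671 contain a six and 625 do not, exactly the odds de Méré gives.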

The critical value is the least number of trials you need so that the chance of at least one success is greater than one-half (essentially the median of a geometric random variable). So we can rephrase the problem as finding the critical value for rolling a double six. Suppose one experiment has a one in N chance of success and critical value C, and another experiment has a one in M chance of success and critical value D. Then the Old Gambler's Rule states that the ratio of N to M is equal to the ratio of C to D.

According to this rule, since rolling at least one six (a one in 6 chance) has critical value 4, rolling at least one double six (a one in 36 chance) should have critical value 24. Of course, as we have shown, this is not true, and de Méré keenly noticed this (supposedly through experience).

So the real problem is not so much finding the critical value of rolling a double six as finding what's wrong with the Old Gambler's Rule. To investigate, let's be a little more modern and denote the probability of success as p rather than 1/N. As we have shown, the chance of at least one success in C trials is 1 - (1 - p)^C. Now, consider the problem of finding a real (rather than integer) number x such that 1 - (1 - p)^x = 1/2. Formulated this way, there is a unique value x for each value of p, and we can still easily recover the critical value by taking the ceiling of x.

Solving for x, we get x = -log(2)/log(1 - p). For small values of p, log(1 - p) is close to -p, so we can approximate x as log(2)/p. In fact, this is the estimate given by de Moivre in The Doctrine of Chances.
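Here is a quick comparison of the exact solution and de Moivre's approximation for our two experiments (function names are mine):

```python
import math

def exact_x(p):
    """Real x solving 1 - (1 - p)**x = 1/2."""
    return -math.log(2) / math.log(1 - p)

def de_moivre_x(p):
    """De Moivre's small-p approximation log(2)/p."""
    return math.log(2) / p

for p in (1 / 6, 1 / 36):
    print(p, exact_x(p), de_moivre_x(p))
```

For p = 1/36 the exact value is about 24.6 and the approximation about 24.95; both round up to the critical value 25. For p = 1/6 the approximation is noticeably worse, since p is not so small.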

Going back to the Old Gambler's Rule, let experiment 1 have success probability p with critical value equal to the ceiling of x. Similarly, let experiment 2 have success probability q with critical value equal to the ceiling of y. Then we have the relationship x/y = log(1 - q)/log(1 - p). If p and q are close to 0, we get the approximate relationship x/y = q/p. And finally, letting p = 1/N and q = 1/M, we get x/y = N/M, which looks similar to the Old Gambler's Rule (which stated C/D = N/M).
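We can put numbers to this derivation for de Méré's two experiments; a minimal sketch:

```python
import math

p, q = 1 / 6, 1 / 36  # so N = 6 and M = 36

x = -math.log(2) / math.log(1 - p)
y = -math.log(2) / math.log(1 - q)

print(x / y)  # exact ratio, about 0.1545
print(q / p)  # the rule's ratio N/M = 1/6, about 0.1667
```

That gap of about 0.012 in the ratio is exactly what pushes the rule's prediction of 24 below the true critical value of 25.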

So we see there is a grain of truth in the Old Gambler's Rule, but also why it failed for de Méré. First, the rule effectively approximates -log(1 - p) as p, and this approximation has large error for larger values of p. Second, the rule is stated in terms of integers rather than real numbers, adding another layer of approximation error.

This post relied heavily on Oystein Ore's paper Pascal and the Invention of Probability Theory and Prakash Gorroochurn's book Classic Problems of Probability.
