A viscountess, a radio DJ, a reality star, a vlogger, a comedian, several sportspeople and an assortment of actors and presenters. These, more or less, are the celebrities lined up to compete in the 2019 season of Strictly Come Dancing.

Outside their own fields, few people yet know much about them. Over the 13 weeks or so of shows running up to Christmas, viewers will at least learn how well the contestants can dance. But how much will their success in the competition have to do with their foxtrot, and to what extent will it be, literally, the luck of the draw that sees the victors lift the trophy in December?

A seminal study published in 2010 looked at public voting at the end of episodes of the various Idol television pop singing contests and found that singers who were later on in the bill got a disproportionately higher share of the public vote than those who had preceded them.

This was explained as a “recency effect” – meaning that those performing later are more recent in the memory of people who were judging or voting. Interestingly, a different study, of wine tasting, suggested that there is also a significant “primacy effect” which favours the wines that people taste first (as well, to some extent, as last).

## A little bias is in order

What would happen if each performance were evaluated immediately after it ended, instead of at the end of the show? Surely this would eliminate the benefit of going last, since every act would be equally recent in the memory. The problem in implementing this is that the public need to see all the performers before they can decide which of them deserves their vote.

You might think the solution is to score each performer immediately after each performance – by complementing the public vote with the marks of a panel of expert judges. And, of course, Strictly Come Dancing (or Dancing with the Stars if you are in the US) does just this. So there should be no “recency effect” in the expert voting – because the next performer does not take to the stage until the previous performer has been scored.

We might expect in this case that the later performers taking to the dance floor should have no advantage over earlier performing contestants in the expert evaluations – and, in particular, there should be no “last dance” advantage.

We decided to test this out using a large data set of every performance ever danced on the UK and US versions of the show – going right back to the debut show in 2004. Our findings, published in Economics Letters, proved not only surprising, but almost a bit shocking.

## Last shall be first

Contrary to expectations, we found the same sequence order bias by the expert panel judges – who voted after each act – as by the general public, voting after all performances had concluded.

We applied a range of statistical tests to allow for differences in the quality of the various performers, so quality could be ruled out as the explanation for the high marks. This worked for all but the opening spot of the night, which we found was generally filled by one of the better performers.

So the findings matched the Idol study in demonstrating that the last dance slot should be most coveted, but also that the first to perform scored better than expected. The result resembles a J-curve: the first and the later-performing contestants disproportionately gained higher expert panel scores.

Although we believe the production team’s choice of opening performance may play a role in this, our best explanation of the key sequence biases is as a type of “grade inflation” in the expert panel’s scoring. In particular, we interpret the “order” effect as deriving from studio audience pressure – a little like the published evidence of unconscious bias exhibited by referees in response to spectator pressure. The influence on the judges of increasing studio acclaim and euphoria as the contest progresses to a conclusion is likely to be further exacerbated by the proximity of the judges to the audience.

When the votes from the general public augment the expert panel scores – as is the case in Strictly Come Dancing – the biases observed in the expert panel scores are amplified.

All of which means that, based on past series, the best place in the running order is last, and second is the least successful place to perform.

The implications of this are worrying if they spill over into the real world. Is there an advantage in going last (or first) into the interview room for a job – even if the applicants are evaluated between interviews? The same effects could have implications in so many situations, such as sitting down in a dentist’s chair or doctor’s surgery, appearing in front of a magistrate or having your examination script marked by someone with a huge pile of work to get through.

One study, reported in the New York Times in 2011, found that experienced parole judges granted freedom about 65% of the time to the first prisoner to appear before them on a given day, and to the first after lunch – but to almost nobody by the end of a morning session.

So our research confirms what has long been suspected – that the order in which performers (and quite possibly interviewees) appear can make a big difference. So it’s now time to look more carefully at the potential dangers this can pose more generally for people’s daily lives – and what we can do to best address the problem.

The bus arrives every twenty minutes on average, though sometimes the interval between buses is a bit longer and sometimes a bit shorter. Still, it’s 20 minutes taken as an average, or an average of three buses an hour. So you emerge onto the main road from a side lane at some random time, and come straight upon the bus stop. How long can you expect to wait on average for the next bus to arrive?

The intuitive answer is 10 minutes, since this is exactly half way along the average interval between buses, and if your usual wait is rather longer than this, then you have been unlucky.

But is this right? The Inspection Paradox suggests that in most circumstances you will actually be quite lucky only to wait ten minutes for the next bus to arrive.

Let’s examine this more closely. The bus arrives every 20 minutes on average, or three times an hour. But that is only an average. If the buses actually do arrive at exactly 20 minute intervals, then your expected wait is indeed 10 minutes (the mid-point of the interval between bus arrivals). But if there is any variation around that average, things change, for the worse.

Say for example, that half the time the buses arrive at a ten minute interval and half the time at a 30 minute interval. The overall average is now 20 minutes, but from your point of view it is three times more likely that you’ll turn up during the 30 minute interval than during the ten minute interval. Your appearance at the stop is random, and as such is more likely to take place during a long interval between two buses arriving than during a short interval. It is like randomly throwing a dart at a timeline 30 minutes long. You could well hit the ten minute interval but it is much more likely that you will hit the 30 minute interval.

So let’s see what this means for our expected wait time. If you randomly arrive during the long (30 minute) interval, you can expect to wait 15 minutes. If you randomly arrive during the short (10 minute) interval, you can expect to wait 5 minutes. But there is three times the chance you will arrive during the long interval, and therefore three times the chance of waiting 15 minutes as five minutes. So you expected wait is 3×15 minutes plus 1x 5 minutes, divided by four. This equals 50 divided by 4 or 12.5 minutes.
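This length-weighted average is easy to confirm with a quick Monte Carlo sketch (my own illustration, not part of the original argument; the function name is made up for the example):

```python
import bisect
import random

def average_wait(gaps, trials=100_000, seed=1):
    """Arrive at a uniformly random moment on a timeline of bus gaps
    and record how long you wait for the next bus, averaged over trials."""
    arrivals = []            # cumulative bus arrival times
    t = 0.0
    for g in gaps:
        t += g
        arrivals.append(t)
    rng = random.Random(seed)
    total_wait = 0.0
    for _ in range(trials):
        moment = rng.uniform(0, arrivals[-1])
        # Find the next bus after our random arrival moment.
        i = min(bisect.bisect_right(arrivals, moment), len(arrivals) - 1)
        total_wait += arrivals[i] - moment
    return total_wait / trials

# Gaps alternate between 10 and 30 minutes: the average gap is 20 minutes,
# but a random arrival lands in a 10-minute gap only 1/4 of the time,
# so the expected wait is (1/4) * 5 + (3/4) * 15 = 12.5 minutes, not 10.
print(round(average_wait([10, 30] * 500), 1))   # ≈ 12.5
```

With constant 20-minute gaps the same function returns an average wait of about 10 minutes, matching the intuitive answer.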

In conclusion, the buses arrive on average every 20 minutes, but your expected wait time is not half of that (10 minutes) but more, in every case except when the buses arrive at exact 20 minute intervals. The greater the dispersion around the average, the greater the amount by which your expected wait time exceeds the average wait time. This is the ‘Inspection Paradox’, which states that whenever you ‘inspect’ a process you are likely to find that things take (or last) longer than their ‘uninspected’ average. What seems like the persistence of bad luck is actually the laws of probability and statistics playing out their natural course.

Once made aware of the paradox, it seems to appear everywhere.

For example, take the case where the average class size at an institution is 30 students. If you interview random students from the institution and ask them how big their class is, you will usually obtain an average rather higher than 30.

Let’s take a stylised example to explain why. Say that the institution has class sizes of either ten or 50, with equal numbers of both, so the overall average class size is 30. But in selecting a random student, it is five times more likely that he or she will come from a class of 50 students than of ten. So for every one student who replies ‘10’ to your enquiry about their class size, there will be five who answer ‘50’. The average class size thrown up by your survey is therefore (5 × 50 + 1 × 10) divided by 6, which equals 260/6 = 43.3.

So the act of inspecting the class sizes actually increases the average obtained compared to the uninspected average. The only circumstance in which the inspected and uninspected averages coincide is when every class size is equal.
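The class-size arithmetic can be written out directly – a short sketch of my own (variable names are illustrative) comparing the plain average with the student-weighted average a survey would produce:

```python
# Ten classes: five of size 10 and five of size 50, so the plain average is 30.
classes = [10] * 5 + [50] * 5

plain_average = sum(classes) / len(classes)

# A randomly chosen *student* sits in a class of size c with probability
# proportional to c, so the survey average weights each class by its size.
survey_average = sum(c * c for c in classes) / sum(classes)

print(plain_average)             # 30.0
print(round(survey_average, 1))  # 43.3
```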

The range of real-life cases where this occurs is almost boundless. For example, you visit the gym at a random time of day and ask a random sample of those who are there how long they normally exercise for. The answer you obtain will likely well exceed the average of all those who attend the gym that day because it is more likely that when you turn up you will come across those who exercise for a long time than a short time.

Once you know about the Inspection Paradox, the world and our perception of our place in it, is never quite the same again.

**Exercise**

You arrive at someone’s home and are ushered into the garden. You know that a train passes the end of the garden every half an hour on average but the trains are actually scheduled so that half pass by with an interval of a quarter of an hour and half with an interval of 45 minutes. Given that you have no clue when the last train passed by and the scheduled interval between that train and the next, how long can you expect to wait for the next train?

**Links and References**

Amir D. Aczel. Chance: A Guide to Gambling, Love, the Stock market and Just About Everything Else. 18 May, 2016. NY: Thunder’s Mouth Press.

On the Persistence of Bad Luck (and Good). Amir Aczel. Sept. 4, 2013. http://blogs.discovermagazine.com/crux/2013/09/04/on-the-persistence-of-bad-luck-and-good/#.XXJL0ihKh3g

The Waiting Time Paradox, or, Why is My Bus Always Late? https://jakevdp.github.io/blog/2018/09/13/waiting-time-paradox/

Probably Overthinking It. August 18, 2015. The Inspection Paradox is Everywhere. http://allendowney.blogspot.com/2015/08/the-inspection-paradox-is-everywhere.html

To illustrate the Expected Value Paradox, let us propose a coin-tossing game, in which you gain 50% of what you bet if the coin lands Heads and lose 40% if it lands Tails. What is the expected value of a single play of this game?

The Expected Value can be calculated as the sum of the probabilities of each possible outcome in the game times the return if that outcome occurs.

Say, for example, the unit stake for each play of the game is £10. In this case, the gain if the coin lands Heads is 50% x £10 = £5, and the loss if the coin lands Tails is 40% x £10 = £4.

In this case, the expected value (given a fair coin, with 0.5 chance of Heads and 0.5 chance of Tails) = 0.5 x £5 – 0.5 x £4 = £0.5, or 50 pence.

So the Expected Value of the game is 5% of the stake. This is the positive net expectation for each play of the game (toss of the coin).

Let’s see how this plays out in an actual experiment in which 100 people play the game. What do we expect would be the average final balance of the players?

The expected gain from the 50 players tossing Heads = 50 x £5 = £250.

The expected loss from the 50 players tossing Tails = 50 x £4 = £200.

So, the net gain over 100 players = £250 – £200 = £50.

The average net gain of the 100 players = £50/100 = £0.50, or 50 pence.

Equivalently, for a £1 stake, a play of the game returns £1.50 on Heads and 60p on Tails, so the Expected Value = 0.5 × £1.50 + 0.5 × £0.60 = £1.05. As above, this is an expected gain of 5%.

From two coin tosses, our best estimate is 25 Heads-Heads, 25 Tails-Tails, 25 Heads-Tails and 25 Tails-Heads.

The Expected Value over the two coin tosses = 0.25 × (1.5)^{2} + 0.25 × (0.6)^{2} + 0.25 × (1.5 × 0.6) + 0.25 × (0.6 × 1.5) = £1.1025 (= 1.05^{2}).
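The two-toss expectation can be checked by enumerating every equally likely sequence of tosses – a sketch of my own (the function name is illustrative):

```python
from itertools import product

FACTORS = {"H": 1.5, "T": 0.6}   # Heads multiplies the stake by 1.5, Tails by 0.6

def ensemble_ev(n_tosses, stake=1.0):
    """Average final balance over all 2**n equally likely toss sequences."""
    total = 0.0
    for seq in product("HT", repeat=n_tosses):
        balance = stake
        for toss in seq:
            balance *= FACTORS[toss]
        total += balance
    return total / 2 ** n_tosses

print(round(ensemble_ev(1), 4))   # 1.05
print(round(ensemble_ev(2), 4))   # 1.1025, i.e. 1.05 squared
```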

However many coin tosses the group throws, the Expected Value is positive.

Take now the case of one person playing the game through time. Say there are four coin tosses, for a stake of £10.

From four coin tosses, our best estimate is 2 Heads and 2 Tails.

Balance after 2 Heads and 2 Tails = £10 × 1.5 × 1.5 × 0.6 × 0.6 = £8.10.

The balance goes from £10 to £15 to £22.50 to £13.50 to £8.10. This is a net loss.

To clarify, we bet £10. The coin lands Heads. We now have £15. We bet £15 now on the next coin toss. It lands Heads again. We now have £22.50. We bet £22.50 now on the next coin toss. It lands Tails. Now we are back to £13.50. We bet this £13.50 on the next coin toss. It lands Tails again and we are down to £8.10. This is a net loss on the original stake of £10.

If we throw the same number of Heads and Tails after tossing the coin N times, the balance is more generally multiplied by the following factor.

1.5^{N/2} × 0.6^{N/2} = (1.5 × 0.6)^{N/2} = 0.9^{N/2}

Eventually, the entire stake used for betting is lost.

Herein lies the paradox. When many people play the game a fixed number of times, the average return is positive, but when a fixed number of people play the game many times, they should expect to lose most of their money.
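A simulation makes the contrast vivid – this is a sketch of my own with made-up sample sizes, not part of the original text. The ensemble mean is pulled up by a few huge winners, while the typical (median) player ends with far less than the stake:

```python
import random

def simulate_players(players=10_000, tosses=50, stake=10.0, seed=7):
    """Final balances for many players each making `tosses` plays of the game."""
    rng = random.Random(seed)
    finals = []
    for _ in range(players):
        balance = stake
        for _ in range(tosses):
            balance *= 1.5 if rng.random() < 0.5 else 0.6
        finals.append(balance)
    finals.sort()
    mean = sum(finals) / players
    median = finals[players // 2]
    return mean, median

mean, median = simulate_players()
# The typical player's balance tracks 10 * 0.9**(50/2), about 72p,
# even though the per-play Expected Value of the game is positive.
print(round(mean, 2), round(median, 2))
```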

This is a demonstration of the difference between what is termed ‘time averaging’ and ‘ensemble averaging.’

Thinking of the game as a random process, time averaging is taking the average value as the process continues. Ensemble averaging is taking the average value of many processes running for some fixed amount of time.

Processes where there is no difference between time and ensemble averaging are called ‘ergodic processes.’ In the real world, however, many processes, including notably in finance, are non-ergodic.

Say that in an election two parties, A and B, attract some percentage of voters, x% and y% respectively. This is not the same thing as saying that over the course of their voting lives, each individual votes for party A in x% of elections and for party B in y% of elections. These two concepts are distinct.

Again, if we wish to determine the most visited parts of a city, we could take a snapshot in time of how many people are in neighbourhood A, how many in neighbourhood B, etc. Alternatively, we could follow a particular individual or a few individuals, over a period of time and see how often they visit neighbourhood A, neighbourhood B, etc. The first analysis (the ensemble) may not be representative over a period of time, while the second (time) may not be representative of all the people.

An ergodic process is one in which the two types of statistic give the same results. In an ergodic system, time is irrelevant and has no direction. Say, for example, that 100 people rolled a die once, and the total of the scores is divided by 100. This finite-sample average approaches the ensemble average as more and more people are included in the sample. Now, take the case of a single person rolling a die 100 times, with the total scored divided by 100. This finite-sample average would eventually approach the time average.
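For the die, the two averages can be computed side by side – a quick sketch of my own with illustrative sample sizes:

```python
import random

rng = random.Random(42)
N = 100_000

# Ensemble average: N different people each roll one die.
ensemble_avg = sum(rng.randint(1, 6) for _ in range(N)) / N

# Time average: one person rolls the same die N times.
time_avg = sum(rng.randint(1, 6) for _ in range(N)) / N

# Both converge on the same value, 3.5 -- the hallmark of an ergodic process.
print(round(ensemble_avg, 2), round(time_avg, 2))
```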

An implication of ergodicity is that the result of ensemble averaging will be the same as that of time averaging.

And here is the key point: In the case of ensemble averages, it is the size of the sample that eventually removes the randomness from the sample. In the case of time averages, it is the time devoted to the process that removes randomness.

In the dice rolling example, both methods give the same answer, subject to errors. In this sense, rolling dice is an ergodic system.

However, if we now bet on the results of the dice rolling game, wealth does not follow an ergodic system. If a player goes bankrupt, he stays bankrupt, so the time average of wealth can approach zero over time as time passes, even though the ensemble value of wealth may increase.

As a new example take the case of 100 people visiting a casino, with a certain amount of money. Some may win, some may lose, but we can infer the house edge by counting the average percentage loss of the 100 people. This is the ensemble average. This is different to one person going to the casino 100 days in a row, starting with a set amount. The probabilities of success derived from a collection of people does not apply to one person. The first is the ‘ensemble probability’, the second is the ‘time probability’ (the second is concerned with a single person through time).

Here is the key point: No individual person has sure access to the returns of the market without infinite pockets and an absence of so-called ‘uncle points’ (the point at which he needs, or feels the need, to exit the game). To equate the two is to confuse ensemble averaging with time averaging.

If the player/investor has to reduce exposure because of losses, or maybe retirement or other change of circumstances, his returns will be divorced from those of the market or the game. The essential point is that success first requires survival. This applies to an individual in a different sense to the ensemble.

So where does the money lost by the non-survivors go? It gets transferred to the survivors, some of whom tend to scoop up much or most of the pool – i.e. the money is scooped up by the tail of those who keep surviving, which may be by blind good luck, just as the non-survivors may have been forced out of the game/market by blind bad luck. So the lucky survivors (and in particular the tail-end very lucky survivors) more than compensate for the losses of the unlucky entrants.

The so-called Kelly approach to investment strategy, discussed in a separate chapter, is an investment approach which seeks to respond to the survivor issue.

Say, for example, that the probability of Heads from a coin toss is 0.6, and Heads wins a dollar, but Tails (with a probability of 0.4) loses a dollar. Although the Expected Value of this game is positive, if the response of an investor in the game is to stake all their bankroll on each toss of the coin, the expected time until bankroll bankruptcy is just 1/(1-0.6) = 2.5 tosses of the coin.

The Kelly strategy to optimise the growth rate of the bankroll is to invest a fraction of the bankroll equal to the difference between the probability you will win and the probability you will lose.

In the above example, it means we should in each game bet the fraction of x = 0.6 – 0.4 = 0.2 of the bankroll.

The optimal average growth rate becomes: 0.6 ln (1.2) + 0.4 ln (0.8) ≈ 0.02, i.e. about 2% per toss.
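The growth-rate calculation is easy to verify, and also shows that 0.2 really is the optimal fraction – a sketch of my own (the function name is illustrative):

```python
import math

def log_growth(f, p=0.6):
    """Expected log-growth per toss when betting fraction f of the bankroll
    on an even-money coin that wins with probability p."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

kelly_fraction = 0.6 - 0.4           # p - q = 0.2

print(round(log_growth(0.2), 4))     # 0.0201, about 2% per toss

# Betting more or less than the Kelly fraction grows the bankroll more slowly.
print(log_growth(0.2) > log_growth(0.1))   # True
print(log_growth(0.2) > log_growth(0.3))   # True
```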

If we bet all our bankroll on each coin toss, we will most likely lose the bankroll. This is balanced out over all players by those who with low probability win a large bankroll. For the real-life player, however, it is most relevant to look at the time-average of what may be expected to be won.

In trying to maximise Expected Value, the probability of bankroll bankruptcy soon gets close to one. It is better to invest, say, 20% of bankroll in each game, and maximise long-term average bankroll growth.

In the coin-toss example, it is like supposing that various “I”s are tossing a coin, and the losses of the many of them are offset by the huge profit of the relatively small number of “I”s who do win. But this ensemble-average does not work for an individual for whom a time-average better reflects the one timeline in which that individual exists.

Put another way, because the individual cannot go back in time and the bankruptcy option is always actual, it is not possible to realise the small chance of making the tail-end upside of the positive expectation value of a game/investment without taking on the significant risk of non-survival/bankruptcy. In other words, the individual lives in one universe, on one time path, and so is faced with the reality of time-averaging as opposed to an ensemble average in which one can call upon the gains of parallel investors/game players on parallel timelines in essentially parallel worlds.

To summarise, the difference between 100 people going to a casino and one person going to the casino 100 times is the difference between understanding probability in conventional terms and through the lens of path dependency.

*References and Links*

Time for a change: Introducing irreversible time in economics. __https://www.gresham.ac.uk/lectures-and-events/time-for-a-change-introducing-irreversible-time-in-economics__

What is ergodicity? __https://larspsyll.wordpress.com/2016/11/23/what-is-ergodicity-2/__

Non-ergodic economics, expected utility and the Kelly criterion. __https://larspsyll.wordpress.com/2012/04/21/non-ergodic-economics-expected-utility-and-the-kelly-criterion/__

Ergodicity. __http://squidarth.com/math/2018/11/27/ergodicity.html__

Ergodicity. http://nassimtaleb.org/tag/ergodicity/


One of the most celebrated pieces of correspondence in the history of probability and gambling, and one of which I am particularly fond, involves an exchange of letters between the greatest diarist of all time, Samuel Pepys, and the greatest scientist of all time, Sir Isaac Newton.

The six letters exchanged between Pepys in London and Newton in Cambridge related to a problem posed to Newton by Pepys about gambling odds. The interchange took place between November 22 and December 23, 1693. The ostensible reason for Mr. Pepys’ interest was to encourage the thirst for truth of his young friend, Mr. Smith. Whether Sir Isaac believed that tale or not we shall never know. The real reason, however, was later revealed in a letter written to a confidante by Pepys indicating that he himself was about to stake 10 pounds, a considerable sum in 1693, on such a bet. Now we’re talking!

The first letter to Newton introduced Mr. Smith as a fellow with a “general reputation…in this towne (inferiour to none, but superiour to most) for his maistery [of]…Arithmetick”.

What emerged has come down to us as the aptly named Newton-Pepys problem.

Essentially, the question came down to this:

Which of the following three propositions has the greatest chance of success?

- A: Six fair dice are tossed independently and at least one ‘6’ appears.
- B: 12 fair dice are tossed independently and at least two ‘6’s appear.
- C: 18 fair dice are tossed independently and at least three ‘6’s appear.

Pepys was convinced that C. had the highest probability and asked Newton to confirm this.

Newton chose A as the highest probability, then B, then C, and produced his calculations for Pepys, who wouldn’t accept them.

So who was right? Newton or Pepys?

Well, let’s see.

The first problem is the easiest to solve.

What is the probability of A?

Probability that one throw of a die produces a ‘6’ = 1/6

So probability that one throw of a die does not produce a ‘6’ = 5/6

So probability that six independent throws of a die produce no ‘6’ = (5/6)^{6}

So probability of AT LEAST one ‘6’ in 6 throws = 1 – (5/6)^{6} = 0.6651
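This complement calculation takes one line to check – a quick sketch:

```python
# P(at least one '6' in six throws) = 1 minus P(no '6' in six throws).
p_at_least_one_six = 1 - (5 / 6) ** 6
print(round(p_at_least_one_six, 4))   # 0.6651
```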

So far, so good.

The probability of problem B and probability of problem C are more difficult to calculate and involve use of the binomial distribution, though Newton derived the answers from first principles, by his method of ‘Progressions’.

Both methods give the same answer, but using the more modern binomial distribution is easier.

So let’s do it, introducing along the way the idea of so-called ‘Bernoulli trials’.

The nice thing about a Bernoulli trial is that it has only two possible outcomes.

Each outcome can be framed as a ‘yes’ or ‘no’ question (success or failure).

Let probability of success = p.

Let probability of failure = 1-p.

Each trial is independent of the others and the probability of the two outcomes remains constant for every trial.

An example is tossing a coin. Will it land heads?

Another example is rolling a die. Will it come up ‘6’?

Yes = success (S); No = failure (F).

Let probability of success, P (S) = p; probability of failure, P (F) = 1-p.

So the question: How many Bernoulli trials are needed to get to the first success?

This is straightforward, as the only way to need exactly five trials, for example, is to begin with four failures, i.e. FFFFS.

Probability of this = (1-p) (1-p) (1-p) (1-p) p = (1-p)^{4 }p

Similarly, the only way to need exactly six trials is to begin with five failures, i.e. FFFFFS.

Probability of this = (1-p) (1-p) (1-p) (1-p) (1-p) p = (1-p)^{5} p

More generally, the probability that success starts on trial number n =

(1-p)^{n-1} p

This is a geometric distribution. This distribution deals with the number of trials required for a single success.
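The geometric formula can be sketched as a small function (my own illustration; the name is made up for the example):

```python
def geometric_pmf(n, p):
    """Probability that the first success arrives on trial number n."""
    return (1 - p) ** (n - 1) * p

# Rolling a die until the first '6' (p = 1/6):
print(round(geometric_pmf(1, 1 / 6), 4))   # 0.1667
print(round(geometric_pmf(5, 1 / 6), 4))   # 0.0804

# The probabilities over all possible n sum to 1.
print(round(sum(geometric_pmf(n, 1 / 6) for n in range(1, 1000)), 6))   # 1.0
```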

But what is the chance that the first success takes AT LEAST some number of trials, say 12 trials?

One method is to add the probability that the first success takes exactly 12 trials to the probability of exactly 13 trials, then 14 trials, 15 trials, and so on.

Easier method: the only time you will need **at least** 12 trials is when the first 11 trials are all failures, i.e. with probability (1-p)^{11}.

In a sequence of Bernoulli trials, the probability that the first success takes **at least** n trials is (1-p)^{n-1}.

Let’s take a couple of examples.

Probability that the first success (heads on coin toss) takes at least three trials (tosses of the coin)= (1-0.5)^{2} = 0.25

Probability that the first success (heads on coin toss) takes at least four trials (tosses of the coin)= (1-0.5)^{3} = 0.125

But so far we have only learned how to calculate the probability of one success in so many trials.

What if we want to know the probability of two, or three, or however many successes?

To take an example, what is the probability of exactly two ‘6’s in five throws of the die?

To determine this, we need to calculate the number of ways two ‘6’s can occur in five throws of the die, and multiply that by the probability of each of these ways occurring.

So, probability = number of ways something can occur multiplied by probability of each way occurring.

How many ways can we throw two ‘6’s in five throws of the die?

Where S = Success in throwing a ‘6’, F = Fail in throwing a ‘6’, we have:

SSFFF; SFSFF; SFFSF; SFFFS; FSSFF; FSFSF; FSFFS; FFSSF; FFSFS; FFFSS

So there are 10 ways of throwing two ‘6’s in five throws of the dice.

More formally, we are seeking to calculate how many ways 2 things can be chosen from 5. This is known as ‘5 Choose 2’, written as:

^{5}C_{2} = 10

More generally, the number of ways k things can be chosen from n is:

^{n}C_{k} = n! / ((n-k)! k!)

n! (known as n factorial) = n (n-1) (n-2) … 1

k! (known as k factorial) = k (k-1) (k-2) … 1

Thus, ^{5}C_{2} = 5! / (3! 2!) = (5×4×3×2×1) / ((3×2×1)×(2×1)) = 5×4/(2×1) = 20/2 = 10

So what is the probability of throwing exactly two ‘6’s in five throws of the die, in each of these ten cases? p is the probability of success. 1-p is the probability of failure.

In each case, the probability = p.p.(1-p).(1-p).(1-p)

= p^{2} (1-p)^{3}

Since there are ^{5}C_{2} = 10 such sequences, the probability of exactly two ‘6’s = 10 p^{2} (1-p)^{3}

Generally, in a fixed sequence of n Bernoulli trials, the probability of exactly r successes is:

^{n}C_{r} × p^{r} (1-p)^{n-r}

This is the binomial distribution. Note that it requires that the probability of success on each trial be constant. It also requires only two possible outcomes.

So, for example, what is the chance of exactly 3 heads when a fair coin is tossed 5 times?

^{5}C_{3} × (1/2)^{3} × (1/2)^{2} = 10/32 = 5/16

And what is the chance of exactly 2 sixes when a fair die is rolled five times?

^{5}C_{2} × (1/6)^{2} × (5/6)^{3} = 10 × 1/36 × 125/216 = 1250/7776 = 0.1608
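Both worked examples can be checked with a short binomial sketch (my own; the function name is illustrative, using the standard library’s `math.comb`):

```python
from math import comb

def binomial_pmf(r, n, p):
    """Probability of exactly r successes in n Bernoulli trials."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

print(binomial_pmf(3, 5, 1 / 2))            # 0.3125, i.e. 5/16
print(round(binomial_pmf(2, 5, 1 / 6), 4))  # 0.1608
```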

So let’s now use the binomial distribution to solve the Newton-Pepys problem.

- What is the probability of obtaining *at least* one six with 6 dice?
- What is the probability of obtaining *at least* two sixes with 12 dice?
- What is the probability of obtaining *at least* three sixes with 18 dice?

First, what is the probability of no sixes with 6 dice?

The general formula for the probability of exactly x sixes with n dice is:

P(x sixes with n dice) = ^{n}C_{x} × (1/6)^{x} × (5/6)^{n-x}, x = 0, 1, 2, …, n

Where x is the number of successes.

So, probability of no successes (no sixes) with 6 dice =

^{6}C_{0} × (1/6)^{0} × (5/6)^{6} = 6!/(6! 0!) × 1 × (5/6)^{6} = (5/6)^{6}

*Note that: 0! = 1*

*Here’s the proof: n! = n. (n-1)! *

*At n=1, 1! = 1. (1-1)! *

*So 1 = 0!*

So, where x is the number of sixes, probability of at least one six is equal to ‘1’ minus the probability of no sixes, which can be written as:

P(x≥1) = 1 – P(x=0) = 1 – (5/6)^{6} = 0.665 (to three decimal places).

i.e. probability of at least one six = 1 minus the probability of no sixes.

That is a formal solution to Part 1 of the Newton-Pepys Problem.

*Now on to Part 2.*

Probability of at least two sixes with 12 dice is equal to ‘1’ minus the probability of no sixes minus the probability of exactly one six.

This can be written as:

P (x≥2) = 1 – P(x=0) – P(x=1)

P(x=0) in 12 throws of the dice = (5/6)^{12}

P(x=1) in 12 throws of the dice = ^{12}C_{1} × (1/6)^{1} × (5/6)^{11}

Using ^{n}C_{k} = n! / ((n-k)! k!):

^{12}C_{1} = 12! / (11! 1!) = 12

So, P(x≥2) = 1 – (5/6)^{12} – 12 × (1/6) × (5/6)^{11}

= 1 – 0.112156654 – 0.269175970

= 0.618667376 = 0.619 (to 3 decimal places)

This is a formal solution to Part 2 of the Newton-Pepys Problem.

*Now on to Part 3.*

Probability of at least three sixes with 18 dice is equal to ‘1’ minus the probability of no sixes minus the probability of exactly one six minus the probability of exactly two sixes.

This can be written as:

P (x≥3) = 1 – P(x=0) – P(x=1) – P(x=2)

P(x=0) in 18 throws of the dice = (5/6)^{18}

P(x=1) in 18 throws of the dice = ^{18}C_{1} × (1/6)^{1} × (5/6)^{17}

^{18}C_{1} = 18! / (17! 1!) = 18

So P(x=1) = 18 × (1/6) × (5/6)^{17}

P(x=2) = ^{18}C_{2} × (1/6)^{2} × (5/6)^{16}

^{18}C_{2} = 18! / (16! 2!) = 18 × (17/2) = 153

So P(x=2) = 153 × (1/6)^{2} × (5/6)^{16}

So P(x≥3) = 1 – P(x=0) – P(x=1) – P(x=2)

P(x=0) = (5/6)^{18} = 0.0375610365

P(x=1) = 18 × (1/6) × 0.0450732438 = 0.135219731

P(x=2) = 153 × (1/36) × 0.0540878926 = 0.229873544

So P(x≥3) = 1 – 0.0375610365 – 0.135219731 – 0.229873544

= 0.597345689 = 0.597 (to 3 decimal places)

This is a formal solution to Part 3 of the Newton-Pepys Problem.

So, to re-state the Newton-Pepys problem.

Which of the following three propositions has the greatest chance of success?

- A: Six fair dice are tossed independently and at least one ‘6’ appears.
- B: 12 fair dice are tossed independently and at least two ‘6’s appear.
- C: 18 fair dice are tossed independently and at least three ‘6’s appear.

Pepys was convinced that C. had the highest probability and asked Newton to confirm this.

Newton chose A, then B, then C, and produced his calculations for Pepys, who wouldn’t accept them.

So who was right? Newton or Pepys?

According to our calculations, what is the probability of A? 0.665

What is the probability of B? 0.619

What is the probability of C? 0.597

So Sir Isaac’s solution was right. Samuel Pepys was wrong, a wrong compounded by refusing to accept Newton’s solution. How much he lost gambling on his misjudgement is mired in the mists of history. The Newton-Pepys Problem is not, and continues to tease our brains to this very day.
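The whole dispute settles in a few lines of code – a sketch of my own (the helper name is illustrative) computing all three probabilities via the complement:

```python
from math import comb

def p_at_least(k, n, p=1 / 6):
    """P(at least k sixes in n throws), via the complement of 'fewer than k'."""
    return 1 - sum(comb(n, r) * p ** r * (1 - p) ** (n - r) for r in range(k))

a = p_at_least(1, 6)     # at least one six with 6 dice
b = p_at_least(2, 12)    # at least two sixes with 12 dice
c = p_at_least(3, 18)    # at least three sixes with 18 dice

print(round(a, 3), round(b, 3), round(c, 3))   # 0.665 0.619 0.597
print(a > b > c)                               # True: Newton had it right
```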

*References and Links*

Newton and Pepys. DataGenetics. http://datagenetics.com/blog/february12014/index.html

Newton-Pepys problem. Wikipedia. https://en.wikipedia.org/wiki/Newton%E2%80%93Pepys_problem

*Solution to Exercise*

**Question 1.** You should switch to either the purple box or the magenta box.

There was a 1 in 4 chance at the outset that your original choice, the red box, contained the prize. This does not change when I open the box which I know to be empty. There was a 3 in 4 chance that it was either the orange box, the purple box or the magenta box before I opened the box and by opening the orange box, which I know to be empty, that can be eliminated. So the chance it is either the purple box or the magenta box is now 3 in 4 in total (or 3/8 each), compared to 1 in 4 for your original choice, the red box.

**Question 2.** It makes no difference whether you switch or not.

There was a 1 in 4 chance at the outset that your original choice, the black box, contained the prize. There was a 3 in 4 chance that it was either the white box, the grey box or the brown box. By randomly opening a box (I don’t know which box contains the prize), I am giving you no new information. It is the same as asking you to choose a box to open. If you randomly opened the white box, which might have contained the prize, this means there are now two boxes left (grey and brown). Each of these started with a 1 in 4 chance of containing the prize. I have not deliberately eliminated a box potentially containing the prize, so I have given you no new information to indicate which box contains the prize. So the chance of each of the remaining boxes rises to 1/3 in each case. So it makes no difference whether you switch or not.
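The random-host case can be checked by simulation – a sketch of my own (names and trial counts are illustrative; rounds where the host happens to reveal the prize are discarded, as in the argument above):

```python
import random

def question2_simulation(rounds=200_000, seed=3):
    """Four boxes, you pick box 0, the host opens one of the other three at
    random. Returns (win rate if you stay, win rate if you switch at random)."""
    rng = random.Random(seed)
    valid = stay_wins = switch_wins = 0
    for _ in range(rounds):
        prize = rng.randrange(4)
        opened = rng.choice([1, 2, 3])
        if opened == prize:
            continue                      # prize revealed: discard this round
        valid += 1
        if prize == 0:
            stay_wins += 1
        else:
            # Switch to one of the two remaining closed boxes at random.
            remaining = [b for b in (1, 2, 3) if b != opened]
            if rng.choice(remaining) == prize:
                switch_wins += 1
    return stay_wins / valid, switch_wins / valid

stay, switch = question2_simulation()
# Both come out near 1/3: with a randomly opening host, switching doesn't help.
print(round(stay, 3), round(switch, 3))
```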
