
The Expected Value Paradox – in a nutshell.

To illustrate the Expected Value Paradox, let us propose a coin-tossing game, in which you gain 50% of what you bet if the coin lands Heads and lose 40% if it lands Tails. What is the expected value of a single play of this game?

The Expected Value can be calculated as the sum, over each possible outcome of the game, of the probability of that outcome multiplied by the return if it occurs.

Say, for example, the unit stake for each play of the game is £10. In this case, the gain if the coin lands Heads is 50% x £10 = £5, and the loss if the coin lands Tails is 40% x £10 = £4.

In this case, the expected value (given a fair coin, with 0.5 chance of Heads and 0.5 chance of Tails) = 0.5 × £5 – 0.5 × £4 = £0.50, or 50 pence.

So the Expected Value of the game is 50 pence per £10 staked, or 5%. This is the positive net expectation for each play of the game (toss of the coin).

Let’s see how this plays out in an actual experiment in which 100 people play the game. What do we expect would be the average final balance of the players?

The expected gain from the 50 players tossing Heads = 50 x £5 = £250.

The expected loss from the 50 players tossing Tails = 50 x £4 = £200.

So, the net gain over 100 players = £250 – £200 = £50.

The average net gain of the 100 players = £50/100 = £0.50, or 50 pence.

Equivalently, we can work per £1 staked: Heads returns £1.50 (a 50% gain) and Tails returns £0.60 (a 40% loss). Expected Value = 0.5 × £1.50 + 0.5 × £0.60 = £1.05. As above, this is an expected gain of 5%.

From two coin tosses, our best estimate is 25 Heads-Heads, 25 Tails-Tails, 25 Heads-Tails and 25 Tails-Heads.

The Expected Value per £1 staked over the two coin tosses = 0.25 × (1.5)^2 + 0.25 × (0.6)^2 + 0.25 × (1.5 × 0.6) + 0.25 × (0.6 × 1.5) = £1.1025, i.e. (1.05)^2.
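
A quick enumeration of the four equally likely two-toss sequences confirms this figure. Here is a minimal Python sketch (our own illustration, not part of the original workings):

```python
from itertools import product

# Check the two-toss Expected Value per £1 staked:
# Heads multiplies the stake by 1.5, Tails by 0.6, each with probability 0.5.
ev = sum(0.25 * a * b for a, b in product([1.5, 0.6], repeat=2))
print(ev)  # 1.1025, i.e. 1.05 ** 2
```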

However many coin tosses the group throws, the Expected Value is positive.

Take now the case of one person playing the game through time. Say there are four coin tosses, starting from a stake of £10, with the full balance wagered each time.

From four coin tosses, our best estimate is 2 Heads and 2 Tails.

Balance after 2 Heads and 2 Tails = £10 × 1.5 × 1.5 × 0.6 × 0.6 = £8.10.

The balance goes from £10 to £15 to £22.50 to £13.50 to £8.10. This is a net loss.

To clarify, we bet £10. The coin lands Heads. We now have £15. We bet £15 now on the next coin toss. It lands Heads again. We now have £22.50. We bet £22.50 now on the next coin toss. It lands Tails. Now we are back to £13.50. We bet this £13.50 on the next coin toss. It lands Tails again and we are down to £8.10. This is a net loss on the original stake of £10.

If we throw the same number of Heads and Tails in N tosses of the coin, the original stake is, more generally, multiplied by the following factor.

1.5^(N/2) × 0.6^(N/2) = (1.5 × 0.6)^(N/2) = 0.9^(N/2)
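
To see how quickly this typical-path growth factor shrinks, here is a short Python sketch (our own illustration) tabulating 0.9^(N/2) for a few values of N:

```python
# Growth factor on the 'typical' path (equal Heads and Tails) after N tosses.
for n in (2, 10, 50, 100):
    print(n, 0.9 ** (n / 2))
# 2 -> 0.9, 10 -> ~0.59, 50 -> ~0.072, 100 -> ~0.0052
```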

Eventually, virtually all of the stake used for betting is lost.

Herein lies the paradox. When many people play the game a fixed number of times, the average return is positive, but when a fixed number of people play the game many times, they should expect to lose most of their money.

This is a demonstration of the difference between what is termed ‘time averaging’ and ‘ensemble averaging.’

Thinking of the game as a random process, time averaging is taking the average value as the process continues. Ensemble averaging is taking the average value of many processes running for some fixed amount of time.
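
The contrast is easy to see in a small Monte Carlo experiment. Below is a minimal Python sketch (our own illustration, with an arbitrary random seed) applying both kinds of average to the 50%/40% coin game:

```python
import random

random.seed(1)

def play(tosses):
    """One player's multiplicative coin game: +50% on Heads, -40% on Tails."""
    wealth = 1.0
    for _ in range(tosses):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Ensemble average: many players, ten tosses each.
players = sorted(play(10) for _ in range(100_000))
print(sum(players) / len(players))  # close to 1.05 ** 10 ≈ 1.63 (a gain)

# The typical single player over the same horizon: the median path.
print(players[len(players) // 2])   # close to 0.9 ** 5 ≈ 0.59 (a loss)
```

The mean across players sits comfortably above the starting wealth, while the typical (median) player has lost money: the same game, two very different averages.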

Processes where there is no difference between time and ensemble averaging are called ‘ergodic processes.’ In the real world, however, many processes, including notably in finance, are non-ergodic.

Say that in an election two parties, A and B, attract some percentage of voters, x% and y% respectively. This is not the same thing as saying that over the course of their voting lives, each individual votes for party A in x% of elections and for party B in y% of elections. These two concepts are distinct.

Again, if we wish to determine the most visited parts of a city, we could take a snapshot in time of how many people are in neighbourhood A, how many in neighbourhood B, etc. Alternatively, we could follow a particular individual or a few individuals, over a period of time and see how often they visit neighbourhood A, neighbourhood B, etc. The first analysis (the ensemble) may not be representative over a period of time, while the second (time) may not be representative of all the people.

An ergodic process is one in which the two types of statistic give the same results. In an ergodic system, time is irrelevant and has no direction. Say, for example, that 100 people each roll a die once, and the total of the scores is divided by 100. This finite-sample average approaches the ensemble average as more and more people are included in the sample. Now, take the case of a single person rolling a die 100 times, with the total scored divided by 100. This finite-time average approaches the time average as the number of rolls grows.

An implication of ergodicity is that the result of ensemble averaging will be the same as that of time averaging.

And here is the key point: In the case of ensemble averages, it is the size of the sample that eventually removes the randomness from the sample. In the case of time averages, it is the time devoted to the process that removes randomness.

In the dice-rolling example, both methods give the same answer, subject to sampling error. In this sense, rolling dice is an ergodic system.

However, if we now bet on the results of the dice-rolling game, wealth does not follow an ergodic process. If a player goes bankrupt, he stays bankrupt, so the time average of wealth can approach zero as time passes, even though the ensemble average of wealth may increase.

As a new example, take the case of 100 people visiting a casino, each with a certain amount of money. Some may win, some may lose, but we can infer the house edge by counting the average percentage loss of the 100 people. This is the ensemble average. This is different to one person going to the casino 100 days in a row, starting with a set amount. The probabilities of success derived from a collection of people do not apply to one person. The first is the ‘ensemble probability’, the second is the ‘time probability’ (the second is concerned with a single person through time).

Here is the key point: No individual person has sure access to the returns of the market without infinite pockets and an absence of so-called ‘uncle points’ (the point at which he needs, or feels the need, to exit the game). To equate the two is to confuse ensemble averaging with time averaging.

If the player/investor has to reduce exposure because of losses, or maybe retirement or other change of circumstances, his returns will be divorced from those of the market or the game. The essential point is that success first requires survival. This applies to an individual in a different sense to the ensemble.

So where does the money lost by the non-survivors go? It gets transferred to the survivors, some of whom tend to scoop up much or most of the pool, i.e. the money is scooped up by the tail probability of those who keep surviving, which may just be by blind good luck, just as the non-survivors may have been forced out of the game/market by blind bad luck. So the lucky survivors (and in particular the tail-end very lucky survivors) more than compensate for the effect of the unlucky entrants.

The so-called Kelly approach to investment strategy, discussed in a separate chapter, is an investment approach which seeks to respond to the survivor issue.

Say, for example, that the probability of Heads from a coin toss is 0.6, and Heads wins a dollar, but Tails (with a probability of 0.4) loses a dollar. Although the Expected Value of this game is positive, if the response of an investor in the game is to stake all their bankroll on each toss of the coin, the expected time until bankroll bankruptcy is just 1/(1-0.6) = 2.5 tosses of the coin.
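
That 2.5-toss figure is simply the mean of a geometric distribution, and a minimal simulation (our own sketch, with an arbitrary seed) confirms it:

```python
import random

random.seed(2)

# Betting the whole bankroll each toss, we are bust at the first Tails
# (probability 0.4), so the number of tosses survived is geometric
# with mean 1/0.4 = 2.5.
def tosses_until_bust():
    n = 0
    while True:
        n += 1
        if random.random() < 0.4:  # Tails wipes out the bankroll
            return n

sims = [tosses_until_bust() for _ in range(100_000)]
print(sum(sims) / len(sims))  # ≈ 2.5
```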

The Kelly strategy for optimising the growth rate of the bankroll is to invest, in an even-money game such as this, a fraction of the bankroll equal to the difference between the probability of winning and the probability of losing.

In the above example, it means we should in each game bet the fraction x = 0.6 – 0.4 = 0.2 of the bankroll.

The optimal average growth rate becomes: 0.6 ln(1.2) + 0.4 ln(0.8) ≈ 0.02, i.e. about 2% per play.
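
That f = 0.2 really does maximise the expected log-growth can be checked with a short numerical sweep; the following Python sketch (our own illustration, using natural logarithms) does so:

```python
from math import log

def growth_rate(f, p=0.6):
    """Expected log-growth per even-money bet, staking fraction f of bankroll."""
    return p * log(1 + f) + (1 - p) * log(1 - f)

best = max((f / 100 for f in range(1, 100)), key=growth_rate)
print(best, growth_rate(best))  # 0.2, ≈ 0.0201: the Kelly fraction p - q
```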

If we bet all our bankroll on each coin toss, we will most likely lose the bankroll. This is balanced out over all players by those who with low probability win a large bankroll. For the real-life player, however, it is most relevant to look at the time-average of what may be expected to be won.

In trying to maximise Expected Value, the probability of bankroll bankruptcy soon gets close to one. It is better to invest, say, 20% of bankroll in each game, and maximise long-term average bankroll growth.

In the coin-toss example, it is like supposing that various “I”s are tossing a coin, and the losses of the many of them are offset by the huge profit of the relatively small number of “I”s who do win. But this ensemble-average does not work for an individual for whom a time-average better reflects the one timeline in which that individual exists.

Put another way, because the individual cannot go back in time, and because bankruptcy is always a live possibility, it is not possible to realise the small chance of capturing the tail-end upside of a game/investment with positive expected value without taking on a significant risk of non-survival/bankruptcy. In other words, the individual lives in one universe, on one time path, and so faces the reality of time-averaging, as opposed to an ensemble average in which one can call upon the gains of parallel investors/game players on parallel timelines in essentially parallel worlds.

To summarise, the difference between 100 people going to a casino and one person going to the casino 100 times is the difference between understanding probability in conventional terms and through the lens of path dependency.

 

References and Links

Time for a change: Introducing irreversible time in economics. https://www.gresham.ac.uk/lectures-and-events/time-for-a-change-introducing-irreversible-time-in-economics

What is ergodicity? https://larspsyll.wordpress.com/2016/11/23/what-is-ergodicity-2/

Non-ergodic economics, expected utility and the Kelly criterion. https://larspsyll.wordpress.com/2012/04/21/non-ergodic-economics-expected-utility-and-the-kelly-criterion/

Ergodicity. http://squidarth.com/math/2018/11/27/ergodicity.html

Ergodicity. http://nassimtaleb.org/tag/ergodicity/

Solutions: Game Theory – Nash Equilibrium – in a nutshell.

  1. In the ‘Live or Die’ scenario, there are two Nash equilibria (both drive on the right side of the road or both drive on the left side of the road) but no dominant strategy equilibrium. This is an example of the more general rule that not every Nash equilibrium is a dominant strategy equilibrium. Every dominant strategy equilibrium is, however, a Nash equilibrium, as there is no incentive for the parties to deviate from the equilibrium in a dominant strategy scenario (where the optimal strategy is defined regardless of what the other party does), such as the Prisoner’s Dilemma.
  2. Steal-Steal is the dominant strategy equilibrium and also the Nash equilibrium.

Solution: Kelly Criterion – in a nutshell.

x = 70% minus 30% = 40%.

The Newton-Pepys Problem – in a nutshell.

One of the most celebrated pieces of correspondence in the history of probability and gambling, and one of which I am particularly fond, involves an exchange of letters between the greatest diarist of all time, Samuel Pepys, and the greatest scientist of all time, Sir Isaac Newton.

The six letters exchanged between Pepys in London and Newton in Cambridge related to a problem posed to Newton by Pepys about gambling odds. The interchange took place between November 22 and December 23, 1693. The ostensible reason for Mr. Pepys’ interest was to encourage the thirst for truth of his young friend, Mr. Smith. Whether Sir Isaac believed that tale or not we shall never know. The real reason, however, was later revealed in a letter written to a confidante by Pepys indicating that he himself was about to stake 10 pounds, a considerable sum in 1693, on such a bet. Now we’re talking!

The first letter to Newton introduced Mr. Smith as a fellow with a “general reputation…in this towne (inferiour to none, but superiour to most) for his maistery [of]…Arithmetick”.

What emerged has come down to us as the aptly named Newton-Pepys problem.

Essentially, the question came down to this:

Which of the following three propositions has the greatest chance of success?

  A. Six fair dice are tossed independently and at least one ‘6’ appears.
  B. 12 fair dice are tossed independently and at least two ‘6’s appear.
  C. 18 fair dice are tossed independently and at least three ‘6’s appear.

Pepys was convinced that C. had the highest probability and asked Newton to confirm this.

Newton chose A as the highest probability, then B, then C, and produced his calculations for Pepys, who wouldn’t accept them.

So who was right? Newton or Pepys?

Well, let’s see.

The first problem is the easiest to solve.

What is the probability of A?

Probability that one roll of a die produces a ‘6’ = 1/6

So probability that one roll of a die does not produce a ‘6’ = 5/6

So probability that six independent rolls of a die produce no ‘6’ = (5/6)^6

So probability of AT LEAST one ‘6’ in 6 rolls = 1 – (5/6)^6 = 0.6651

So far, so good.

The probabilities of propositions B and C are more difficult to calculate and involve use of the binomial distribution, though Newton derived the answers from first principles, by his method of ‘Progressions’.

Both methods give the same answer, but using the more modern binomial distribution is easier.

So let’s do it, introducing along the way the idea of so-called ‘Bernoulli trials’.

The nice thing about a Bernoulli trial is that it has only two possible outcomes.

Each outcome can be framed as a ‘yes’ or ‘no’ question (success or failure).

Let probability of success = p.

Let probability of failure = 1-p.

Each trial is independent of the others and the probability of the two outcomes remains constant for every trial.

An example is tossing a coin. Will it land heads?

Another example is rolling a die. Will it come up ‘6’?

Yes = success (S); No = failure (F).

Let probability of success, P (S) = p; probability of failure, P (F) = 1-p.

So the question: How many Bernoulli trials are needed to get to the first success?

This is straightforward, as the only way to need exactly five trials, for example, is to begin with four failures, i.e. FFFFS.

Probability of this = (1-p) (1-p) (1-p) (1-p) p = (1-p)^4 p

Similarly, the only way to need exactly six trials is to begin with five failures, i.e. FFFFFS.

Probability of this = (1-p) (1-p) (1-p) (1-p) (1-p) p = (1-p)^5 p

More generally, the probability that the first success occurs on trial number n =

(1-p)^(n-1) p

This is a geometric distribution. This distribution deals with the number of trials required for a single success.

But what is the chance that the first success takes AT LEAST some number of trials, say 12 trials?

One method is to add the probability that it takes exactly 12 trials to the probability of 13 trials, then 14 trials, 15 trials, and so on.

Easier method: The only time you will need at least 12 trials is when the first 11 trials are all failures, a probability of (1-p)^11

In a sequence of Bernoulli trials, the probability that the first success takes at least n trials is (1-p)^(n-1)

Let’s take a couple of examples.

Probability that the first success (heads on coin toss) takes at least three trials (tosses of the coin) = (1-0.5)^2 = 0.25

Probability that the first success (heads on coin toss) takes at least four trials (tosses of the coin) = (1-0.5)^3 = 0.125
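
Both figures are easy to verify by simulation. A minimal Python sketch (our own illustration, with an arbitrary seed):

```python
import random

random.seed(3)

def trials_to_first_head():
    """Toss a fair coin until the first Heads; return the number of tosses."""
    n = 1
    while random.random() >= 0.5:  # Tails: keep tossing
        n += 1
    return n

results = [trials_to_first_head() for _ in range(100_000)]
print(sum(r >= 3 for r in results) / len(results))  # ≈ (1 - 0.5) ** 2 = 0.25
print(sum(r >= 4 for r in results) / len(results))  # ≈ (1 - 0.5) ** 3 = 0.125
```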

But so far we have only learned how to calculate the probability of one success in so many trials.

What if we want to know the probability of two, or three, or however many successes?

To take an example, what is the probability of exactly two ‘6’s in five throws of the die?

To determine this, we need to calculate the number of ways two ‘6’s can occur in five throws of the die, and multiply that by the probability of each of these ways occurring.

So, probability = number of ways something can occur multiplied by probability of each way occurring.

How many ways can we throw two ‘6’s in five throws of the die?

Where S = Success in throwing a ‘6’, F = Fail in throwing a ‘6’, we have:

SSFFF; SFSFF; SFFSF; SFFFS; FSSFF; FSFSF; FSFFS; FFSSF; FFSFS; FFFSS

So there are 10 ways of throwing two ‘6’s in five throws of the die.

More formally, we are seeking to calculate how many ways 2 things can be chosen from 5. This is known as ‘5 Choose 2’, written as:

5C2 = 10

More generally, the number of ways k things can be chosen from n is:

nCk = n! / ((n-k)! k!)

n! (known as n factorial) = n (n-1) (n-2) … 1

k! (known as k factorial) = k (k-1) (k-2) … 1

Thus, 5C2 = 5! / (3! 2!) = (5×4×3×2×1) / ((3×2×1)×(2×1)) = (5×4)/(2×1) = 20/2 = 10

So what is the probability of throwing exactly two ‘6’s in five throws of the die, in each of these ten cases? p is the probability of success. 1-p is the probability of failure.

In each case, the probability = p · p · (1-p) · (1-p) · (1-p)

= p^2 (1-p)^3

Since there are 5C2 such sequences, the probability of exactly 2 ‘6’s =

10 p^2 (1-p)^3

Generally, in a fixed sequence of n Bernoulli trials, the probability of exactly r successes is:

nCr × p^r (1-p)^(n-r)

This is the binomial distribution. Note that it requires that the probability of success on each trial be constant. It also requires only two possible outcomes.

So, for example, what is the chance of exactly 3 heads when a fair coin is tossed 5 times?

5C3 × (1/2)^3 × (1/2)^2 = 10/32 = 5/16

And what is the chance of exactly 2 sixes when a fair die is rolled five times?

5C2 × (1/6)^2 × (5/6)^3 = 10 × 1/36 × 125/216 = 1250/7776 = 0.1608
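
Both of these worked examples can be reproduced in a few lines using the binomial formula; here is a minimal Python sketch of our own:

```python
from math import comb

def binom_pmf(n, r, p):
    """P(exactly r successes in n Bernoulli trials, success probability p)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

print(binom_pmf(5, 3, 1/2))  # 0.3125 = 5/16
print(binom_pmf(5, 2, 1/6))  # ≈ 0.1608
```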

So let’s now use the binomial distribution to solve the Newton-Pepys problem.

  A. What is the probability of obtaining at least one six with 6 dice?
  B. What is the probability of obtaining at least two sixes with 12 dice?
  C. What is the probability of obtaining at least three sixes with 18 dice?

First, what is the probability of no sixes with 6 dice?

The binomial formula gives: P(x sixes with n dice) = nCx × (1/6)^x × (5/6)^(n-x), for x = 0, 1, 2, …, n

where x is the number of successes (sixes).

So, probability of no successes (no sixes) with 6 dice =

6C0 × (1/6)^0 × (5/6)^(6-0) = (6!/(6! 0!)) × 1 × (5/6)^6 = (5/6)^6

Note that: 0! = 1

Here’s the proof: n! = n × (n-1)!

At n = 1, 1! = 1 × (1-1)! = 1 × 0!

So 0! = 1.

So, where x is the number of sixes, probability of at least one six is equal to ‘1’ minus the probability of no sixes, which can be written as:

P(x ≥ 1) = 1 – P(x=0) = 1 – (5/6)^6 = 0.665 (to three decimal places).

i.e. probability of at least one six = 1 minus the probability of no sixes.

That is a formal solution to Part 1 of the Newton-Pepys Problem.

Now on to Part 2.

Probability of at least two sixes with 12 dice is equal to ‘1’ minus the probability of no sixes minus the probability of exactly one six.

This can be written as:

P(x ≥ 2) = 1 – P(x=0) – P(x=1)

P(x=0) in 12 throws of the dice = (5/6)^12

P(x=1) in 12 throws of the dice = 12C1 × (1/6)^1 × (5/6)^11

Using nCk = n! / ((n-k)! k!):

12C1 = 12! / (11! 1!) = 12

So, P(x ≥ 2) = 1 – (5/6)^12 – 12 × (1/6) × (5/6)^11

= 1 – 0.112156654 – 2 × 0.134587985

= 0.618667376 = 0.619 (to 3 decimal places)

This is a formal solution to Part 2 of the Newton-Pepys Problem.

Now on to Part 3.

Probability of at least three sixes with 18 dice is equal to ‘1’ minus the probability of no sixes, minus the probability of exactly one six, minus the probability of exactly two sixes.

This can be written as:

P(x ≥ 3) = 1 – P(x=0) – P(x=1) – P(x=2)

P(x=0) in 18 throws of the dice = (5/6)^18

P(x=1) in 18 throws of the dice = 18C1 × (1/6)^1 × (5/6)^17

Using nCk = n! / ((n-k)! k!):

18C1 = 18! / (17! 1!) = 18

So P(x=1) = 18 × (1/6)^1 × (5/6)^17

P(x=2) = 18C2 × (1/6)^2 × (5/6)^16

18C2 = 18! / (16! 2!) = 18 × (17/2) = 153

So P(x=2) = 18 × (17/2) × (1/6)^2 × (5/6)^16

So P(x ≥ 3) = 1 – P(x=0) – P(x=1) – P(x=2)

P(x=0) = (5/6)^18 = 0.0375610365

P(x=1) = 18 × (1/6) × (5/6)^17 = 18 × (1/6) × 0.0450732438 = 0.135219731

P(x=2) = 18 × (17/2) × (1/36) × (5/6)^16 = 153 × (1/36) × 0.0540878926 = 0.229873544

So P(x ≥ 3) = 1 – 0.0375610365 – 0.135219731 – 0.229873544

= 0.597345689 = 0.597 (to 3 decimal places)

This is a formal solution to Part 3 of the Newton-Pepys Problem.

So, to re-state the Newton-Pepys problem.

Which of the following three propositions has the greatest chance of success?

  A. Six fair dice are tossed independently and at least one ‘6’ appears.
  B. 12 fair dice are tossed independently and at least two ‘6’s appear.
  C. 18 fair dice are tossed independently and at least three ‘6’s appear.

Pepys was convinced that C. had the highest probability and asked Newton to confirm this.

Newton chose A, then B, then C, and produced his calculations for Pepys, who wouldn’t accept them.

So who was right? Newton or Pepys?

According to our calculations, what is the probability of A? 0.665

What is the probability of B? 0.619

What is the probability of C? 0.597
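
As a final check on the hand calculations above, all three probabilities drop out of one short binomial computation (our own Python sketch, not Newton’s method of Progressions):

```python
from math import comb

def p_at_least(k, n, p=1/6):
    """P(at least k sixes when n fair dice are thrown independently)."""
    return 1 - sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k))

print(p_at_least(1, 6))   # ≈ 0.665  (proposition A)
print(p_at_least(2, 12))  # ≈ 0.619  (proposition B)
print(p_at_least(3, 18))  # ≈ 0.597  (proposition C)
```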

So Sir Isaac’s solution was right. Samuel Pepys was wrong, a wrong compounded by refusing to accept Newton’s solution. How much he lost gambling on his misjudgement is mired in the mists of history. The Newton-Pepys Problem is not, and continues to tease our brains to this very day.

 

References and Links

Newton and Pepys. DataGenetics. http://datagenetics.com/blog/february12014/index.html

Newton-Pepys problem. Wikipedia. https://en.wikipedia.org/wiki/Newton%E2%80%93Pepys_problem

 

Solution: Deadly Doors Problem – in a nutshell.

Solution to Exercise

Question 1. You should switch to either the purple box or the magenta box.

There was a 1 in 4 chance at the outset that your original choice, the red box, contained the prize. This does not change when I open the box which I know to be empty. There was a 3 in 4 chance that it was either the orange box, the purple box or the magenta box before I opened the box and by opening the orange box, which I know to be empty, that can be eliminated. So the chance it is either the purple box or the magenta box is now 3 in 4 in total (or 3/8 each), compared to 1 in 4 for your original choice, the red box.

Question 2. It makes no difference whether you switch or not.

There was a 1 in 4 chance at the outset that your original choice, the black box, contained the prize. There was a 3 in 4 chance that it was either the white box, the grey box or the brown box. By randomly opening a box (I don’t know which box contains the prize), I am giving you no new information. It is the same as asking you to choose a box to open. If you randomly opened the white box, which might have contained the prize, this means there are now two boxes left (grey and brown). Each of these started with a 1 in 4 chance of containing the prize. I have not deliberately eliminated a box potentially containing the prize, so I have given you no new information to indicate which box contains the prize. So the chance of each of the remaining boxes rises to 1/3 in each case. So it makes no difference whether you switch or not.

 

Solution: Monty Hall Problem – in a nutshell.

Solution to Exercise

Question 1. You should switch to the red box.

There was a 1 in 3 chance at the outset that your original choice, the blue box, contained the prize. This does not change when I open the box which I know to be empty. There was a 2 in 3 chance that it was either the red box or the yellow box before I opened the box and by opening the yellow box, which I know to be empty, that can be eliminated. So the chance it is the red box is now 2 in 3, compared to 1 in 3 for your original choice, the blue box.
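
This 2-in-3 advantage from switching is easy to confirm by simulation. Below is a minimal Python sketch (our own illustration); it relies on the fact that, when the host knowingly opens an empty box, switching wins exactly when the original choice was wrong:

```python
import random

random.seed(5)

def switch_wins():
    """One Monty Hall round with three boxes; host knowingly opens an empty one."""
    prize = random.randrange(3)
    choice = random.randrange(3)
    # The host opens an empty box that is neither the prize nor the choice,
    # so switching wins exactly when the first choice was wrong.
    return choice != prize

trials = 100_000
print(sum(switch_wins() for _ in range(trials)) / trials)  # ≈ 2/3
```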

Question 2. It makes no difference whether you switch or not.

There was a 1 in 3 chance at the outset that your original choice, the green box, contained the prize. There was a 2 in 3 chance that it was either the pink box or the violet box before I opened the box. By randomly opening a box (I don’t know which box contains the prize), I am giving you no new information. It is the same as asking you to choose a box to open. If you randomly opened the pink box, which might have contained the prize, this means there are now two boxes left (green and violet). Each of these started with a 1 in 3 chance of containing the prize. I have not deliberately eliminated a box potentially containing the prize, so I have given you no new information to indicate which box contains the prize. So the chance of each of the two remaining boxes rises to 1/2. So it makes no difference whether you switch or not.

Gambler’s Fallacy – in a nutshell.

The Gambler’s Fallacy, also known as the Monte Carlo Fallacy, is the proposition that people, instead of accepting an actual independence of successive outcomes, are influenced in their perceptions of the next possible outcome by the results of the preceding sequence of outcomes – e.g. throws of a die, spins of a wheel. Put another way, the fallacy is the mistaken belief that the probability of an event is decreased when the event has occurred recently, even though the probability of the event is objectively known to be independent across trials.

This can be illustrated by considering the repeated toss of a fair coin. The outcomes of each coin toss are in fact independent of each other, and the probability of getting heads on a single toss is 1/2. The probability of getting two heads in two tosses is 1/4, of three heads in three tosses is 1/8, and of four heads in a row is 1/16. Since the probability of a run of five successive heads is 1/32, the fallacy is to believe that the next toss would be more likely to come up tails rather than heads again. In fact, “5 heads in a row” and “4 heads, then tails” both have a probability of 1/32. Since the first four tosses turn up heads, the probability that the next toss is a head is 1/2, and similarly for tails.

While a run of five heads in a row has a probability of 1/32, this applies only before the first coin is tossed. After the first four tosses, the next coin toss has a probability of 1/2 Heads and 1/2 Tails.
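
This independence is easy to test empirically. The following minimal simulation (our own sketch, with an arbitrary seed) conditions on a run of four Heads and estimates the probability that the fifth toss is also Heads:

```python
import random

random.seed(4)

# Estimate P(Heads on toss five | Heads on tosses one to four).
runs = heads_after = 0
for _ in range(1_000_000):
    tosses = [random.random() < 0.5 for _ in range(5)]
    if all(tosses[:4]):            # first four tosses all Heads
        runs += 1
        heads_after += tosses[4]   # did the fifth land Heads too?
print(heads_after / runs)          # ≈ 0.5: no compensating 'due' Tails
```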

The so-called Inverse Gambler’s Fallacy is where someone entering a room sees an individual rolling a double six with a pair of fair dice and concludes (with flawed logic) that the person must have been rolling the dice for some time, as it is unlikely that they would roll a double six on a first or early attempt.

The existence of a ‘gambler’s fallacy’ can be traced to laboratory studies and lottery-type games (Clotfelter and Cook, 1993; Terrell, 1994). Clotfelter and Cook found (in a study of a Maryland numbers game) a significant fall in the amount of money wagered on winning numbers in the days following the win, an effect which did not disappear entirely until after about sixty days. This particular game was, however, characterized by a fixed-odds payout to a unit bet, and so the gambler’s fallacy had no effect on expected returns. In pari-mutuel games, on the other hand, the return to a winning number is linked to the amount of money bet on that number, and so the operation of a systematic bias against certain numbers will tend to increase the expected return on those numbers.

Terrell (1994) investigated one such pari-mutuel system, the New Jersey State Lottery. In a sample of 1,785 drawings from 1988 to 1993, he constructed a subsample of 97 winners which repeated as a winner within the 60 day cut-off point suggested by Clotfelter and Cook. He found that these numbers had a higher payout than when they previously won on 80 of the 97 occasions. To determine the relationship, he regressed the payout to winning numbers on the number of days since the last win by that number. The expected payout increased by 28% one day after winning, and decreased from this level by c. 0.5% each day after the number won, returning to its original level 60 days later. The size of the gambler’s fallacy, while significant, was less than that found by Clotfelter and Cook in their fixed-odds numbers game.

It is as if irrational behaviour exists, but reduces as the cost of the anomalous behaviour increases.

An opposite effect is where people tend to predict the same outcome as the previous event, resulting in a belief that there are streaks in performance. This is known as the ‘hot hand effect’, and normally applies in the context of human performance, as in basketball shots, whereas the Gambler’s Fallacy is applied to inanimate games such as coin tosses or spins of a roulette wheel. This is because human performance may not be perceived as random in the same way as, say, a coin flip. 

Exercise

Distinguish between the Gambler’s Fallacy, the Inverse Gambler’s Fallacy and the Hot Hand Effect. Can these three phenomena be logically reconciled?

References and Links

Gambler’s Fallacy. Wikipedia. https://en.wikipedia.org/wiki/Gambler%27s_fallacy

Gambler’s Fallacy. Logically Fallacious. https://www.logicallyfallacious.com/tools/lp/Bo/LogicalFallacies/98/Gambler-s-Fallacy

Gambler’s Fallacy. RationalWiki. https://rationalwiki.org/wiki/Gambler%27s_fallacy

Inverse Gambler’s Fallacy. Wikipedia. https://en.wikipedia.org/wiki/Inverse_gambler%27s_fallacy

Inverse Gambler’s Fallacy. RationalWiki. https://rationalwiki.org/wiki/Gambler%27s_fallacy

Hot Hand. Wikipedia. https://en.wikipedia.org/wiki/Hot_hand

Clotfelter, C.T. and Cook, P.J. (1993). Notes: The “Gambler’s Fallacy” in Lottery Play. Management Science, 39(12), 1521-1553. https://pubsonline.informs.org/doi/abs/10.1287/mnsc.39.12.1521

NBER working-paper version of Clotfelter and Cook: https://www.nber.org/papers/w3769.pdf

Terrell, D. (1994). A Test of the Gambler’s Fallacy: Evidence from Pari-Mutuel Games. Journal of Risk and Uncertainty. 8,3, 309-317. https://link.springer.com/article/10.1007/BF01064047