A Version of this article is published in my book, ‘TWISTED LOGIC: PUZZLES, PARADOXES, AND BIG QUESTIONS’. CRC Press/Chapman & Hall, 2024. https://www.amazon.co.uk/Twisted-Logic-Puzzles-Paradoxes-Questions/dp/1032513349
A Thought Experiment
Newcomb’s Paradox, also known as Newcomb’s Problem, is a thought experiment involving a choice between two boxes: one transparent containing $1,000 and one opaque that may contain nothing or $1 million. The twist? A highly accurate Predictor has already decided what’s in the opaque box based on what it thinks you will choose. It is a dilemma first proposed by William Newcomb, a theoretical physicist, and popularised by philosopher Robert Nozick.
The Setting of Newcomb’s Paradox
In this setting, the choice is simple: take both boxes, or take only the opaque box. The Predictor, which is well known for its accuracy, determines the content of the opaque box based on a prediction about your decision. If the Predictor forecasts you will take both boxes, it places nothing in the opaque box. On the other hand, if it predicts that you will take only the one box, a sum of $1 million will be deposited inside. By the time you make your decision, the Predictor’s choice about the opaque box’s content has already been made. So, should we take both boxes or just the one opaque box? You could also update the amounts to modern-day prices, or vary them in some other way, and ask yourself whether that changes anything.
Two-Boxers vs. One-Boxers: The Great Debate
Essentially, Newcomb’s Paradox has divided people into two distinct camps, each adhering to a different way of looking at the problem. These factions, known as ‘two-boxers’ and ‘one-boxers’, represent different facets of decision-making theory and reflect different approaches to rational choice.
Two-Boxers: The Dominance Principle and Causal Decision Theory
Two-boxers advocate that the most rational decision is to take both boxes. This perspective aligns with the principle of dominance in decision theory, which states that if one action produces a better outcome than another in every possible scenario, then that action should be chosen. In the case of Newcomb’s Paradox, two-boxers argue that the Predictor’s decision—having already determined the content of the opaque box before you choose—cannot be influenced by your subsequent choice. This means that choosing both boxes can’t make you worse off. In the worst-case scenario, you have the guaranteed $1,000 from the transparent box, and in the best-case scenario, you could walk away with an additional $1 million if the Predictor failed in its prediction.
Two-boxers also fundamentally subscribe to causal decision theory. They reason that since your decision doesn’t cause a change in the already-decided contents of the opaque box, it’s only rational to maximise the guaranteed gains, which means taking both boxes. This standpoint portrays the logic of irreversibility, where past events (the Predictor’s decision) cannot be influenced by future actions (your choice).
One-Boxers: Evidential Decision Theory and Trusting the Predictor
Conversely, one-boxers argue for a different interpretation of rationality. They propose that the sensible decision, given the Predictor’s uncanny accuracy, is to take only the opaque box. They reason that, although the contents of the box have been decided, the Predictor’s ability to forecast accurately makes it likely that the opaque box contains the $1 million if you choose it alone.
One-boxers point to the track record: almost every participant who opted for two boxes found the opaque box empty, while the opposite was true for those who took only the opaque box. Hence, they argue that it’s not about changing the past, but about leveraging the evidence that shows a strong correlation between the decision to pick one box and winning the million dollars.
In essence, one-boxers align with evidential decision theory, which suggests that we should make decisions based on the best available evidence for the desired outcome. In the context of Newcomb’s Paradox, taking only the opaque box is based on what has happened in the past to those who took one box and two boxes, respectively.
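The two camps’ calculations can be made concrete. The sketch below computes the evidential expected value of each choice; the Predictor’s accuracy p is a free parameter assumed for illustration, since the problem specifies only that the Predictor is highly accurate.

```python
# Expected payoffs in Newcomb's Problem under evidential reasoning.
# The accuracy p is an assumed parameter, varied for illustration.

MILLION = 1_000_000
SMALL = 1_000

def one_box_ev(p):
    # One-boxing: with probability p the Predictor foresaw it and
    # filled the opaque box with $1 million.
    return p * MILLION

def two_box_ev(p):
    # Two-boxing: keep the guaranteed $1,000, and collect the million
    # only if the Predictor was wrong (probability 1 - p).
    return SMALL + (1 - p) * MILLION

for p in (0.5, 0.9, 0.99):
    print(p, one_box_ev(p), two_box_ev(p))
```

On this evidential calculation, one-boxing wins whenever p exceeds about 0.5005, which is why one-boxers lean so heavily on the Predictor’s track record. The causal decision theorist rejects the calculation itself: since the box is already filled, p should not enter into it at all.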
In this way, the paradox challenges our notions of causality and rational decision-making. Can our current choice affect a decision that’s already been made? Or does the Predictor’s accuracy mean it’s better to trust the pattern of past outcomes?
Split Decision
The paradox thus splits decision-makers into two groups: ‘two-boxers’ and ‘one-boxers’, each advocating for a different decision based on their own logic.
Two-boxers argue that the rational decision is to take both boxes. As the Predictor’s decision about the content of the opaque box is already determined before you choose, your choice can’t change the contents. This implies that no matter what, you won’t be worse off taking both boxes. The least you can get is the $1,000 from the transparent box, and at best, you could get an additional $1 million if the Predictor predicted incorrectly.
On the other hand, one-boxers argue that the sensible decision, considering the Predictor’s near-perfect track record, is to take only the opaque box. They point out that almost everyone who has taken two boxes has found the opaque one empty, while those who took only the opaque box won the million dollars. Thus, based on the evidence, it seems sensible to become a one-boxer.
The decision-making here presents a fascinating conflict between reason (which seemingly lacks supporting evidence) and evidence (which seemingly lacks rational explanation). It essentially raises the question: should we trust the evidence of a well-documented pattern or rely on the rational logic of decision-making?
Causality: The Predictor and the Future
The first crucial point to clarify in Newcomb’s Paradox is the nature of causality at play. The scenario eliminates any notion of backward causality or retro-causality; your choice does not affect the Predictor’s prior decision nor alter the content of the opaque box. This stipulation aligns with our typical understanding of time’s arrow: the future does not influence the past.
The Predictor’s decision is a genuine prediction and doesn’t involve any time-travelling. It infers your choice before you make it, but it doesn’t ‘react’ to your decision.
Predictability: Unravelling the Accuracy of the Predictor
The Predictor’s high accuracy complicates the decision-making process. If you tend to be a two-boxer, you might think that it’s likely the Predictor has foreseen this and left the opaque box empty. Conversely, if you lean towards one-boxing, you might believe that the Predictor has probably predicted this and filled the opaque box with the million dollars. The paradox then becomes less about the boxes you choose and more of a high-stakes mind game where you try to leverage the Predictor’s uncanny accuracy for your gain.
Identity: The Person You Choose to Be
This leads to another fascinating dimension: the intersection of predictability and identity. If the Predictor can predict your decision based on your inherent nature, then maybe the real strategy lies in manipulating your own disposition to game the system. The question then evolves from ‘which box should you choose?’ to ‘who should you choose to be?’
In essence, if you aspire to secure the million dollars, the optimal strategy might be to become the type of person who would always choose one box. The paradox suggests that by firmly committing to this decision, you make it likely for the Predictor to foresee this choice and fill the opaque box accordingly.
The role of the Predictor, therefore, not only tests our understanding of causality and predictability, but it also nudges us to introspect on the role our identity plays in decision-making. It prompts us to consider the potential power of a self-fulfilling prophecy, where our decision to be a certain ‘type’ of person may indeed lead to the desired outcome. Thus, Newcomb’s Paradox elegantly encapsulates the intricate interplay between causality, predictability, and personal identity in shaping our choices and their consequences.
Conclusion: One Box or Two?
The question remains: why leave the sure $1,000 in the transparent box when the content of the opaque box is already decided? Why not take both? This question is at the heart of Newcomb’s Paradox. The paradox doesn’t necessarily dictate a ‘correct’ decision. Instead, it presents a problem that forces you to rethink rationality, predictability, and decision-making. It also highlights the complexity and paradoxical nature of decision-making when dealing with highly reliable predictors. Ultimately, though, the decision rests with you. Would you take one box or two?
Where Is Everybody?
In the early 1950s, the world was on the cusp of the Space Age, with rapid advancements in rocketry and a growing fascination with outer space. It was a time of optimism and curiosity about the cosmos, fuelled by science fiction and the nascent space programmes. Enrico Fermi, a Nobel Prize-winning physicist known for his work on the Manhattan Project, posed a question during a casual lunch conversation with colleagues, sparking a debate that would extend far beyond that moment. Fermi’s question ‘Where is everybody?’ resonated deeply. It juxtaposed the era’s technological optimism with a sobering, profound mystery. Given the vastness of the universe, why is there no evidence of, or contact with, any extra-terrestrial civilisation?
The Age and Size of the Universe
The age and size of the universe are key aspects of the Fermi Paradox. The universe is approximately 13.8 billion years old, and the Milky Way galaxy, where our solar system resides, is about 13.6 billion years old. By comparison, the Earth is about 4.5 billion years old. This vast timescale implies that if the evolution of life and development of technological civilisations is a common process, there should have been ample time for numerous advanced civilisations to arise in our galaxy alone.
The sheer size of the universe reinforces this idea. The Milky Way is home to an estimated 100 billion to 400 billion stars, many of which are likely to host their own planetary systems due to the prevalence of elements necessary for planet formation. This gives rise to an incredibly large number of potential sites for life.
Recent astronomical discoveries have further accentuated the perplexity of the Fermi Paradox. Modern space telescopes such as Kepler and TESS have identified thousands of exoplanets, many of them in the habitable zones of their stars. Rather than resolving the paradox, these discoveries have only deepened it, making the silence of the cosmos even more confounding.
Technological Advancement and Singularity
The concept of technological singularity—a point where technological growth becomes uncontrollable and irreversible—presents a fascinating intersection with the Fermi Paradox. If other civilisations have reached singularity, leading to exponential growth in their capabilities, why is there no evidence of their existence? This discrepancy raises questions about the nature of advanced civilisations and their technological trajectories. If we consider the rapid pace of human technological development, it’s reasonable to think that an extra-terrestrial civilisation, with a head start of even a few thousand years, would have achieved technological feats beyond our comprehension.
Could it be that the very nature of singularity leads civilisations to evolve in ways that are undetectable to us, or perhaps, that the pursuit of singularity inadvertently leads to self-destruction?
Proposed Solutions
The Zoo Hypothesis is one proposed solution to the Fermi Paradox. It suggests that extra-terrestrial civilisations are aware of our existence but have intentionally chosen not to contact us, perhaps preferring simply to observe. This could be due to a policy of non-interference, aimed at allowing younger civilisations like ours to develop and evolve independently.
The Great Filter hypothesis proposes that there is a critical barrier or a series of barriers that drastically reduce the probability of intelligent life arising, persisting, and becoming detectable by others. The concept of the Great Filter helps explain the lack of observed extra-terrestrial civilisations by suggesting that one or more critical steps in the development of life or civilisation are extremely unlikely or have a high probability of self-destruction.
A related hypothesis is the Rare Earth Hypothesis, which suggests that while simple life forms might be relatively common, more complex, multicellular organisms are exceptionally rare.
The Transcension Hypothesis offers a different perspective on the Fermi Paradox. It proposes that advanced civilisations might not expand outwards into the cosmos but rather inwards, by miniaturising and compressing their technological and informational systems. As a civilisation advances, it might focus on developing virtual realities, advanced simulations, and artificial intelligence rather than pursuing interstellar travel and communication.
In summary, the Zoo Hypothesis, the Great Filter and Rare Earth hypotheses, and the Transcension Hypothesis represent proposed solutions to the Fermi Paradox. Each offers a distinct perspective on the current lack of observed evidence of intelligent life beyond Earth.
Conclusion: The Search Goes On
A number of hypotheses have been proposed to attempt to solve the Fermi Paradox. Each offers a distinct perspective on the lack of observed evidence of civilisations or intelligent life beyond our planet. These ‘solutions’ explore possibilities such as intentional non-interference, the existence of insurmountable barriers in the development of complex life or civilisation, and the focus on inward technological advancement. While none of these hypotheses provide a definitive answer to the Fermi Paradox, they contribute to the ongoing discussion and encourage further exploration and research in the search for extra-terrestrial intelligence in our galaxy and beyond.
In conclusion, the Fermi Paradox and its related hypotheses serve not only as scientific enquiries but also as philosophical and ethical touchstones for humanity. They encourage us to ponder our existence, our future, and our responsibilities in the cosmic arena. As we continue to explore the universe and search for answers, it is possible that we may ultimately learn more about ourselves than about the cosmos that surrounds us.
This page is devoted to my Dad.
There has been much discussion of late about data published on 1 November, 2021, by the Office for National Statistics (ONS). It is titled ‘Deaths involving COVID-19 by vaccination status, England: deaths occurring between 2 January and 24 September 2021’.
The raw statistics show death rates in England for people aged 10 to 59, listing vaccination status separately. https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/deaths/datasets/deathsbyvaccinationstatusengland
Counter-intuitively, these statistics show that the death rates for the vaccinated in this age grouping were greater than for the unvaccinated. These numbers have since been heavily promoted and highlighted on social media by anti-vaccine advocates, who use them to argue that vaccination increases the risk of death.
The claim is strange, though, because we know from efficacy and effectiveness studies that COVID-19 vaccines offer strong protection against severe disease. For example, the efficacy and effectiveness of the Pfizer-BioNTech vaccine has been shown to be well over 90% in this regard in the most recent studies. https://www.yalemedicine.org/news/covid-19-vaccine-comparison
Vaccine efficacy of 90% means that you have a 90% reduced risk compared to an otherwise similar unvaccinated person, based on controlled randomised trials, while vaccine effectiveness refers to real-world outcomes. On either measure, vaccines work very well indeed.
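As a sketch, efficacy as a relative risk reduction can be computed from trial-style counts. The numbers below are invented purely for illustration and are not taken from any actual study.

```python
# Vaccine efficacy as a relative risk reduction.
# The counts below are invented purely for illustration.

def vaccine_efficacy(cases_vax, n_vax, cases_unvax, n_unvax):
    risk_vax = cases_vax / n_vax          # attack rate, vaccinated arm
    risk_unvax = cases_unvax / n_unvax    # attack rate, unvaccinated arm
    return 1 - risk_vax / risk_unvax      # relative risk reduction

# e.g. 10 cases among 20,000 vaccinated vs 100 among 20,000 unvaccinated:
print(f"{vaccine_efficacy(10, 20_000, 100, 20_000):.0%}")  # 90%
```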
So, what’s going on here?
Well, closer inspection of the ONS report reveals that over the period of the study, from January to September 2021, the age-adjusted risk of death involving COVID-19 was 32 times greater among unvaccinated people compared to fully vaccinated people. But hold on! How can we square this with the data from the table listing death rates of those aged 10 to 59 by vaccination status?
For the answer we turn to a classic statistical artefact known as Simpson’s Paradox, which seems to pop up and create misleading conclusions all over the place. https://leightonvw.com/2019/02/14/what-is-simpsons-paradox-and-why-it-matters/
It is a consequence of the way that data is presented.
Essentially, Simpson’s Paradox can arise when observing a feature of a broad, widely drawn group, where there is an uneven distribution of the population within this group, for example by age or vaccination status. Ignorance of the implications of Simpson’s Paradox can generate misleading conclusions, which can be, and in this case are, very dangerous.
The paradox in these particular ONS statistics arises specifically because death rates increase dramatically with age, so that at the very top end of this age band, for example, mortality rates are about 80 times as high as at the very bottom end. A similar pattern is observed between vaccination rates and age. For example, in the 10 to 59 data set more than half of those vaccinated are over the age of 40.
Those who are in the upper ranges of the wide 10 to 59 age band are, therefore, both more likely to have been vaccinated and also more likely to die if infected with COVID-19 or for any other reason, and vice versa. Age is acting, in the terminology of statistics, as a confounding variable, being positively related to both vaccination rates and death rates. Put another way, you are more likely to die in a given period if you are older and you are also more likely to be vaccinated if you are older. It is age that is driving up death rates not the vaccinations. Without the vaccinations, deaths would be hugely greater from COVID-19.
So, what if we divide the 10 to 59 group into smaller age groups?
If we break down the band into narrower age ranges, such as 10 to 19, 20 to 29, 30 to 39, 40 to 49, and 50 to 59, we find that the counter-intuitive headline finding immediately disappears. In each age band, the death rates of the vaccinated are vastly lower than those of the unvaccinated. This also applies in the higher age bands – 60 to 69, 70 to 79, and 80 plus.
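A small, entirely made-up example reproduces the reversal: the vaccinated do better within each age band, yet worse in the combined figures, because the vaccinated group skews much older.

```python
# Illustrative (invented) numbers demonstrating Simpson's Paradox.
# In each age band the vaccinated death rate is lower, yet the
# aggregate vaccinated rate is higher, because age is a confounder.

groups = {
    # band: (deaths_unvax, n_unvax, deaths_vax, n_vax)
    "younger": (9, 90_000, 0, 10_000),
    "older": (100, 10_000, 450, 90_000),
}

def rate(deaths, n):
    return deaths / n

for band, (du, nu, dv, nv) in groups.items():
    print(band, rate(du, nu), rate(dv, nv))   # vaccinated lower in each band

# Aggregated across both bands, the comparison flips:
du = sum(g[0] for g in groups.values())
nu = sum(g[1] for g in groups.values())
dv = sum(g[2] for g in groups.values())
nv = sum(g[3] for g in groups.values())
print(rate(du, nu), rate(dv, nv))             # vaccinated higher overall
```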
Basically, unvaccinated people are much younger on average, and therefore less likely to die.
Yet there are those out there who are more than happy to use these statistics to mislead. The consequence is that many who would otherwise choose to be vaccinated might refuse to do so. In truth, the age-adjusted risk of deaths involving coronavirus (COVID-19) over the first nine months of this year was in fact 32 times greater in the unvaccinated than the fully vaccinated. This is a hugely important statistic, and we must not let statistical manipulation be used to obscure this critical information. The lives of countless people really do depend on us exposing this truth.
Leighton Vaughan Williams, Professor of Economics and Finance at Nottingham Business School. https://www.ntu.ac.uk/staff-profiles/business/leighton-vaughan-williams
Read more in Leighton’s new publication, Probability, Choice, and Reason. https://www.amazon.co.uk/Probability-Choice-Leighton-Vaughan-Williams-ebook/dp/B09DPTVFFR/ref=sr_1_2?keywords=probability+choice&qid=1638207631&qsid=262-7509985-0691032&sr=8-2&sres=3540542477%2C0367538911%2C1294977482%2C1108713505%2C1138715336%2C0521747384%2C0387715983%2C3030486001%2C1444333429%2CB07KC98Z3C%2C0071381562%2C0631183221%2C0816614407%2C1848722834%2C3319820346%2CB07SZLGZYH&srpt=ABIS_BOOK
Much of our thinking is flawed because it is based on faulty intuition. But by using the framework and tools of probability and statistics, we can overcome this to provide solutions to many real-world problems and paradoxes. Further and deeper exploration of paradoxes and challenges of intuition and logic can be found in my recently published book, Probability, Choice and Reason.

When it comes to situations like waiting for a bus, our intuition is often wrong.
Imagine, there’s a bus that arrives every 30 minutes on average and you arrive at the bus stop with no idea when the last bus left. How long can you expect to wait for the next bus? Intuitively, half of 30 minutes sounds right, but you’d be very lucky to wait only 15 minutes.
Say, for example, that half the time the buses arrive at a 20-minute interval and half the time at a 40-minute interval. The overall average is now 30 minutes. From your point of view, however, it is twice as likely that you’ll turn up during a 40-minute interval as during a 20-minute interval.
This is true in every case except when the buses arrive at exact 30-minute intervals. As the dispersion around the average increases, so does the amount by which the expected wait time exceeds the average wait. This is the Inspection Paradox, which states that whenever you “inspect” a process, you are likely to find that things take (or last) longer than their “uninspected” average. What seems like the persistence of bad luck is simply the laws of probability and statistics playing out their natural course.
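The 20/40-minute example can be worked through directly. A random arrival lands in a gap with probability proportional to the gap’s length, and waits half of that gap on average:

```python
# Length-biased waiting: bus gaps of 20 and 40 minutes, equally common,
# so the average gap is 30 minutes.

gaps = [20, 40]               # minutes; assumed equally frequent
total = sum(gaps)             # time covered by one gap of each kind

# Probability of arriving during a gap of length g is g / total;
# the expected wait within that gap is g / 2.
expected_wait = sum((g / total) * (g / 2) for g in gaps)

print(expected_wait)          # about 16.67 minutes, not the intuitive 15
```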
Once made aware of the paradox, it seems to appear all over the place.
For example, let’s say you want to take a survey of the average class size at a college. Say that the college has class sizes of either 10 or 50, and there are equal numbers of each. So the overall average class size is 30. But in selecting a random student, it is five times more likely that he or she will come from a class of 50 students than of 10 students. So for every one student who replies “10” to your enquiry about their class size, there will be five who answer “50”. The average class size thrown up by your survey is nearer 50, therefore, than 30. So the act of inspecting the class sizes significantly increases the average obtained compared to the true, uninspected average. The only circumstance in which the inspected and uninspected average coincides is when every class size is equal.
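The class-size arithmetic above can be sketched in a few lines:

```python
# Class sizes of 10 and 50, in equal numbers.
class_sizes = [10, 50]

# "Uninspected" average: one observation per class.
true_avg = sum(class_sizes) / len(class_sizes)        # 30

# "Inspected" average: sampling a random student weights each class
# by its size, so the 50-student class is five times likelier to be hit.
students = sum(class_sizes)
survey_avg = sum(s * (s / students) for s in class_sizes)

print(true_avg, survey_avg)   # 30 versus roughly 43.3
```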
We can examine the same paradox within the context of what is known as length-based sampling. For example, when digging up potatoes, why does the fork go through the very large one? Why does the network connection break down during download of the largest file? It is not because you were born unlucky but because these outcomes occur for a greater extension of space or time than the average extension of space or time.
Once you know about the Inspection Paradox, the world and our perception of our place in it are never quite the same again.
Another day you line up at the medical practice to be tested for a virus. The test is 99% accurate and you test positive. Now, what is the chance that you have the virus? The intuitive answer is 99%. But is that right? The information we are given relates to the probability of testing positive given that you have the virus. What we want to know, however, is the probability of having the virus given that you test positive. Common intuition conflates these two probabilities, but they are very different. This is an instance of the Inverse or Prosecutor’s Fallacy.
The significance of the test result depends on the probability that you have the virus before taking the test. This is known as the prior probability. Essentially, we have a competition between how rare the virus is (the base rate) and how rarely the test is wrong. Let’s say there is a 1 in 100 chance, based on local prevalence rates, that you have the virus before taking the test. Now, recall that the test is wrong one time in 100. These two probabilities are equal, so the chance that you have the virus when testing positive is 1 in 2, despite the test being 99% accurate. But what if you are showing symptoms of the virus before being tested? In this case, we should update the prior probability to something higher than the prevalence rate in the tested population. The chance you have the virus when you test positive rises accordingly. We can use Bayes’ Theorem to perform the calculations.
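Here is a sketch of the Bayes’ Theorem calculation, assuming ‘99% accurate’ means both a 99% chance of a positive result when infected and a 99% chance of a negative result when not infected:

```python
# P(virus | positive test) via Bayes' Theorem.
# "99% accurate" is taken to mean sensitivity and specificity of 0.99.

def posterior(prior, sensitivity=0.99, false_positive_rate=0.01):
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

print(posterior(0.01))   # prior of 1 in 100: posterior is 0.5
print(posterior(0.30))   # symptomatic, higher prior: posterior rises
```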
In summary, intuition often lets us down. Still, by applying the methods of probability and statistics, we can defy intuition. We can even resolve what might seem to many the greatest mystery of them all – why we seem so often to find ourselves stuck in the slower lane or queue. Intuitively, we were born unlucky. The logical answer to the Slower Lane Puzzle is that it’s exactly where we should expect to be!
When intuition fails, we can always use probability and statistics to look for the real answers.
Leighton Vaughan Williams, Professor of Economics and Finance at Nottingham Business School. Read more in Leighton’s new publication Probability, Choice and Reason.
In a fascinating article published in the New York Times, Malcolm Browne relates how Dr. Theodore Hill would ask his mathematics students to go home and either toss a coin 200 times and record the results, or else pretend that they had done so. Either way, he would ask them to produce for him the results of their (real or imaginary) coin-tossing experiment.
Dr. Hill’s purpose in this experiment was to show just how difficult it is to fake data convincingly. It just isn’t that easy to make up a random sequence. Based on this knowledge, he would astound his students by almost unerringly picking out the fakers from the tossers!
One of the ways he would do this would be to spot how many times heads or tails would be listed six or more times in a row. In real life, this occurrence is overwhelmingly probable in 200 coin throws. To most of his students this long a sequence is counter-intuitive, an example of what is often termed the Gamblers’ Fallacy, i.e. the erroneous perception that independent random sequences will balance out over time, so that for example an extended sequence of heads is more likely to be followed by a tail than a head. The fakers, susceptible to the Fallacy, are thus easily exposed. Ordinary people, even mathematics students, simply can’t help introducing patterns into what is random noise.
This is related to a broader tool for detecting fabricated data, usually referred to as Benford’s Law, which essentially states that if we randomly select a number from a table of real-life data, the first digits are far from equally likely. For example, the probability that the first digit will be a ‘1’ is about 30%, rather than the intuitive 10% that would apply if all nine possible leading digits were equally likely. Benford’s Law applies to the distribution of leading digits in many naturally occurring phenomena, such as the populations of different countries or the heights of mountains. For example, choose a paper with a lot of numbers and circle those that occur naturally, such as stock prices. The lengths of rivers and lakes could be included, but not artificial numbers like telephone numbers. 30% or so of these numbers will start with a 1, and it doesn’t matter what units they are in. So the lengths of rivers could be denominated in kilometres, miles, feet or centimetres without it making a difference to the frequency distribution of the leading digits.
The empirical support for this proportion can be traced to the man after whom the Law is named, physicist Dr. Frank Benford, in a paper he published in 1938, called ‘The Law of Anomalous Numbers’. In that paper he examined 20,229 sets of numbers, as diverse as baseball statistics, the areas of rivers, numbers in magazine articles and so forth, confirming the 30% rule for the digit 1. For information, the chance of throwing up a ‘2’ as first digit is 17.6%, and of a ‘9’ just 4.6%. Trailing (i.e. last) digits behave differently: in most real-life data they are close to uniformly distributed, and deviations from that uniformity are just as suspicious. Digit analysis is a great way, therefore, of checking the veracity of receipts. If, for example, there is an unusual number of trailing digit ‘7’s, there’s a decent chance that the figures are cooked.
To explain the basis of Benford’s Law, take £1 as a base. Assume this now grows at 10% per day.
£1.10, £1.21, £1.33, £1.46, £1.61, £1.77, £1.94, £2.14, £2.35, £2.59, £2.85, £3.13, £3.45, £3.80, £4.18, £4.59, £5.05, £5.56, £6.11, £6.72, £7.40, £8.14, £8.95, £9.84, £10.83, £11.92, £13.11, £14.42, £15.86, £17.45, £19.19, £21.11, £23.22, £25.50, £28.10, £30.91, £34.00, £37.40, £41.14, £45.26, £49.79, £54.74, £60.24, £66.26, £72.89, £80.18, £88.20, £97.02 …
So we see that the numbers stay a long time in the teens, less in the 20s, and so on through the 90s, and this pattern continues through three digits and so forth. Benford noticed that the probability that a number starts with the digit n is log (n+1) – log (n), where the logarithms are to base 10.
NB log10 1 = 0; log10 2 = 0.301; log10 3 = 0.4771 … log10 10 = 1.
Leading digit Probability
• 1 30.1%
• 2 17.6%
• 3 12.5%
• 4 9.7%
• 5 7.9%
• 6 6.7%
• 7 5.8%
• 8 5.1%
• 9 4.6%
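The table follows directly from the formula, and the compound-growth example above can serve as a check. The sketch below tallies the first digits of 1.1 to the power n for many values of n and compares them with the Benford probabilities:

```python
import math
from collections import Counter

# Benford's Law: P(first digit = d) = log10(d + 1) - log10(d).
def benford(d):
    return math.log10(d + 1) - math.log10(d)

def first_digit(x):
    # Scale x into [1, 10) and take the integer part.
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

# First digits of the compound-growth sequence 1.1**n.
counts = Counter(first_digit(1.1 ** n) for n in range(1, 1001))

for d in range(1, 10):
    print(d, f"{benford(d):5.1%}", f"{counts[d] / 1000:5.1%}")
```

The two columns track each other closely: around 30% of the compounded amounts begin with a 1, just as the formula predicts.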
Tax authorities are alert to this, or should be, which should make fraudulent activity just that little bit easier to detect, especially when the fraudster is unaware of the Benford distribution. For all right-minded citizens, we can call that Benford’s Bonus.
Links:
http://www.rexswain.com/benford.html
http://www.jstor.org/pss/984802
Further and deeper exploration of paradoxes and challenges of intuition and logic can be found in my recently published book, Probability, Choice and Reason.
Ask someone to toss a fair coin 32 times. Which of the following rows of coin toss patterns is more likely to result if they actually do toss the coins and record them accurately, and which is likely to be the fake?
HTTHTHTTHHTHTHHTTTHTHTTHTHHTTHHT
OR
HTTHTHTTTTTHTHTTHHHHTTHTHTHHTHHT
In both cases, there are 15 heads and 17 tails.
But would we expect a run of five heads or a run of five tails somewhere in the series? (Let r denote the length of a run.)
The chance of five heads = (1/2) to the power of r = (1/2) to the power of 5 = 1/32. But there are 28 opportunities for a run of five heads in 32 tosses. Same for a run of five tails.
A good rule of thumb is that when N (the number of opportunities for a run to take place) x (1/2 to the power of r) equals 1, it is likely that a run of length, r, will appear in the sequence. So, a run of length r is likely to appear when N = 2 to the power of r.
In the case of 32 coin tosses, with 28 possible runs of length five, N (28) is almost equal to 2 to the power of 5 (32). So a run of five heads (or of tails) is likely if a fair coin is tossed randomly 32 times in a row, and a run of four is almost certain.
Now look at the series of coin tosses above. The first series of 32 coin tosses has no run of heads (or tails) longer than three. The second series has a run of five tails and of four heads.
It is very likely indeed, therefore, that the second series is the genuine one, and the first one is the fake.
Appendix
Probability of 5 heads in a row = 1/32.
Probability of NOT getting 5 heads in a row from a particular run of 5 coin tosses = 31/32
Chance of NOT getting 5 heads in a row from any of the 28 windows of five coin tosses = (31/32) to the power of 28 = 41.1% (an approximation: the overlapping windows are treated as if they were independent).
Therefore, the probability of getting 5 heads in a row from 28 runs of five coin tosses = 58.9%.
Similarly for tails.
The Probability of 5 heads OR 5 tails in a row = 1/32 + 1/32 = 1/16
Probability of NOT getting 5 heads OR 5 tails in a row from a particular run of 5 coin tosses = 15/16
Chance of NOT getting 5 heads OR 5 tails in a row from 28 runs of five coin tosses = (15/16) to the power of 28 =16.4%.
Therefore, the probability of getting 5 heads OR 5 tails in a row from 28 runs of five coin tosses = 83.6%
Probability of 4 heads in a row = 1/16.
Probability of NOT getting 4 heads in a row from a particular run of 4 coin tosses = 15/16
Chance of NOT getting 4 heads in a row from 29 runs of four coin tosses = (15/16) to the power of 29 = 15.4%.
Therefore, the probability of getting 4 heads in a row from 29 runs of four coin tosses = 84.6%.
Similarly for tails.
Probability of 4 heads OR 4 tails in a row = 1/16 + 1/16 = 1/8
Probability of NOT getting 4 heads OR 4 tails in a row from a particular run of 4 coin tosses = 7/8
Chance of NOT getting 4 heads OR 4 tails in a row from 29 runs of four coin tosses = (7/8) to the power of 29 = 2.1%
Therefore, the probability of getting 4 heads OR 4 tails in a row from 29 runs of four coin tosses = 97.9%
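A caveat on these appendix figures: the 28 (or 29) overlapping windows are not independent, since runs cluster together, so the power-rule calculation overstates the probabilities. An exact calculation, sketched below as a small dynamic programme, gives roughly 65% for a run of at least five (heads or tails) in 32 tosses and roughly 92% for a run of at least four, rather than 83.6% and 97.9%. The qualitative conclusion survives: a run of four remains very likely, and a run of five more likely than not.

```python
from fractions import Fraction

# Exact probability that n fair coin tosses contain a run of at
# least r identical faces. By symmetry we need only track the
# current run length, not which face it is.

def prob_run(n, r):
    states = {1: Fraction(1)}       # after the first toss, run length 1
    hit = Fraction(0)               # probability a run of r has occurred
    for _ in range(n - 1):
        new = {1: Fraction(0)}
        for k, p in states.items():
            new[1] += p / 2         # next toss differs: run resets to 1
            if k + 1 == r:
                hit += p / 2        # run of length r achieved (absorbed)
            else:
                new[k + 1] = new.get(k + 1, Fraction(0)) + p / 2
        states = new
    return float(hit)

print(prob_run(32, 5))   # about 0.65
print(prob_run(32, 4))   # about 0.92
```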
Exercise
When Nasser Hussain was England cricket captain during 2000–01, he lost all 14 coin tosses in the international matches he captained. Given that he captained England in all international matches about a hundred times, what was the probability that he would face this long a losing streak during his captaincy?
