THE BUS STOP SCENARIO
Take the case of a bus that arrives, on average, every 20 minutes. It’s not a perfect rule—sometimes the bus arrives early and sometimes it’s late. But, when you calculate all the arrival times, it averages out to three times an hour, or every 20 minutes. Now, picture yourself emerging from a side street to the bus stop, with no idea when the bus last arrived. The question that naturally arises is: how long should you expect to wait for the next bus?
Your initial thought might be, ‘Well, if it’s 20 minutes on average, then I should expect to wait around 10 minutes’. That is half of the average interval, and it would indeed be the right answer if the bus arrivals were perfectly evenly spaced. However, if you find yourself waiting longer than this, you might start to feel like the world is against you. The question then arises: are you just unlucky, or is something else at play?
This is where we introduce the concept of the Inspection Paradox.
UNRAVELLING THE INSPECTION PARADOX
The Inspection Paradox is a statistical phenomenon that reveals how our expected wait times can differ from the average times we calculate, due to the randomness of our inspections or experiences.
To illustrate this, let’s look deeper into the bus scenario. The bus schedule is not as straightforward as it might seem. Remember, the bus arrives every 20 minutes on average, but not at precise 20-minute intervals. Variability changes things.
UNPREDICTABILITY IN THE BUS SCHEDULE
Consider a situation where half of the time the bus arrives at an interval of 10 minutes, and the other half at an interval of 30 minutes. The overall average remains at 20 minutes, but your experience at the bus stop will differ. If you show up at the bus stop at a random time, it’s statistically more probable that you will turn up during the longer 30-minute interval than the shorter 10-minute interval.
This variation has significant implications for your expected wait time. If you land in the 30-minute interval, you can expect to wait around 15 minutes, half of that interval. If you find yourself in the 10-minute interval, you’ll only wait around 5 minutes on average. However, you’re three times more likely to hit the 30-minute gap, which means your expected wait time skews closer to 15 minutes than 5 minutes. On average, your expected wait time becomes 12.5 minutes, contrary to the intuitive answer of 10 minutes. This is calculated as follows: (3 × 15 + 1 × 5)/4 = 50/4 = 12.5 minutes.
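To see this effect numerically, here is a minimal Monte Carlo sketch in Python, assuming the two interval lengths described above and a passenger who turns up at a uniformly random moment (the function name and trial count are purely illustrative):

```python
import random

# A minimal sketch of the bus-stop example: intervals of 10 and 30 minutes
# occur equally often (average 20 minutes), and a passenger arriving at a
# random moment lands in a gap with probability proportional to its length.
def simulate_average_wait(trials=200_000):
    gaps = [10, 30]                              # the two possible intervals, in minutes
    total_wait = 0.0
    for _ in range(trials):
        # Length-biased choice: a random arrival is three times as likely
        # to fall inside a 30-minute gap as inside a 10-minute one.
        gap = random.choices(gaps, weights=gaps)[0]
        total_wait += random.uniform(0, gap)     # wait until the next bus
    return total_wait / trials

if __name__ == "__main__":
    print(f"average wait ≈ {simulate_average_wait():.2f} minutes")   # close to 12.5, not 10
```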
IMPLICATIONS OF THE INSPECTION PARADOX
This surprising realisation is the crux of the Inspection Paradox. It essentially states that when you randomly ‘inspect’ or experience an event without knowing its schedule or distribution beforehand, it often seems to take longer than the average time. This isn’t due to some cosmic force giving you a hard time; it’s simply how probability and statistics operate in the randomness of real life.
Understanding the Inspection Paradox can fundamentally change how you interpret your everyday experiences. It’s not about bad luck but rather about understanding that your perception of averages can be skewed by variability around the average.
EVERYDAY INSTANCES OF THE INSPECTION PARADOX
Once you’re aware of the Inspection Paradox, you might start noticing it in various aspects of your everyday life.
EDUCATION INSTITUTION: AVERAGE CLASS SIZE
Consider an educational institution that reports an average class size of 30 students. Now, if you were to randomly ask students from this institution about their class size, you might find that your calculated average is higher than the reported 30.
Why does this happen?
The Inspection Paradox is at play here. If the institution has a range of small and large classes, you’re more likely to encounter students from larger classes in your random sample. This leads to a bigger average class size in your interview sample compared to the actual average class size.
Say, for example, that the institution has class sizes of either 10 or 50, and there are equal numbers of each. In this case, the overall average class size is 30. But in selecting a random student, it is five times more likely that they will come from a class of 50 students than from a class of 10 students. So for every one student who replies ‘10’ to your enquiry about their class size, there will be five who answer ‘50’. So the average class size thrown up by your survey is 5 × 50 + 1 × 10, divided by 6. This equals 260/6 = 43.3. The act of inspecting the class sizes thus increases the average obtained compared to the uninspected average. The only circumstance in which the inspected and uninspected averages coincide is when every class size is equal.
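A short simulation makes the same point, assuming the illustrative numbers above (equal numbers of classes of 10 and of 50 students); the function name and sample size are arbitrary:

```python
import random

# Sampling a random STUDENT, rather than a random class, weights each class
# by its size, which is what inflates the surveyed average.
def surveyed_average(num_classes_each=1_000, sample_size=100_000):
    class_sizes = [10] * num_classes_each + [50] * num_classes_each   # true average: 30
    # Build the student population: each class contributes one entry per student.
    students = [size for size in class_sizes for _ in range(size)]
    sample = random.choices(students, k=sample_size)                  # survey random students
    return sum(sample) / len(sample)

if __name__ == "__main__":
    print(f"average class size reported by random students ≈ {surveyed_average():.1f}")   # about 43.3
```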
LIBRARY STUDY TIMES
Consider another scenario where you visit a library and conduct a survey asking the attendees how long they usually study. You might notice that the reported study times are generally higher than you might have expected. This can happen not because of any over-reporting but because the sample of students you survey is skewed towards those who spend longer times studying in the library. The reason is that the longer a student stays in the library, the higher the chance you’ll find them there during your random survey. Short-term visitors are less likely to be part of your sample, skewing the average study time upwards.
THE RESTAURANT AND THE SUPERMARKET
You might think about the implications for other scenarios, such as restaurant wait times or queue lengths at supermarkets. For the reasons we have learned about, we might expect our individual experience of waiting to be that little bit longer than a calculation of the unobserved average.
THE PARADOX IN OTHER REAL-LIFE SCENARIOS
Potato Digging
Why do you often accidentally cut through the biggest potato when digging in your garden? It’s because larger potatoes take up more space in the ground, increasing the likelihood of your shovel hitting them.
Downloading Files
Consider the frustration when your internet connection breaks during the download of the largest file. It’s because larger files take longer to download, increasing the window of time for potential connection issues to arise.
CONCLUSION: A NEW LENS
Understanding the Inspection Paradox equips you with a new lens through which to look at the world. It helps explain why your experiences might often differ from average expectations. It’s simply the laws of probability and statistics unfolding in a world full of randomness. With this knowledge, you can navigate the world with more informed expectations and a greater appreciation for statistical realities.
Exploring the Two Child Paradox
Leighton Vaughan Williams is on BlueSky (leightonvw.bsky.social), Threads (leightonvw) and Twitter (@leightonvw).
A version of this article appears in TWISTED LOGIC: Puzzles, Paradoxes, and Big Questions, by Leighton Vaughan Williams. Chapman & Hall/CRC Press, 2024.
THE BOY OR GIRL PARADOX
The Boy or Girl Paradox, also known as the Two Child Paradox, is a fascinating probability puzzle that challenges our intuitive understanding of probabilities. The paradox revolves around a simple scenario: a family with two children, where one of the children is known to be a boy. The question that arises is: What is the probability that the other child is also a boy? Intuitively, one might assume that the probability is 50%, as there appear to be only two possibilities: a boy or a girl, and we assume that in general a child is equally likely to be a boy or a girl. However, a more detailed analysis reveals that the correct probability is 1/3. To fully grasp the paradox and its implications, let’s dive deeper into the concepts of probability and conditional probability, as well as explore various scenarios and explanations.
ANALYSING THE GENDER COMBINATIONS
To begin our analysis, let’s consider all the possible combinations of genders for the two children. We can denote a boy as B and a girl as G. With these symbols, the four potential combinations of genders are:
Boy–Boy (BB)
Boy–Girl (BG)
Girl–Boy (GB)
Girl–Girl (GG)
It’s important to note that each combination is equally likely, assuming an equal chance of a child being a boy or a girl.
THE PARADOX REVEALED: EVALUATING THE PROBABILITIES
Now, let’s examine each combination and its implications for the Boy or Girl Paradox:
Boy–Boy (BB): This combination represents the scenario where both children are boys. Out of the four possible combinations, BB has a probability of 1/4. It can be achieved in only one way: both children being boys (BB).
Boy–Girl (BG): This combination represents the scenario where the first child is a boy and the second child is a girl. This could be based, for example, on the order in which they were born. Like BB, the BG combination also has a probability of 1/4.
Girl–Boy (GB): Similar to the BG combination, this combination also has a probability of 1/4.
Girl–Girl (GG): This combination represents the scenario where both children are girls. Out of the four possible combinations, GG has a probability of 1/4. It can be achieved in only one way: both children being girls (GG).
CONDITIONAL PROBABILITY AND THE RESOLUTION OF THE PARADOX
So, one of the two children is known to be a boy. Out of the three remaining possibilities (BB, BG, and GB), only one combination (BB) has both children being boys. Therefore, the probability of the other child being a boy is 1/3. This means that in scenarios where we know one child is a boy, the probability of the other child being a boy is 1/3, not the intuitive 1/2. The paradox arises from the fact that we often overlook the distinction between the BG and GB scenarios, treating them as a single outcome. In fact, they represent two distinct possibilities. The Boy or Girl Paradox serves as a reminder of the importance when solving probability problems of carefully analysing the given information, considering all possible outcomes, and questioning our assumptions.
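Because the whole argument rests on counting equally likely combinations, it can be checked with a few lines of Python. This sketch simply enumerates the four combinations under the article’s assumptions (boys and girls equally likely, births independent) and applies the two different conditions discussed below:

```python
from itertools import product

# All four equally likely two-child families: BB, BG, GB, GG (older child first).
families = list(product("BG", repeat=2))

# Condition 1: at least one child is a boy.
at_least_one_boy = [f for f in families if "B" in f]
p_both_boys = sum(f == ("B", "B") for f in at_least_one_boy) / len(at_least_one_boy)

# Condition 2: the OLDER child is known to be a boy.
older_is_boy = [f for f in families if f[0] == "B"]
p_both_boys_given_older = sum(f == ("B", "B") for f in older_is_boy) / len(older_is_boy)

print(p_both_boys)               # 1/3: BB out of {BB, BG, GB}
print(p_both_boys_given_older)   # 1/2: BB out of {BB, BG}
```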
EXPLORING DIFFERENT SCENARIOS AND EXPLANATIONS
To gain a deeper understanding, let’s explore the Boy or Girl Paradox from different perspectives and scenarios. This will help solidify our understanding of conditional probability and shed light on why the intuitive answer of 1/2 is incorrect.
SCENARIO 1: IDENTIFYING THE BOY
Imagine you meet a man at a conference who mentions his two children and reveals that one of them is a boy. What is the likelihood that his other child is a girl? Most people would intuitively assume the probability is 1/2, but it is actually 2/3. The key to understanding this lies in the fact that we do not have information about which child, the older or the younger, is the boy. If the man had specified that the older child is a boy, then the probability would indeed be 1/2. However, since we don’t have that specific information, the probability changes.
To illustrate this, let’s consider the possible combinations of genders when we know one child is a boy:
Older Child: Boy – Younger Child: Boy (BB)
Older Child: Boy – Younger Child: Girl (BG)
Older Child: Girl – Younger Child: Boy (GB)
In this scenario, all three options are equally likely, and in two of them (BG and GB) the other child is a girl. Therefore, there is a 2/3 probability that the other child is a girl, as only one of the three possibilities (BB) has both children being boys.
SCENARIO 2: DIFFERENTIATING BETWEEN CHILDREN ALTERS PROBABILITY OUTCOMES
Any method allowing us to differentiate between one boy and another, or one girl and another, changes the probabilities. For example, if we are told that the older child is a boy, we can eliminate option 3, leaving just options 1 and 2. In this case, the probability is 1/2 that the other child is a girl, not 2/3.
Using the same logic, suppose a different scenario in which you meet a man in the park with his son and find out that he has two children, but nothing else. Well, in this case, there are only two possibilities:
Boy in the park—Girl at home
Boy in the park—Boy at home
Clearly, the probability that the other child (the child at home) is a girl now becomes 1/2.
In this case, it is location (the boy is in the park, the other child is not) rather than order of their birth that is the distinguishing characteristic.
APPLYING THE SAME CONCEPT TO A COIN TOSS
This scenario can be equated to having two coins and knowing that at least one of them is heads up. So, what’s the probability of the other coin also being heads? With two coins, four outcomes are possible: Heads—Heads, Heads—Tails, Tails—Heads, Tails—Tails. After learning that at least one of the coins is Heads, we can discount the Tails—Tails possibility. We’re left with three equally likely scenarios: two of these pairs contain a Tails and one (Heads—Heads) does not. Consequently, the likelihood that the other coin is Tails is 2/3. If, on the other hand, we are told that the first of two coins has landed heads up, what is now the chance that the second coin will land tails up? Now, it’s 1/2. By introducing a distinguishing feature, such as the first child that was born or the first coin that was tossed, we change the conditional probability.
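The coin version is easy to check by simulation. The sketch below is illustrative (names and trial count are arbitrary): it estimates both conditional probabilities mentioned above by generating random pairs of tosses and keeping only the relevant cases:

```python
import random

def coin_experiment(trials=200_000):
    other_tails_given_any_heads = []     # cases where at least one coin is Heads
    second_tails_given_first_heads = []  # cases where the first coin is Heads
    for _ in range(trials):
        first, second = random.choice("HT"), random.choice("HT")
        if "H" in (first, second):
            # "The other coin is Tails" simply means the pair is not Heads-Heads.
            other_tails_given_any_heads.append((first, second) != ("H", "H"))
        if first == "H":
            second_tails_given_first_heads.append(second == "T")
    return (sum(other_tails_given_any_heads) / len(other_tails_given_any_heads),
            sum(second_tails_given_first_heads) / len(second_tails_given_first_heads))

if __name__ == "__main__":
    p_any, p_first = coin_experiment()
    print(f"P(other coin is Tails | at least one Heads) ≈ {p_any:.3f}")    # about 2/3
    print(f"P(second coin is Tails | first is Heads)    ≈ {p_first:.3f}")  # about 1/2
```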
GIRL NAMED FLORIDA SCENARIO
Suppose instead we learn that one of the girls is named Florida, which is a good discriminating characteristic. How does this additional information affect the probability of the other child being a boy? Let’s explore this scenario.
If you identify one of the children as a girl named Florida, only two of the following four options remain possible:
Boy, Boy
Girl named Florida, Girl
Girl named Florida, Boy
Girl not named Florida, Boy
Here, the name serves as the discriminating characteristic instead of, say, order of birth or location. Options 1 and 4 can be discarded in this scenario, leaving Options 2 and 3. The chance that the other child is a girl (almost certainly not named Florida) is therefore 1 in 2. Similarly, the chance that the other child is a boy is also 1 in 2.
This example demonstrates how additional specific information, notably identification of a discriminating characteristic of some kind, can impact the probabilities.
VARIATIONS OF THE PARADOX
The Boy or Girl Paradox is sensitive, therefore, to the context of the problem, and subtle changes in how the information is obtained can lead to different solutions. It is for this reason crucial to understand the precise context and conditions when evaluating probability problems.
For example, consider two variations of the initial problem:
Variation 1: ‘Mr. Smith has two children, and one of them is a boy—that’s all you know. What is the probability that the other is also a boy?’ In this case, the correct answer would be 1/3.
Variation 2: ‘Mr. Smith has two children, and you see one of them, who is a boy. What is the probability that the other is also a boy?’ In this case, the correct answer would be 1/2. By physically observing a boy, we gain additional information that distinguishes between the Boy–Girl and Girl–Boy combinations, leading to different probabilities. In this case, location is the distinguishing characteristic.
These variations highlight the importance of understanding the precise context of the problem to arrive at the correct solution.
CONCLUSION: REAL-LIFE APPLICATIONS AND IMPORTANCE
While the Boy or Girl Paradox is a theoretical puzzle, it offers valuable insights into real-world situations involving probabilities. In particular, the paradox serves as a reminder that we must be cautious when interpreting probabilities in real-life situations. It emphasises the importance of carefully considering the context, available information, and potential biases that could influence our judgment. By developing a strong foundation in probability theory, critical thinking skills, and understanding conditional probabilities, we can make more informed decisions, minimise risks, and optimise outcomes in both personal and professional contexts.
Exploring the Bertrand’s Box Dilemma
THE GAME AND THE PUZZLE
The Bertrand’s Box Paradox, first posed by mathematician Joseph Bertrand, offers a fascinating challenge to our intuitive grasp of probability.
In Bertrand’s scenario, there are three indistinguishable boxes. Each is closed. The first box contains two gold coins, while the second box holds two silver coins. The third box contains one gold and one silver coin. This setup paves the way for an exploration of probability and decision-making that might seem to challenge common sense.
THE CHOICE AND THE IMPLICATION
Imagine yourself in this setting. You randomly select one of these boxes and, without looking, you take one of the two coins from that box. As you open your hand, you see a shiny gold coin resting in your palm.
Now, the presence of the gold coin means that you didn’t select the box containing the two silver coins. Thus, the box in front of you must either be the one containing two gold coins or else it is the one containing one gold and one silver coin. With this information, what is the probability that the other coin in the box is also gold?
THE INTUITIVE ANSWER AND THE SURPRISE
At first glance, the problem appears simple. Having excluded the box containing the two silver coins, we are left with two possible boxes: a box with two gold coins, and a box with one gold and one silver coin. Based on this information, we might presume that the likelihood of each box being the one we randomly selected should be equal. This presumption would lead us to the intuitive conclusion that the chance the other coin is gold stands at 1/2. Likewise, the chance that it is silver would also be 1/2. But is this intuition correct?
In fact, the truth diverges from this intuitive explanation. The correct answer to the probability that the other coin is gold is not 1/2, but 2/3. This outcome might seem to defy common sense. How could merely examining one coin influence the composition of the remaining concealed coin?
THE REVELATION AND THE TRUE ANSWER
To solve this puzzle, we need to look deeper into the details. To do so, let’s imagine that each coin in the boxes has a unique label. In the gold coin box, we have Gold Coin 1 and Gold Coin 2. In the mixed box, there’s Gold Coin 3 and Silver Coin 3, while the silver box holds Silver Coin 1 and Silver Coin 2.
When we initially drew a gold coin from our chosen box, three equally likely events could have occurred. We could have drawn Gold Coin 1, Gold Coin 2, or Gold Coin 3. We remain unaware of which specific gold coin we hold, but the outcomes for the remaining coin in the box vary based on this choice. If we had picked Gold Coin 1 or Gold Coin 2, the remaining coin in the box would also be gold. So, there are two chances it would be gold. However, if it was Gold Coin 3, the other coin in the box would be silver. This is one chance compared to the two chances it is gold.
When we consider these equally likely scenarios, the probability that the other coin is gold stands at 2/3, whereas the probability that it’s silver is 1/3. A seemingly simple choice of coin selection reveals in this way a solution that seems to challenge our intuitive understanding of probability.
THE IMPACT OF NEW INFORMATION
Before we drew the gold coin, the probability that we had chosen the box with two gold coins was 1/3. But when we uncovered the gold coin, we didn’t merely exclude the box with two silver coins, we also gathered new information. Specifically, we could have drawn a silver coin if our selected box was the one with mixed coins, yet we drew a gold coin. This fresh piece of information now means that it is twice as likely that we chose the box with two gold coins rather than the mixed one, because there were two ways this could have happened, compared to just one way if we had selected the mixed box.
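This Bayesian updating is easy to verify with a short simulation: pick a box at random, draw a random coin from it, and look only at the runs in which the drawn coin is gold. The box contents below follow the description above; everything else (names, trial count) is illustrative:

```python
import random

BOXES = [("gold", "gold"), ("gold", "silver"), ("silver", "silver")]

def simulate(trials=200_000):
    other_is_gold = []
    for _ in range(trials):
        box = list(random.choice(BOXES))        # choose a box at random
        drawn = box.pop(random.randrange(2))    # take one of its coins at random
        if drawn == "gold":                     # condition on having seen gold
            other_is_gold.append(box[0] == "gold")
    return sum(other_is_gold) / len(other_is_gold)

if __name__ == "__main__":
    print(f"P(other coin is gold | drew gold) ≈ {simulate():.3f}")   # about 2/3
```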
A BRIEF SUMMARY OF THE BOX PARADOX
Imagine there are three boxes:
Box 1 contains two gold coins.
Box 2 contains one gold coin and one silver coin.
Box 3 contains two silver coins.
Initially, without any further information, the probability of choosing any one of the boxes is 1/3.
When you draw a coin and see that it is gold, you’re effectively eliminating Box 3 (the box with two silver coins) from consideration because it cannot possibly be the box you chose. This leaves you with Box 1 and Box 2 as possibilities.
However, the key insight is in how we update our probabilities based on the new information that the drawn coin is gold:
For Box 1 (with two gold coins), there are two chances of drawing a gold coin (since both coins are gold).
For Box 2 (with one gold and one silver coin), there is only one chance of drawing a gold coin.
Therefore, given that you have drawn a gold coin, it is twice as likely that you have chosen Box 1 compared to Box 2. This updates the probabilities to:
2/3 chance that the box chosen was Box 1 (with two gold coins).
1/3 chance that the box chosen was Box 2 (with one gold and one silver coin).
CONCLUSION: THE KEY LESSON
The paradox of Bertrand’s Box serves to remind us of the nuanced nature of probability, and illustrates the central importance of incorporating new information into probability calculations. Ultimately, though, it highlights the deceptive character of common intuition, encouraging us to challenge our feelings with the power of reason.
The Deadly Doors Dilemma
THE GAME OF DESTINY: CHOOSING BETWEEN FOUR DEADLY DOORS
Welcome to a dark game of chance and destiny, featuring four distinctive doors coloured red, yellow, blue, and green. Three of these gateways lead to an instant, dusty demise, while the remaining one offers a golden path to fame and fortune. The destiny of each door is randomly assigned by the host, who picks out four coloured balls from a bag—red, yellow, blue, and green. This random process determines the fate that each door offers.
THE INITIAL CHOICE AND ODDS OF SURVIVAL
Suppose you find yourself drawn to the red door. Given the game’s rules, your chance of picking the lucky door and moving on to a path of wealth and glory stands at just one in four, or 25%. Conversely, the unnerving possibility of your choice leading to a dusty doom looms large, with a daunting chance of three in four, or 75%. This calculation comes directly from the fact that out of the four doors, only one leads to fortune, while the other three lead to an unwelcome demise.
A TWIST IN THE TALE: THE HOST’S REVEAL
But the game involves a twist: the host, who knows where each door leads, opens one of the remaining doors. In this case, he reveals the yellow door to be one of the deadly ones. This is a part of the game’s rule—the host must open a door after the initial choice, revealing one of the deadly doors while leaving the lucky door unopened.
THE PIVOTAL DECISION: TO SWITCH OR NOT TO SWITCH
With one door opened and its deadly fate exposed, you face a critical decision. Would you stick with your original choice, the red door, or change your fate by choosing either the blue or green door? This predicament is an extension of the classic three-door Monty Hall Problem, which we can term ‘Monty Hall Plus’, but the underlying logic is exactly the same.
THE COMMON MISCONCEPTION: MISUNDERSTANDING PROBABILITIES
Intuition might suggest that with one door less in the equation, the chance of the red door leading to fortune must have improved. After all, now there are only three doors left—the red, blue, and green. If we assume each door is now equally likely to be the lucky one, the probability of each would be one in three.
ANOTHER REVEAL, ANOTHER DEATH TRAP
However, the host has yet to finish his part. He proceeds to open another door, unveiling the blue one this time, which again turns out to be a death trap. Now, with only two doors remaining—the red and green—the odds seem to have further improved, right? The likelihood of each door leading to fortune should now stand at a clear 50-50, or does it? Does it matter if you stick with your original choice or switch to the remaining door?
THE COUNTERINTUITIVE TRUTH: WHY THE INITIAL CHOICE MATTERS
Contrary to intuitive reasoning, the answer is a resounding yes; it does matter if you stick or switch. The reason for this lies in the fact that the host knows what lies behind each door. When you initially chose the red door, your odds of it leading to fame and fortune were 25%. These odds remain unchanged if you persist with your original choice, regardless of which doors the host reveals subsequently.
THE VALUE OF INFORMATION: HOW THE HOST’S ACTIONS ALTER THE ODDS
Here lies the crux of the game—the host’s actions, since they are informed, change the probabilities associated with the remaining doors. Before the host opened the yellow door, there was a 75% chance that the fortunate door was one among the yellow, blue, or green doors. But now, with the yellow door revealed as deadly, that same 75% probability is shared between the remaining blue and green doors.
THE FINAL REVEAL: GREEN—THE FINAL OPTION
As the host opens the blue door, unveiling yet another deadly fate, the odds shift again. The chance of the green door being the fortunate one grows further, given that it is now the only door standing against your initial choice, the red door. Therefore, you could either stick with your original choice and hold onto the 25% chance of survival, or switch to the green door, enhancing your odds to a favourable 75%. Essentially, the combined probability of the doors not initially chosen (which was originally 3/4) now heavily favours the last unopened door (since two of three potential safe doors have been eliminated).
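For readers who like to check such claims, here is a minimal simulation of the four-door game as described above: the lucky door is assigned at random, you pick the red door, and the informed host opens two losing doors that you did not choose. Door names and the trial count are just for illustration:

```python
import random

DOORS = ["red", "yellow", "blue", "green"]

def play(switch, trials=200_000):
    wins = 0
    for _ in range(trials):
        prize = random.choice(DOORS)                    # the lucky door
        choice = "red"                                  # your initial pick
        # The host opens two deadly doors from among those you did not choose.
        openable = [d for d in DOORS if d != choice and d != prize]
        opened = random.sample(openable, 2)
        if switch:
            choice = next(d for d in DOORS if d != choice and d not in opened)
        wins += (choice == prize)
    return wins / trials

if __name__ == "__main__":
    print(f"stick:  {play(switch=False):.3f}")   # about 0.25
    print(f"switch: {play(switch=True):.3f}")    # about 0.75
```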
CONCLUSION: THE IMPLICATION OF KNOWLEDGE
This dynamic interplay of choices and probabilities is a result of the host’s knowledge about what lies behind each door. The host’s actions introduce new information into the game and influence the probability associated with the remaining unopened doors. The odds change because the host, knowing the outcomes, will never inadvertently reveal the lucky door. However, if the host didn’t possess this knowledge and the doors were revealed randomly, the game would lose its strategic aspect and boil down to sheer luck. In such a scenario, if two doors remain unopened, the chances would be a clear 50-50, making a coin toss as effective a decision-making tool as any other.
A Shakespearean Puzzler
THE THREE CASKETS PROBLEM
The narrative of William Shakespeare’s ‘The Merchant of Venice’ weaves intrigue around the character of the young heiress Portia. Amid the various plot developments, one of the more fascinating elements of the story lies in a puzzle set for anyone seeking her hand in marriage. Three caskets made of gold, silver, and lead each contain a different item. Only one holds the prize, a miniature portrait of Portia which symbolises the route to her heart. Portia alone knows that the portrait lies in the lead casket.
SUITORS AND THE CRYPTIC CASKETS: UNRAVELLING THE PECULIAR TEST
As the story unfolds, we learn that to claim Portia in holy wedlock a suitor must choose the casket that houses her portrait. Each casket comes engraved with a cryptic inscription, adding a layer of interest and sophistication to the task.
THE ALLURING GOLD: THE FIRST SUITOR’S TEST
The Prince of Morocco steps forward to face this intriguing test. He is confronted with the inscriptions on the caskets, each one at least as cryptic as the others. Drawn by the promise of desire inscribed on the gold casket, ‘Who chooseth me shall gain what many men desire’, he chooses it, hoping to find ‘an angel in a golden bed’. His dreams are shattered when he finds a skull and a cryptic scroll, instead of the image of Portia. The message on the scroll serves as a harsh reminder, ‘All that glisters is not gold’. With a heavy heart, he retreats, leaving Portia with a sigh of relief, uttering, ‘A gentle riddance’.
SILVER’S DECEPTION: THE SECOND SUITOR’S TURN
Emboldened by his self-worth, and unaware of which casket his predecessor had chosen, the Prince of Arragon interprets the inscription on the silver casket, ‘Who chooseth me shall get as much as he deserves’, as a validation of his worthiness. He selects this casket.
ADDING COMPLEXITY: THE PUZZLE TAKES A TWIST
Now let’s indulge in a thought experiment by introducing an intriguing layer to this complex puzzle. Suppose that, after Arragon’s selection of the silver casket, Portia must open one of the remaining caskets without revealing the portrait’s location. She must, therefore, open the gold casket, which she knows does not contain her likeness. This presents Arragon with the opportunity to hold to his initial choice, the silver casket, or switch to the remaining, unopened casket made of lead.
WEIGHING THE ODDS: ARRAGON’S PROBABILITY PARADOX
If Arragon believes that Portia’s knowledge of the caskets is equal to his, should he stick with his initial choice or take a chance on the unopened lead casket? His decision is far from straightforward, hinging on his interpretation of the cryptic inscriptions, his understanding of the shifting probabilities, and his perception of Portia’s actions.
THE PROBABILITY PUZZLE: DECIPHERING THE GAME OF CHANCE
To understand the implications of the new development, we must first delve into the realm of probability. At the outset, Arragon’s initial choice, the silver casket, has a one-third chance of being correct, assuming he has no other information. There is, therefore, a two-thirds probability that the portrait lies in one of the other two caskets.
Portia’s revelation that the gold casket doesn’t contain the portrait effectively shifts these odds if we can assume that she knows which of the caskets contains her portrait, and must not reveal it. The two-thirds chance, which was initially split between the gold and lead caskets, now converges entirely on the lead casket. Consequently, if Arragon changes his choice from the silver casket to the lead one, his probability of finding Portia’s portrait doubles from one-third to two-thirds, other things being equal.
FATEFUL DECISION: TO SWITCH OR NOT TO SWITCH
If he dismisses the inscriptions as mere distractions and recognises the probability shift in favour of the lead casket, then switching seems like the most rational move. However, if he believes that he has deciphered the true meaning of the inscriptions, he might decide to stick with his original choice.
ARRAGON’S DECISION: TO OPEN THE SILVER CASKET
Arragon is either unaware of the true probabilities or else is swayed by the cryptic clues. He sticks with the silver casket. However, it harbours only disappointment. Instead of Portia’s portrait, he discovers an image of a fool and a note mocking his decision, ‘With one fool’s head I came to woo, But I go away with two’. His self-confidence leads to his downfall, leaving him more foolish than when he first arrived.
THE POWER OF THE INSCRIPTIONS: GUIDE OR DISTRACTION?
The inscriptions on the caskets add an extra layer of uncertainty and complexity to Arragon’s decision-making process. They could be seen as guides leading the suitors to the correct choice, or they could be deceptive distractions meant to confuse and mislead. The inscription on the lead casket, ‘Who chooseth me must give and hazard all he hath’, could be perceived as a warning of the risks involved or as a subtle hint about the potential rewards of choosing what appears to be the least valuable casket.
CONCLUSION: THE POWER OF INFORMATION
In this thought experiment, the key element is the new information introduced by Portia when she opens the gold casket. After all, she knows where the portrait is. This single action has the potential to increase significantly Arragon’s chance of success. If Arragon understands and acts upon this new information, he can potentially improve his chances of selecting the correct casket from one in three to two in three. However, this seemingly simple shift in probability is complicated by the presence of other potentially influential factors, such as the cryptic inscriptions on the caskets. This makes the problem different from the basic Monty Hall decision.
He might also believe that Portia has no idea which casket contains the portrait. In that case, by opening the gold casket, she would be adding no information to what Arragon already has. He may as well be guided by any additional information he thinks he might pick up from the cryptic inscriptions. Either way, he faces a lonely but life-altering decision.
When Should We Expect Mercy?
THE SETTING
The setting is a prison where three inmates—Amos, Bertie, and Casper—are awaiting the hangman’s noose. The warden, in an act of mercy to celebrate the King’s birthday, will grant clemency to just one of the three prisoners. The choice of which inmate to pardon is made randomly, with each name placed in a hat and one drawn out. The warden now knows whom he will pardon, but the men on death row do not.
THE REQUEST
Amos makes a request to the warden. He asks the warden to name a prisoner who will NOT be pardoned, without revealing his (Amos’s) own fate. If the warden has chosen Bertie to be granted clemency, he should name Casper as one of the doomed. If it’s Casper who has been pardoned, the warden should name Bertie to be executed. If Amos himself is to be pardoned, the warden should simply toss a coin and name either Bertie or Casper as one of the doomed.
Amos’s Request: It’s essential to note here that Amos’s request is based on the assumption that the warden will not reveal if Amos himself is the pardoned prisoner.
THE WARDEN’S RESPONSE
The warden agrees to the request from Amos and reveals that Casper is not the pardoned prisoner.
WHAT DOES THIS MEAN FOR AMOS AND BERTIE?
With this new information to hand, each of the prisoners can re-evaluate their chances. Initially, Amos believes that his chance of a pardon is 1/3, but with Casper out of the running, he believes that his odds of clemency have risen to 1/2. But is he right in this belief?
RE-EVALUATING THE ODDS
Initially, the odds are 1/3 for each prisoner because only one of the three is chosen at random to be pardoned. However, when the warden reveals that Casper will not be pardoned, Amos gains new information, but none of it concerns his own fate, so his chances remain as they were, at 1/3. Meanwhile, Bertie’s odds of being pardoned have now increased to 2/3.
WHY ARE THE ODDS DIFFERENT?
This difference in the odds between Amos and Bertie might seem counterintuitive. How can they both receive the same information, yet have different survival odds? The answer lies in the warden’s selection process. The warden would never have named Amos as one of the condemned, because of Amos’s request, but he might have named Bertie instead of Casper. The fact that he doesn’t name Bertie when he might have done so indicates that Bertie’s chances of being pardoned have increased, while nothing has changed for Amos. Amos’s belief that his odds have increased to 1/2 is a misconception.
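A simulation of the warden’s protocol, exactly as set out above (random pardon, the warden never names Amos, and a coin toss between Bertie and Casper when Amos is the one pardoned), confirms the asymmetry. The trial count is arbitrary:

```python
import random

def simulate(trials=300_000):
    amos_pardoned = bertie_pardoned = casper_named = 0
    for _ in range(trials):
        pardoned = random.choice(["Amos", "Bertie", "Casper"])
        if pardoned == "Bertie":
            named = "Casper"
        elif pardoned == "Casper":
            named = "Bertie"
        else:                                        # Amos pardoned: warden tosses a coin
            named = random.choice(["Bertie", "Casper"])
        if named == "Casper":                        # keep only the runs matching what Amos heard
            casper_named += 1
            amos_pardoned += (pardoned == "Amos")
            bertie_pardoned += (pardoned == "Bertie")
    return amos_pardoned / casper_named, bertie_pardoned / casper_named

if __name__ == "__main__":
    p_amos, p_bertie = simulate()
    print(f"P(Amos pardoned   | warden names Casper) ≈ {p_amos:.3f}")    # about 1/3
    print(f"P(Bertie pardoned | warden names Casper) ≈ {p_bertie:.3f}")  # about 2/3
```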
A LARGER SCENARIO
If this still seems puzzling, consider a larger group of 26 prisoners. If Amos asks the warden to name 24 condemned prisoners in random order without revealing his own fate on any occasion, each prisoner initially has a 1/26 chance of being pardoned. But every time a doomed prisoner is named, the chance that each of the remaining prisoners (except for Amos) will be pardoned increases.
Once every prisoner other than Amos and Bertie has been named as condemned, Amos’s chances of survival remain at 1/26. However, Bertie’s odds of being pardoned have now increased to 25/26, even though only the two of them remain unnamed by the warden.
So, even though it might seem as if Amos has a very good chance of being pardoned, the reality is that his odds have not changed: they remain at 25/1 against, which represents a 1/26 chance that Amos will escape the noose.
CONCLUSION: THE KEY TAKEAWAY
The Three Prisoners Problem highlights the importance of understanding the method by which we obtain information and its impact on the probabilities. It’s a fascinating exploration of conditional probability that shows how the same piece of information can affect the chances of two individuals differently, based on the process by which that information was revealed. As such, it is a classic example of how counterintuitive probability can be, especially in situations where information is revealed in a conditional manner.
When Should We Change Our Mind?
THE GENESIS OF THE MONTY HALL PROBLEM
The Monty Hall Problem was named after the original host of the American game show, ‘Let’s Make a Deal’. It became a topic of popular debate after the columnist Marilyn Vos Savant answered a reader’s question about it in Parade magazine.
The concept is that contestants are given a choice of three doors. Behind one door lies a highly desirable prize, such as a car, while behind the other two doors lie much less desirable prizes, such as goats. The car is placed randomly behind one of the doors, preventing contestants from predicting its location based on prior observations or information.
THE PUZZLE UNVEILED
Imagine yourself on this game show. You are asked to choose one of three doors (let’s call them Doors 1, 2, and 3). After making your choice (let’s say you choose Door 1), the host, who knows what’s behind each door, opens another door (for instance, Door 3) to reveal a goat.
He then offers you a choice. You can stick with your original decision (Door 1 in this case), or you can switch to the remaining unopened door (Door 2). You should note that the host always opens a door that you didn’t choose and that hides a goat, increasing the suspense and making the game more interesting.
The question that the Monty Hall problem asks is: Should you stick with your original choice, or should you switch to the other unopened door?
THE COUNTERINTUITIVE ANSWER
At first glance, it might seem like your odds of winning the car are the same whether you stick to your original choice or switch. After all, there are only two doors left unopened, so isn’t there a 50% chance that the car is behind each of them?
In her column, Marilyn Vos Savant argued that the chance is not 50% either way, but that you have a higher chance of winning the car if you decide to switch doors. Despite receiving numerous objections from readers, including some leading academics, her answer holds up under scrutiny. Here’s why.
When you first choose a door, there is a 1 in 3 chance that it hides the car. This means that there’s a 2 in 3 chance that the car is behind one of the other two doors. Even after the host opens a door to reveal a goat, these probabilities do not change. Monty is simply providing more information about where the car is not.
So, if you stick with your original choice, your chances of winning the car remain at 1 in 3. However, if you switch, your chances increase to 2 in 3. Switching doors effectively allows you to select both of the other doors, doubling your odds of finding the car.
A CLOSER LOOK AT THE PROBABILITIES
Let’s examine the situation more closely to understand how this works.
If the car is behind Door 1, and you choose it and stick with your choice, you win the car. The chance of this happening is 1/3.
If the car is behind Door 2 and you initially choose Door 1, the host will open Door 3 (since it conceals a goat). If you switch to Door 2, you win the car. The chance of this happening is 1/3.
If the car is behind Door 3, and you initially choose Door 1, the host will open Door 2 (since it conceals a goat). If you switch to Door 3, you win the car. The chance of this happening is also 1/3.
From the above, you can see that you have a 2/3 chance of winning if you switch to whichever door Monty has not opened, and a 1/3 chance of winning if you stick to your initial choice.
THE ROLE OF THE HOST
It’s crucial to note that the host’s knowledge and actions play a pivotal role in these probabilities. If the host didn’t know what was behind each door or randomly chose a door to open, then the odds would indeed be 50–50, as he might have inadvertently opened a door to reveal the car. However, because the host always opens a door you didn’t choose and always reveals a goat, the odds shift in favour of switching doors.
To expand upon this, consider a version of the problem with 52 cards. This time, you’re invited to choose one card from a deck of 52. The objective is to select the Ace of Spades from a deck of cards lying face down on the table.
If you initially choose the Ace of Spades and stick with your choice, you win the game. The chance of this happening is 1/52, since there’s only one Ace of Spades in a 52-card deck.
However, if you initially choose any card other than the Ace of Spades (which has a 51/52 chance), the host, knowing where the Ace of Spades is, will begin to turn cards over one at a time, always leaving the Ace of Spades and your initial card choice in the remaining face-down deck. The host will continue to do this until only your card and one other card remain. One of these two cards will be the Ace of Spades.
At this point, there is still a 1/52 chance that your original card is the Ace of Spades. If you switch your choice to the remaining card, the chance that it will be the Ace of Spades is therefore 51/52, which is a much higher probability than if you stick with your initial choice.
This works because the host each time deliberately turns over a card that is not the Ace of Spades. So the other card left face down at the end is either the Ace of Spades, with a chance of 51/52, or else your original choice is the Ace of Spades, with a probability of 1/52.
If the host doesn’t know where the Ace of Spades is located, he might inadvertently turn it over at some point as he works through the deck; and even if, by luck, he does not, his random reveals give you no reason to favour the remaining card over your original choice, so switching would offer no advantage.
This shows how the Monty Hall problem can scale to larger numbers. The initial odds of choosing the Ace of Spades are 1/52, but if you switch your choice after the host takes away all but one of the other cards, your odds improve dramatically to 51/52. This is a counterintuitive result, but it follows from the fact that the host’s actions (because he knows where the Ace of Spades is) give you additional information about where the Ace of Spades is not.
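The scaling argument can be checked with a single sketch that treats the three-door game and the 52-card game as the same problem with a different number of options. The informed host keeps back one option besides yours: the prize itself if you missed it, otherwise a randomly chosen loser. Function and parameter names here are illustrative:

```python
import random

def play(num_options, switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(num_options)
        choice = random.randrange(num_options)
        if choice == prize:
            # You already hold the prize, so the host keeps back a random loser.
            kept = random.choice([d for d in range(num_options) if d != choice])
        else:
            # You missed, so the option the host keeps back must be the prize.
            kept = prize
        final = kept if switch else choice
        wins += (final == prize)
    return wins / trials

if __name__ == "__main__":
    print(f"3 doors  - stick: {play(3, False):.3f}, switch: {play(3, True):.3f}")    # about 1/3 vs 2/3
    print(f"52 cards - stick: {play(52, False):.3f}, switch: {play(52, True):.3f}")  # about 1/52 vs 51/52
```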
OVERCOMING INTUITION WITH LOGIC
The Monty Hall problem can be difficult to grasp because it seems to contradict our intuition. The human brain tends to simplify complex situations, and when there are two unopened doors, it’s easy to fall into the trap of assuming there’s a 50% chance of winning either way. However, the Monty Hall problem highlights how understanding probability requires careful thought and a logical analysis of the situation.
EXPLORING THE MONTY HALL PROBLEM WITH SIMULATIONS
If you’re still having trouble grasping the Monty Hall problem, you might find it helpful to see it in action. Numerous online simulators let you play the Monty Hall game repeatedly, and over time, you’ll see that switching doors indeed wins about 2/3 of the time.
THE MONTY HALL PROBLEM IN POPULAR CULTURE
The Monty Hall problem has seeped into popular culture, appearing in films, television series, and even songs. It serves as a reminder that intuition and probability sometimes have a complicated relationship. The logical and statistical reasoning involved in this puzzle, as well as its seemingly paradoxical result, have made it a favourite topic in probability and statistics classes across the world.
CONCLUSION: PROBABILITY AND INTUITION
The Monty Hall problem is a captivating illustration of how probability can sometimes be counterintuitive. Although it’s been debated, analysed, and confirmed many times over, it continues to intrigue and perplex. It provides a clear lesson: intuition isn’t always reliable when it comes to probability.
Exploring the Martingale Betting Strategy
The Martingale betting strategy is based on the principle of chasing losses through a progressive increase in bet size. To illustrate this strategy, let’s consider an example: a gambler starts with a £2 bet on Heads, with an even-money payout. If the coin lands Heads, the gambler wins £2, and if it lands Tails, they lose £2.
In the event of a loss, the Martingale strategy dictates that the next bet should be doubled (£4). The objective is to recover the previous losses and achieve a net profit equal to the initial stake (£2). This doubling process continues until a win is obtained. For instance, if Tails appears again, resulting in a cumulative loss of £6, the next bet would be £8. If a subsequent Heads occurs, the gambler would win £8, and after subtracting the previous losses (£6), they would be left with a net profit of £2. This pattern can be extended to any number of bets, with the net profit always equal to the initial stake (£2) whenever a win occurs.
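A simple simulation illustrates both the mechanics and the catch. The bankroll, table limit, and number of sessions below are illustrative assumptions, not figures from the text; with a fair coin, the long-run average profit hovers around zero while occasional sessions end in a large loss:

```python
import random

def martingale_session(bankroll=1_000, base_stake=2, table_limit=500):
    stake, profit = base_stake, 0
    while True:
        # Stop if the next bet would breach the table limit or exhaust the bankroll.
        if stake > table_limit or stake > bankroll + profit:
            return profit
        if random.random() < 0.5:        # Heads: win the current stake
            return profit + stake        # net result: +£2, the base stake
        profit -= stake                  # Tails: record the loss and double up
        stake *= 2

if __name__ == "__main__":
    sessions = 100_000
    results = [martingale_session() for _ in range(sessions)]
    print(f"average profit per session ≈ £{sum(results) / sessions:.3f}")   # close to zero
    print(f"worst single session: £{min(results)}")                         # a large negative number
```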
CHASING LOSSES AND THE LIMITATIONS
While the Martingale strategy may appear promising in theory, it is important to recognise its limitations and the inherent risks involved. The strategy involves chasing losses in the hope of recovering them and generating a profit. However, it’s crucial to understand that the expected value of the strategy remains zero or even negative.
The main reason behind this lies in the small probability of incurring a very large loss, together with the odds on offer. In a casino game, the odds are set with an edge against the player. This house edge ensures that, over time, the expected value of the bets is negative. Therefore, even with the Martingale strategy, which aims to recover losses, the expected value of the bets remains unfavourable.
Moreover, in a casino setting, there are structural limitations that impede the effectiveness of the Martingale strategy. Most casinos impose limits on bet size. These limits prevent gamblers from doubling their bets indefinitely, even if they have boundless resources and time, thereby constraining the strategy’s potential for recovery.
THE DEVIL’S SHOOTING ROOM PARADOX
A parallel thought experiment known as the Devil’s Shooting Room Paradox adds an intriguing twist. In this scenario, a group of people enters a room where the Devil threatens to shoot everyone if he rolls a double-six; if he does not, the group goes free and a far larger group is ushered in. The Devil further states that over 90% of all those who enter the room will be shot. Paradoxically, both statements can be true. Although the chance of any particular group being shot is only 1 in 36, each successive group in this thought experiment is over ten times larger than the one before. So whenever the game does end, the final group, which is the one that is shot, contains more than 90% of everyone who has ever entered the room.
Essentially, the Devil’s ability to continually usher in larger groups, each with a small probability of being shot, ultimately results in the majority of all the people entering the room being shot.
A key assumption underlying the Devil’s Shooting Room Paradox is the existence of an infinite supply of people. This assumption aligns with the concept of infinite wealth and resources often associated with Martingale-related paradoxes. Without a boundless supply of individuals to fill the room, the figure of over 90% of entrants being shot cannot be guaranteed.
The Devil’s Shooting Room Paradox serves in this way as another illustration of how probabilities and cumulative effects can lead to counterintuitive outcomes.
CONCLUSION: THE LIMITS OF A MARTINGALE STRATEGY
The Martingale strategy is based on chasing losses, but its expected value remains zero or negative due to the house edge. The strategy’s viability is further diminished by limitations on bet size in real-world casino scenarios. As such, the Martingale system cannot be considered a winning strategy in practical gambling situations. The Devil’s Shooting Room Paradox further demonstrates the complexities and counterintuitive outcomes that can arise when infinite numbers are assumed. Ultimately, a comprehensive understanding of these paradoxes provides valuable insights into the rationality of betting strategies and decision-making in the realm of gambling.
Exploring the Poisson Distribution
A STATISTICAL TOOL
The Poisson distribution, named after the French mathematician Siméon Denis Poisson, is a statistical concept that is particularly useful for helping us understand events that occur infrequently. It tells us how many such events we can expect in a fixed interval of time or space if we know the average rate at which they occur. In simpler terms, if you want to predict how often something will happen over a certain period, and this event is infrequent, the Poisson distribution can be your go-to method for making this prediction.
This distribution finds practical applications in various fields, ranging from studying historical events to analysing everyday situations and even sports.
UNDERLYING ASSUMPTIONS OF THE POISSON DISTRIBUTION
The accuracy and applicability of the Poisson distribution hinge on several key assumptions:
Independence of Events: Each event must occur independently of the others. This means the occurrence of one event does not affect the probability of another event occurring.
Constant Average Rate: The events are expected to occur at a constant average rate. In other words, the average number of events per unit of time or space remains consistent throughout the period being considered.
Random Occurrence: The events occur randomly, without any predictable pattern or structure. This randomness is crucial for the Poisson model to provide accurate predictions.
Discrete Events: The events are distinct and countable. For instance, the number of emails received per day or the number of accidents at a particular intersection per month.
Understanding these assumptions is vital for correctly applying the Poisson distribution. It is most effective in situations where these conditions are met, such as modelling the number of meteor showers observed in a year, counting the number of times a rare bird is spotted in a forest, or predicting the number of cars passing through a toll booth in an hour.
It’s also very useful in predicting how likely you are to be kicked by a horse next week! The next section explains.
PREDICTING RARE EVENTS: PRUSSIAN CAVALRY OFFICER DEATHS
Let’s travel back in time to the 19th century, when the Poisson distribution was used to study a particular historical phenomenon. During this period, researchers were interested in understanding the number of Prussian cavalry officers who were kicked to death by horses in different Army regiments over a span of 20 years. These deaths were relatively rare, but were they random, or were there underlying factors influencing their occurrence?
Enter Ladislaus Bortkiewicz, an economist and statistician. Bortkiewicz collected data from 14 corps over 20 years, yielding a yearly count of deaths for each corps. Using the formula associated with the Poisson distribution, he was able to predict how many corps-years should see zero, one, two, or more such deaths. These predictions fitted the observed data quite closely, indicating that the deaths were indeed random events, and nothing more mysterious or sinister.
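To give a flavour of the calculation, the sketch below computes the expected number of corps-years with 0, 1, 2, ... deaths under the Poisson model. The rate of 0.7 deaths per corps per year used here is an illustrative assumption, not Bortkiewicz’s published figure; comparing a table like this with the observed counts is how the fit was judged:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # Probability of exactly k events when the average rate is lam.
    return exp(-lam) * lam**k / factorial(k)

corps_years = 14 * 20   # 14 corps observed over 20 years
rate = 0.7              # assumed average deaths per corps per year (illustrative)

for k in range(5):
    expected = corps_years * poisson_pmf(k, rate)
    print(f"{k} deaths: expected in roughly {expected:.1f} corps-years")
```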
This application of the Poisson distribution became a textbook example of real-world events that can be modelled as Poisson processes, which include radioactive decay, arrival of emails, number of phone calls received by a call centre, etc. The deaths of Prussian cavalry officers are an early example of a statistical study in the field of survival analysis.
WORLD WAR II BOMBING RAIDS
During the Second World War, a British statistician named R.D. Clarke used this method to study where the new V-1 ‘flying bombs’ were falling in London. He wanted to figure out if the German military was successfully targeting specific areas or if the bombs were falling randomly. This was strategically important information. It was clear that the V-1s sometimes fell in clusters. The question was whether this could be expected from random chance or whether precision guidance was at play.
To find out, Clarke divided London into thousands of small, equal-sized areas. He assumed, to start with, that each area had the same small chance of being hit by a bomb. This situation was similar to playing a game many times where you ‘win’ only infrequently. Clarke’s calculations showed that the number of bomb hits in each area matched what the Poisson distribution predicted for random hits. This meant that where the bombs fell seemed to be a product of chance, not because specific areas were targeted.
FROM HISTORY TO FOOTBALL: PREDICTING GOAL SCORING
In football, goals are a relatively infrequent event within the setting of a match, and so are suitable for the application of the Poisson distribution. This provides a simple and effective tool to examine and predict the likely incidence of goals in a match, based on historical data and average goal rates.
Consider, say, a match between two teams, one with an average goal rate of 1.6 goals per game and the other with an average goal rate of 1.2 goals per game. The Poisson distribution allows us to calculate the probabilities of various goal-scoring outcomes for this specific match.
For example, by examining the historical data and applying the Poisson distribution, analysts can estimate the probability of a goalless draw, a 1-1 draw, a win for either team, or any other scoreline based on the average goal rates of the teams involved.
More generally, the Poisson formula allows us to calculate the chance of observing a specific number of events of this kind when we know how often they usually occur on average. It considers the average rate and calculates the probability of obtaining the specific number we’re interested in.
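As a sketch of how such a calculation might look in practice, the snippet below uses the article’s average rates of 1.6 and 1.2 goals per game and assumes, as the simple Poisson model does, that the two teams’ scores are independent:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # Poisson probability of exactly k goals when the average rate is lam.
    return exp(-lam) * lam**k / factorial(k)

home_rate, away_rate = 1.6, 1.2

p_goalless = poisson_pmf(0, home_rate) * poisson_pmf(0, away_rate)
p_one_all  = poisson_pmf(1, home_rate) * poisson_pmf(1, away_rate)
p_home_win = sum(poisson_pmf(h, home_rate) * poisson_pmf(a, away_rate)
                 for h in range(10) for a in range(10) if h > a)   # truncated at 9 goals each

print(f"P(0-0 draw)  ≈ {p_goalless:.3f}")
print(f"P(1-1 draw)  ≈ {p_one_all:.3f}")
print(f"P(home win)  ≈ {p_home_win:.3f}")
```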
REAL-WORLD APPLICATIONS
The practical applications of the Poisson distribution extend far beyond historical events and sports analytics. This versatile statistical concept finds relevance in a wide range of modern real-world scenarios, helping us understand and analyse various phenomena. Let’s explore some of its notable applications.
Homes Sold and Business Planning
Imagine you are a local estate agent. Understanding the number of homes you are likely to sell in a given time period is crucial for business planning and forecasting. The Poisson distribution provides a framework for estimating the probability of selling a specific number of homes per day, week, or any other timeframe based on historical data and average sales rates. This information helps in making informed decisions about marketing strategies, staffing, and resource allocation.
Disease Spread and Epidemiology
In the field of epidemiology, the Poisson distribution plays a vital role in understanding the spread of infectious diseases. By analysing historical data and considering the average rate of infection, researchers can utilise the Poisson distribution to estimate the likelihood of disease outbreaks and their progression.
Telecommunications and Network Traffic
The Poisson distribution finds application in the analysis of telecommunications systems and network traffic. By studying the arrival patterns of events such as calls, messages, or data packets using the Poisson distribution, companies can anticipate network demand, allocate resources effectively, and ensure smooth and reliable communication services.
Quality Control and Manufacturing Processes
The Poisson distribution is also used in quality control, particularly in manufacturing settings. By analysing the number of defective products using the Poisson distribution, manufacturers can estimate the probability of observing a specific number of defects. This information helps them identify areas for improvement and enhance overall product quality.
Traffic Accidents and Road Safety
Another area where the Poisson distribution finds application is in analysing traffic accidents and road safety. By examining historical data on accidents, researchers can use the Poisson distribution to model accident rates based on factors such as location, time of day, and road conditions. This understanding helps in the development of targeted interventions to reduce accidents and improve road safety.
CONCLUSION: A POWERFUL TOOL FOR INFREQUENT EVENTS
The Poisson distribution is a valuable statistical tool that helps us understand and analyse events that happen infrequently but have an average rate of occurrence. It may seem complicated at first, but it allows us to make predictions and informed decisions based on probabilities. By using the principles of the Poisson distribution, we can gain insights into rare events and use that knowledge to improve various aspects of our lives.
Exploring Games of Chance
A version of this article appears in TWISTED LOGIC: Puzzles, Paradoxes, and Big Questions. By Leighton Vaughan Williams. Chapman & Hall/CRC Press. 2024.
UNDERSTANDING THE CHEVALIER’S DICE PROBLEM
Probability is the science of uncertainty, providing a way to measure the likelihood of events occurring. It can be viewed as a measure of relative frequency or as a degree of belief. In the context of gambling, understanding probability is crucial for making informed decisions and avoiding common pitfalls.
A famous problem, known as the Chevalier’s Dice Problem, sheds light on some of the intricacies of probability.
To understand the problem, it is essential to grasp some fundamental concepts of probability. Consider a single die roll—each outcome represents a possible event, such as rolling a 1, 2, 3, 4, 5, or 6. When rolling two dice, there are 36 possible outcomes (six outcomes for the first die multiplied by six outcomes for the second die).
THE FLAWED REASONING OF THE CHEVALIER
The Chevalier’s Dice Problem originated from a gambling challenge offered by the Chevalier de Méré, a 17th-century French gambler. The Chevalier offered even money odds that he could roll at least one six in four rolls of a fair die.
The Chevalier’s reasoning was based on the assumption that since the chance of rolling a six in a single die roll is 1/6, the probability of rolling a six in four rolls would be 4/6 or 2/3. However, this reasoning breaks down when extrapolated to more rolls: by the same logic, six rolls would make a six certain, and seven rolls would give a ‘probability’ greater than 1, which is impossible.
The correct approach involves considering the independent nature of each throw of the die. The probability of a six in one go is 1/6, so the probability of not getting a six on that go is 5/6. To calculate the probability of not rolling a six in four throws, we multiply the probabilities: (5/6) × (5/6) × (5/6) × (5/6) = 625/1296.
Therefore, the probability of at least one six in four attempts is obtained by subtracting the probability of not rolling a six in any of those four attempts from 1: 1 − (625/1,296) = 671/1,296 ≈ 0.5177, which is greater than 0.5.
Despite his faulty reasoning, the Chevalier still had an edge in this game by offering even money odds on an event with a probability of 51.77%.
THE CHEVALIER’S MISSTEP WITH THE MODIFIED GAME
Encouraged by his initial success, the Chevalier expanded the game to 24 rolls of a pair of dice, betting on the occurrence of at least one double-six. His reasoning followed the same flawed pattern: since the chance of rolling a double-six with two dice is 1/36, he believed the probability of at least one double-six in 24 rolls would be 24/36 or 2/3.
The correct probability calculation involved considering the independent nature of each dice roll. The probability of no double-six in one roll is 35/36. Therefore, the probability of no double-six in 24 rolls is (35/36) raised to the power of 24, which is approximately 0.5086.
Subtracting this value from 1 yields the probability of at least one double-six in 24 rolls: 1 − 0.5086 = 0.4914, which is less than 0.5. Hence, the Chevalier’s edge in this modified game was negative: 49.14% − 50.86% = −1.72%.
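Both of the Chevalier’s bets can be checked with a few lines of Python. The probabilities and numbers of rolls are taken from the text above; the even-money edge is simply the chance of winning minus the chance of losing.

def p_at_least_one(p_single, n_trials):
    # P(at least one success) = 1 - P(no success in any of the trials)
    return 1 - (1 - p_single) ** n_trials

# Bet 1: at least one six in four rolls of a single die.
p1 = p_at_least_one(1 / 6, 4)
print(f"At least one six in 4 rolls:         {p1:.4f}")    # about 0.5177

# Bet 2: at least one double-six in 24 rolls of a pair of dice.
p2 = p_at_least_one(1 / 36, 24)
print(f"At least one double-six in 24 rolls: {p2:.4f}")    # about 0.4914

# Edge at even money: win probability minus loss probability.
print(f"Edge on bet 1: {2 * p1 - 1:+.4f}")   # roughly +0.0355, or +3.55%
print(f"Edge on bet 2: {2 * p2 - 1:+.4f}")   # roughly -0.0172, or -1.72%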
This outcome demonstrated that even if the odds seem favourable, incorrect reasoning can lead to erroneous conclusions. The Chevalier’s faulty understanding of probability caused him to lose over time.
THE IMPORTANCE OF CORRECT PROBABILITY CALCULATION
These examples underscore the critical nature of accurate probability calculations in games of chance. While intuitive reasoning may seem convincing, it often leads to incorrect conclusions, as demonstrated by the Chevalier’s bets. Understanding the true probability of events is essential for informed decision-making in gambling and many other contexts where risk and uncertainty play significant roles.
THE GAMBLER’S RUIN AND UNDERSTANDING FINITE EDGES
The Gambler’s Ruin problem raises the complementary question of whether, in a gambling game, a player will eventually go bankrupt if playing for an extended period against an opponent with infinite funds, even if the player has an edge.
For instance, imagine a fair game where you and your opponent flip a coin, and the loser pays the winner £1. If you start with £20 and your opponent has £40, the probabilities of you and your opponent ending up with all the money can be calculated using the following formulas:
P1 = n1/(n1 + n2); P2 = n2/(n1 + n2)
Here, n1 represents the initial amount of money for player 1 (you) and n2 represents the initial amount for player 2 (your opponent). In this case, you have a 1/3 chance of winning the £60 (20/60), while your opponent has a 2/3 chance. However, even if you win this game, playing it repeatedly against various opponents or the same one with borrowed money will eventually lead to the loss of your betting bank. This holds true even when the odds are in your favour. This is an important lesson in risk management, emphasising the importance of not only the odds but also the size of one’s bankroll relative to the stake sizes.
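A quick simulation offers one way to check the 1/3 figure. This sketch assumes the £1-a-flip fair game described above and simply counts how often the player starting with £20 ends up with all £60; the number of simulated games is an arbitrary choice.

import random

def ruin_probability(n1=20, n2=40, games=10_000):
    # Estimate how often player 1 wins everything in a fair £1-a-flip game.
    wins = 0
    total = n1 + n2
    for _ in range(games):
        bank = n1
        while 0 < bank < total:
            bank += 1 if random.random() < 0.5 else -1
        if bank == total:
            wins += 1
    return wins / games

print(ruin_probability())   # should come out close to 20/60, i.e. about 0.33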
The Gambler’s Ruin problem, as explored by Blaise Pascal, Pierre Fermat, and later mathematicians like Jacob Bernoulli, reveals the inherent risks of prolonged gambling, even with favourable odds.
PILOT ERROR: MISUNDERSTANDING CUMULATIVE PROBABILITY
In Len Deighton’s novel ‘Bomber’, a statistical claim suggests that a World War II pilot with a 2% chance of being shot down on each mission is ‘mathematically certain’ to be shot down after 50 missions. This assertion is a classic example of misinterpreting cumulative probability. In reality, if a pilot has a 98% chance of surviving each mission, their probability of not being shot down after 50 missions is 0.98 to the power of 50 (0.98^50), which is approximately 0.36, or 36%. Thus, their chance of being shot down over these 50 missions is 64% (1 − 0.36), not 100%.
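The arithmetic can be confirmed in a couple of lines:

p_survive_one_mission = 0.98
p_survive_all_50 = p_survive_one_mission ** 50
print(f"Survives all 50 missions: {p_survive_all_50:.2f}")      # about 0.36
print(f"Shot down at some point:  {1 - p_survive_all_50:.2f}")  # about 0.64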
SURVIVORSHIP BIAS: THE CASE OF BULLET-RIDDEN PLANES
The concept of survivorship bias is vividly illustrated in the case of analysing planes returning from missions during World War II. Upon examining these planes for bullet holes, it was observed that most hits were on the wings, tail, and the body of the plane, with few on the engine. The initial, intuitive response might be to reinforce the areas with the most bullet holes. However, this would be a misinterpretation of the data.
The key realisation, identified by statistician Abraham Wald, was that the planes being analysed were those that survived and returned to base. The areas with fewer bullet holes, such as the engines, were likely critical to survival. Planes hit in these areas probably didn’t make it back, hence the lack of data for these hits. This understanding exemplifies survivorship bias—focusing on survivors (or what’s visible) can lead to incorrect conclusions about the whole population.
Wald’s insight led to the reinforcement of seemingly less-hit areas like engines, contributing significantly to the survival of many pilots. His work in operational research during the war provided a critical perspective on interpreting data and making decisions under uncertainty.
CONCLUSION: DICE, ODDS, AND RUIN
The Chevalier’s Dice Problem illustrates the importance of understanding probability in gambling scenarios. Probability theory, developed in part through the famed correspondence between Pascal and Fermat, underpins modern concepts of chance and our understanding of the risks involved in gambling.
The Gambler’s Ruin is a kind of warning from the world of probability, telling us that in gambling, a slight edge is no guarantee of success. Imagine two gamblers, one with an edge over the other but with much less money to play with. Even if the first player is more likely to win each round, their thinner wallet means they could run out of money after a few bad games. In contrast, the player with the deep pockets can keep playing longer, until (given enough money) luck swings their way. This underlines the importance and impact of losing streaks in games of chance.
The wartime examples highlight the real-world importance of understanding probability and statistical concepts accurately. They serve as a reminder that intuition can often lead us astray. Correctly interpreting data, especially in high-stakes situations, can have life-saving implications.
