Can prediction markets help find the missing MH370?
Confronting uncertainty.
For more than a decade, Malaysia Airlines Flight MH370 has occupied a strange place in the modern imagination: a wound that has never closed. A plane with 239 people on board does not simply vanish in the 21st century, or at least it shouldn’t. And yet it did!
Now, once again, the southern Indian Ocean is being searched.
A quiet, highly technical operation is underway, led by Ocean Infinity, under a renewed “no-find, no-fee” agreement with the Malaysian government. Its vessel, Armada 86 05, is deploying autonomous underwater vehicles capable of descending nearly 20,000 feet, scanning the seabed with sonar, magnetometers, and high-resolution 3D mapping. The target zone, around 5,800 square miles, has been refined using years of accumulated analysis.
There are no dramatic press conferences this time, no daily briefings. Just machines slipping silently into black water, searching terrain no human will ever see.
What we know, and what we still don’t
MH370 disappeared on 8 March 2014, around forty minutes after take-off from Kuala Lumpur bound for Beijing. Military radar later showed the Boeing 777 deviating sharply from its planned route, and satellite data indicated that it then flew south for hours into one of the most remote regions on Earth. That data confirmed continued flight, but not where it ended, or why.
The largest multinational search in aviation history followed, at enormous cost. It failed to locate the main wreckage or flight recorders. And yet the evidence is no longer a blank page.
A flaperon, part of a wing control surface, was recovered on Réunion Island in 2015 and identified by investigators as almost certainly originating from MH370. Additional fragments, judged “very likely” to be from the aircraft, later washed up along the East African coast and Indian Ocean islands. Oceanographers refined drift models. Satellite analysts revisited the data again and again. The picture narrowed, even if it never snapped fully into focus.
This is where MH370 now sits: not in a fog of ignorance, but in a haze of probability.
Why this search feels different
Ocean Infinity has been here before. A 2018 seabed search came up empty. An earlier phase of this renewed effort was paused due to weather. Scepticism is not only understandable; it is rational. What has changed is not just technology, but synthesis.
The current search area reflects years of accumulated judgment across disciplines: aviation, satellite communications, oceanography, wreck recovery. It is, in effect, the best collective guess we can now make about where the aircraft lies.
And that brings me back to an idea I first explored in this context nearly ten years ago.
The problem of the “lone expert”
We like to imagine breakthroughs coming from a single decisive insight: the brilliant analyst, the overlooked data point, the final piece of the puzzle. But MH370 resists that narrative. No single expert, model, or dataset has been enough.
In problems like this, where uncertainty is vast and information fragmented, history suggests a different approach can work better: aggregating judgment.
In 1968, when the US Navy submarine USS Scorpion was lost, the search area was overwhelming. Instead of relying on one authoritative theory, experts were asked to make independent probabilistic assessments. When those assessments were combined, the wreck was found within a few hundred metres of the predicted location. The lesson is not mystical. It is practical. Groups of informed people, when aggregated properly, can outperform even the best individual experts.
In a sense, Ocean Infinity’s search already embodies this idea. But it does so informally, behind closed doors. A structured mechanism, such as a carefully designed prediction market restricted to qualified experts, can help surface neglected hypotheses, test assumptions, and dynamically re-weight search priorities as new information emerges.
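To make the idea concrete, here is a minimal sketch of what pooling expert judgment and updating it can look like in a search context. The cell labels, expert weightings, and detection probability below are purely illustrative assumptions, not actual MH370 estimates; the update rule is the standard Bayesian-search step of the kind used in efforts like the Scorpion search.

```python
# Illustrative sketch: pooling expert probability maps over search cells,
# then re-weighting after a cell is searched without success.
# All numbers are made up for illustration.

def pool_experts(expert_maps):
    """Average several experts' probability maps and renormalise."""
    cells = expert_maps[0].keys()
    pooled = {c: sum(m[c] for m in expert_maps) / len(expert_maps) for c in cells}
    total = sum(pooled.values())
    return {c: p / total for c, p in pooled.items()}

def update_after_miss(prior, searched_cell, detection_prob=0.9):
    """Bayesian update when a cell is searched and nothing is found."""
    posterior = {c: p * ((1 - detection_prob) if c == searched_cell else 1.0)
                 for c, p in prior.items()}
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items()}

experts = [
    {"A": 0.5, "B": 0.3, "C": 0.2},   # e.g. a drift-model analyst
    {"A": 0.2, "B": 0.5, "C": 0.3},   # e.g. a satellite-data analyst
    {"A": 0.3, "B": 0.3, "C": 0.4},   # e.g. a wreck-recovery specialist
]
pooled = pool_experts(experts)
print(pooled)                          # the combined "best guess"
print(update_after_miss(pooled, "A"))  # re-weighted after cell A comes up empty
```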
This is not about “betting” in some probability exercise on tragedy. It is about recognising that uncertainty itself can be measured, and that human judgment, when pooled intelligently, is a tool rather than a weakness. The wisdom of the crowd is often greater than even its strongest member.
Confronting uncertainty
For the families and friends of the 239 people on board, this search is about being able to say, finally, this is where they are. It is about burial, mourning, and the end of limbo. For the rest of us, MH370 is a reminder of something deeply unsettling: that even in an age of satellites, big data, and constant connectivity, parts of the world, and parts of our systems, remain frighteningly opaque.
If Ocean Infinity succeeds, it will be a triumph of persistence and engineering. If it fails, the story should not end in resignation. The question then becomes not “why didn’t we look harder?”, but “did we think hard enough about how we look at all?”
The challenge is a great one – it is about how we should confront uncertainty, share knowledge, and search together when no one has the full answer.
And that question, unlike the aircraft, has never really disappeared.
Masterpiece or Miracle?
The Shroud of Turin is perhaps the only object on Earth that can make an avuncular nuclear physicist and a mild-mannered medieval historian lose their respective tempers at the same formal dress dinner party.
At first glance, it hardly looks like much: a long strip of ancient linen, faintly marked with the shadowy image of a man who appears to have been crucified. And yet this quiet, unassuming cloth has been examined, tested, photographed, scanned, and debated more than any other artefact in history. As we approach 2026, we are arguably no closer to consensus than when it was first photographed in 1898. In some regards, we’re further away.
That’s because the Shroud refuses to sit still. Just when one side thinks it has won the argument, new evidence seems to pop up.
The straightforward case: A medieval masterpiece
If you’re sceptically inclined, the Shroud looks like a solved problem.
In 1988, three independent laboratories, in Oxford, Zurich, and Arizona, used radiocarbon dating to analyse samples from the cloth. All three placed it firmly in the medieval period, between 1260 and 1390 AD. That date lines up almost perfectly with the Shroud’s first clear appearance in the historical record, in 1354, in the French village of Lirey.
Even contemporaries were suspicious. The local bishop, Pierre d’Arcis, wrote to the Pope claiming the Shroud was a fake and that an artist had confessed to producing it. Medieval Europe, after all, was awash with relics: splinters of the True Cross, drops of holy blood, and saints’ bones by the cartload. Pilgrims meant money, and holy objects meant pilgrims.
Seen this way, the Shroud becomes an extraordinary but entirely human achievement: a brilliantly executed forgery from a period when forgery flourished. Apply the redoubtable Occam’s Razor, and the simplest explanation seems obvious. No miracle required, just a once-in-a-millennium artist.
And yet… the cloth itself won’t cooperate
The trouble is that when scientists actually look at how the image is formed, the neat medieval explanation begins to feel less neat.
In 1978, a team of American scientists was given unprecedented access to the cloth. They arrived expecting to find paint, pigment, or dye. What they found instead was baffling:
• The Photographic Negative: When Secondo Pia photographed the Shroud in 1898, he realised the image works like a negative; the light and dark values are reversed. Only when flipped does it look like a realistic human face. Medieval artists were talented, but deliberately painting a negative image centuries before the invention of photography?
• The Three-Dimensionality: The intensity of the image corresponds precisely to how far the cloth would have been from a body beneath it. Feed the image into a NASA VP-8 image analyser and, unlike any 2D painting or photograph, it produces a coherent 3D topographical map of a human form.
• The Microscopic Surface: The discolouration sits only on the outermost surface of the linen fibres, about 200 nanometres thick. It doesn’t bleed through the cloth. Modern laboratories struggle to reproduce this effect even with lasers. A medieval brush simply shouldn’t be capable of it.
At this point, even hardened sceptics tend to pause. But what about the 1988 studies?
The 2024 plot twist
For decades, the 1988 carbon dating has been the sceptic’s knockout punch. But in 2024, a new study reopened the fight.
Italian researcher Liberato De Caro used Wide-Angle X-ray Scattering (WAXS) to analyse how the linen’s cellulose structure has degraded over time. The results were startling: structurally, the Shroud’s fibres look almost identical to first-century linen recovered from the siege of Masada (c. 55–74 AD).
If this method holds up, it lends significant weight to the “Medieval Repair Hypothesis”. This theory suggests the 1988 samples, taken from a corner that had been heavily handled, burned in a 16th-century fire, and repaired by nuns, were not representative of the cloth as a whole. The date might be correct for the sample, but wrong for the Shroud.
While WAXS is a relatively new technique in the world of archaeology, it has turned what was once “settled science” back into a live debate.
Two explanations, both extraordinary
And so we arrive at an uncomfortable fork in the road.
Option A: The Natural Miracle. The Shroud is the product of a medieval mind with an understanding of anatomy, chemistry, optics, and image encoding that apparently wouldn’t be rediscovered for another five centuries.
Option B: The Supernatural Miracle. It is the physical trace of an unknown energetic event, something closer to a burst of radiation than a brushstroke, leaving behind a microscopic imprint of a body at the moment of resurrection.
Neither option is tidy. Both stretch the sceptical mind in different ways.
Why the Shroud won’t let us go
In the end, the Shroud of Turin may tell us more about ourselves than about the past. For some, it confirms a suspicion that faith manufactures its own evidence. For others, it offers a rare and unsettling hint that reality might be stranger than our models allow.
What seems clear is this: the Shroud does not behave like a painting, a photograph, or any known medieval artefact. It sits stubbornly at the boundary between belief and scepticism, history and physics, the proverbial ghost at the senior common room dinner table.
And perhaps that’s why, as we leave behind the year 2025, it still refuses to be neatly folded away.
When Should We Trust a Loved One? Exploring a Shakespearean Tragedy
OTHELLO: THE BACKGROUND
William Shakespeare’s ‘Othello’ is a play centred on four main characters: Othello, a general in the Venetian army; his devoted wife, Desdemona; his trusted lieutenant, Cassio; and his manipulative ensign, Iago. Iago’s plan forms the central conflict of the play. Driven by jealousy and a large helping of evil, Iago seeks to convince Othello that Desdemona is conducting a secret affair with Cassio. His strategy hinges on a treasured keepsake, a precious handkerchief which Desdemona received as a gift from Othello. Iago conspires successfully to plant this keepsake in Cassio’s lodgings so that Othello will later find it.
UNDERSTANDING OTHELLO’S MINDSET
Othello’s reaction to this discovery can potentially take different paths, depending on his character and mindset. If Othello refuses to entertain any possibility that Desdemona is being unfaithful to him, then no amount of evidence could ever change that belief.
On the other hand, Othello might accept that there is a possibility, however small, that Desdemona is being unfaithful to him. This would mean that there might be some level of evidence, however overwhelming it may need to be, that could undermine his faith in Desdemona’s loyalty.
There is, however, another path that Othello could take, which is to evaluate the circumstances objectively and analytically, weighing the evidence. But this balanced approach also has its pitfalls. A very simple starting point would be to assume that the likelihood of her guilt is equal to the likelihood of her innocence, assigning an implicit 50% chance that Desdemona has been unfaithful. This is known as the ‘Prior Indifference Fallacy’: a prior probability of 50% needs to be established by something better than the mere fact that there are two possibilities (guilty or innocent), to which we then ascribe automatic equal weight. If Othello falls into this trap, any evidence against Desdemona starts to become very damning.
THE LOGICAL CONTRADICTION APPROACH
An alternative approach would be to seek evidence that directly contradicts the hypothesis of Desdemona’s guilt. If Othello could find proof that logically undermines the idea of her infidelity, he would have a solid base to stand on. However, there is no such clear-cut evidence, leading Othello deeper into a mindset of anger and suspicion.
BAYES’ THEOREM TO THE RESCUE
Othello might seek a strategy that allows him to combine his subjective belief with the new evidence to form a rational judgement. This is where Bayes’ theorem comes in. Bayes’ theorem allows, as we have seen in previous chapters, for the updating of probabilities based on observed evidence. The theorem can be expressed in the following formula:
Updated probability = ab/[ab + c (1 − a)]
In this formula, a is the prior probability, representing the likelihood that a hypothesis is true before encountering new evidence. b is the conditional probability, describing the likelihood of observing the new evidence if the hypothesis is true. And finally, c is the probability of observing the new evidence if the hypothesis is false. In this case, the evidence is the keepsake in Cassio’s lodgings, and the hypothesis is that Desdemona is being unfaithful to Othello.
APPLYING BAYES’ THEOREM TO OTHELLO’S DILEMMA
Now, before he discovers the keepsake (new evidence), suppose Othello perceives a 4% chance of Desdemona’s infidelity (a = 0.04). This represents his prior belief, based on his understanding of Desdemona’s character and their relationship. Of course, he is not literally assigning percentages, but he is doing so implicitly, and here we are simply making these explicit to show what might be happening within a Bayesian framework.
Next, consider the probability of finding the keepsake in Cassio’s room if Desdemona is indeed having an affair. Let’s assume that Othello considers there is a 50% chance of this being the case (b = 0.5).
Finally, what is the chance of finding the keepsake in Cassio’s room if Desdemona is innocent? This would in Othello’s mind require an unlikely series of events, such as the handkerchief being stolen or misplaced, and then ending up in Cassio’s possession. Let’s say he assigns this a low probability of just 5% (c = 0.05).
BAYESIAN PROBABILITIES: WEIGHING THE EVIDENCE
Feeding these values into Bayes’ equation, we can calculate the updated (or posterior) probability of Desdemona’s guilt in Othello’s eyes, given the discovery of the keepsake. The resulting probability comes out to be 0.294 or 29.4%. This suggests that, after considering the new evidence, Othello might reasonably believe that there is nearly a 30% chance that Desdemona is being unfaithful.
IAGO’S MANIPULATION OF PROBABILITIES
This 30% likelihood might not be high enough for Iago’s deceitful purposes. To enhance his plot, Iago needs to convince Othello to revise his estimate of c downwards, arguing that the keepsake’s presence in Cassio’s room is a near-certain indication of guilt. If Othello lowers his estimate of c from 0.05 to 0.01, the revised Bayesian probability shoots up to 67.6%. This change dramatically amplifies the perceived impact of the evidence, making Desdemona’s guilt appear significantly more probable.
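A few lines of code reproduce both calculations from the formula above; the numbers are the illustrative probabilities ascribed to Othello in this chapter.

```python
def posterior(a, b, c):
    """Bayes' theorem as given above: ab / [ab + c(1 - a)]."""
    return (a * b) / (a * b + c * (1 - a))

# Othello's implicit estimates before Iago's manipulation
print(round(posterior(a=0.04, b=0.5, c=0.05), 3))   # 0.294, roughly 29%

# After Iago persuades him to lower c from 0.05 to 0.01
print(round(posterior(a=0.04, b=0.5, c=0.01), 3))   # 0.676, roughly 68%
```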
DESDEMONA’S DEFENCE STRATEGY
On the other hand, Desdemona’s strategy for defending herself could be to challenge Othello’s assumption about b. She could argue that it would be illogical for her to risk the discovery of the keepsake if she were truly having an affair with Cassio. By reducing Othello’s estimate of b, she can turn the tables and make the presence of the keepsake testimony to her innocence rather than guilt.
CONCLUSION: THE TIMELESS BAYESIAN
Shakespeare’s ‘Othello’ was written about a century before Thomas Bayes was born. Yet the complex interplay of trust, deception, and evidence in the tragedy presents a classic case study in Bayesian reasoning.
Shakespeare was inherently Bayesian in his thinking. The tragedy of the play is that Othello was not!
In a Nutshell
Atheism is often advanced as the intellectually cautious position, promising fewer commitments and no metaphysical extravagance. It’s just matter, laws, and chance. That restraint can feel like a virtue.
But caution cuts both ways. When we step back and ask which worldview best explains the world we actually observe, atheism turns out not to be modest at all. It repeatedly asks us to accept extraordinary coincidences across independent domains of reality, encompassing physics, consciousness, knowledge, and morality, while ruling out the one kind of unifying explanation that would make these features far less surprising.
A note on method
I am not in this nutshell proposing an argument for absolute certainty, nor an attempt to “prove” God from a single premise. Rather, this is an exercise in explanatory comparison.
The question is not, therefore, whether atheism is possible, but whether it is the most plausible account of the world we find ourselves in. Which hypothesis best explains the total evidence with the fewest unexplained coincidences? It is a question we routinely ask in science, history, and everyday reasoning.
1. A universe balanced on a knife-edge
Modern physics has revealed that the universe is balanced on a razor’s edge. Many fundamental constants, such as the strength of gravity, the vacuum energy that drives cosmic expansion, and the ratios among the fundamental forces, must lie within extraordinarily narrow ranges for stars, chemistry, and life to exist at all. Small deviations would not merely produce a different kind of life; they would eliminate complexity entirely.
Perhaps more striking still, the universe is not only life-permitting but intelligible. It behaves according to stable, elegant mathematical structures, precisely the conditions required for minds capable of understanding it.
Atheism typically responds with “brute luck” or appeals to a speculative multiverse. Invoking luck at this scale functions less as an explanation than as a placeholder. Multiverse proposals, meanwhile, tend to shift the problem upward: why should a universe-generator exist that is itself so delicately configured as to produce even one intelligible, life-friendly world?
Theism offers a simpler expectation. If reality is grounded in a rational mind, a law-governed, life-friendly universe is exactly what we should expect to find. Every worldview will face some brute facts, but the claim here is comparative: theism leaves fewer and less arbitrary ones than atheism.
2. The harmony of mind and world
We are conscious. That alone is a deep mystery. But the more striking fact is how well our minds work.
- Our intentions reliably guide our actions.
- Our perceptions generally track reality.
- Our abstract reasoning uncovers deep truths about a physical world billions of light-years away.
Evolution can explain why certain behaviours aid survival. It is much harder, on a purely unguided picture, to see why our cognition should be so broadly truth-tracking, extending far beyond survival needs into higher mathematics, theoretical physics, and objective ethics. Naturalistic accounts exist, of course, but they tend to treat this expansive reliability as an unexpected bonus rather than something to be anticipated.
If atheism is true, the harmony between the “logic” of the stars and the “logic” of our minds is a colossal stroke of luck. If theism is true, it looks intentional.
3. The crisis of reason
Atheism inherits a quiet but serious epistemic problem. If our cognitive faculties are merely the unintended by-products of survival-driven processes, why should we trust them to deliver truth rather than merely useful delusions?
Even if evolution yields some degree of reliability, the more tightly our minds are tuned to reproductive success alone, the more puzzling it becomes that they also seem fitted for grasping deep, abstract truths about logic, mathematics, and metaphysics. If a worldview makes it plausible that our reasoning is unreliable in principle, then confidence in science and philosophy becomes precarious.
Theism offers a more stable foundation. If reality is grounded in a rational Creator, it is reasonable to expect that our faculties are generally fit for truth, even if imperfectly so.
4. Truths that aren’t negotiable
Most of us treat certain truths as objective and binding. Mathematical truths are not social conventions; moral truths are not merely tribal habits.
Many atheistic accounts can explain why we feel bound by morality, as a product of biology or social evolution. What they struggle to explain is why there really are stance-independent moral truths at all, or why a universe composed solely of particles in motion should contain genuine “oughts” rather than merely ingrained preferences.
On theism, moral and mathematical truths reflect the rational and moral structure of the Mind behind the universe. They are not accidents; they are foundational.
5. A world saturated with value
There are countless ways reality could have been devoid of value:
- nothingness,
- sterile laws incapable of complexity,
- life without consciousness,
- minds without the capacity for love or beauty.
Yet we inhabit a world saturated with meaning and moral seriousness. It is deeply flawed and often painful, yes, but unmistakably value-laden. Atheism typically treats this as a fortunate but ultimately inexplicable outcome of blind processes. Theism treats it as the point.
6. Why Christianity?
General theism points to a Mind; Christianity points to a Face.
Christianity makes the striking claim that God’s nature is revealed in history through the life, death, and resurrection of Jesus. What is immediately noticeable is how counter-intuitive this story is.
- A crucified Messiah was a scandal within Second Temple Judaism.
- A God revealed through weakness and self-giving love was an absurdity in Roman power-culture.
This is not the kind of narrative one invents to gain influence. At the centre of the faith stands a historical claim: the Resurrection. Once theism is taken seriously as a framework, the Resurrection becomes a historical question: what best explains the sudden transformation of the disciples and the birth of a movement grounded in a “victory” achieved through execution?
7. The shape of divine goodness
The Cross gives Christianity its philosophical depth. If God is perfectly good, how would divine love confront a world of guilt and suffering? Not through detached judgment, but through solidarity.
In the Cross, power is redefined as love willing to suffer for the sake of the beloved. It does not deny suffering; it insists that suffering is not final.
Why I am not an atheist: in a nutshell
In a few words, I am not an atheist because atheism asks me to believe that:
- a finely tuned, intelligible universe exists for no reason;
- consciousness and truth-seeking minds emerged by a fluke;
- objective moral and mathematical truths are binding but ultimately groundless;
- value pervades reality ultimately without foundation.
Theism does not answer every question; no worldview does. But it explains so much more with fewer and less arbitrary brute facts.
This is not a rejection of reason in favour of faith. It is an appeal to reason in its fullest sense: the search for the best explanation of reality as a whole.
And perhaps the most remarkable fact of all is not that we ask these questions, but that we exist in a universe intelligible enough, and meaningful enough, to ask them at all.
Lessons from a Beauty Contest
A version of this article appears in my book, Twisted Logic: Puzzles, Paradoxes, and Big Questions (Chapman and Hall/CRC Press, 2024).
THE NUMBER DILEMMA
In the Number Dilemma, participants must choose a whole number between 0 and 100, aiming to get closest to two-thirds of the average number chosen by all participants. This scenario tests not only numerical reasoning but also understanding of human behaviour.
LEVEL 1 RATIONALITY: CHALLENGING THE AVERAGE
If you were to assume that the other participants would choose a random number within the given range, the average number chosen by everyone would be 50. Under this assumption, you might believe that choosing 33, the nearest integer to two-thirds of 50, would provide a high probability of winning. This initial strategy, known as Level 1 rationality, might appear intuitively logical.
LEVEL 2 RATIONALITY: ANTICIPATING THE AVERAGE OF THE AVERAGE
However, upon closer inspection, a new insight emerges. Since you reasoned that choosing 33 was a smart move, it is reasonable to assume that other participants will arrive at the same conclusion. Consequently, the average number chosen by all participants would shift towards 33. To maximise your chances of winning, you decide to adopt Level 2 rationality and choose a number lower than 33. In this case, 22 appears to be the optimal choice.
LEVEL 3 RATIONALITY: GOING DEEPER INTO ANTICIPATION
As you delve deeper into the rationality levels, a pattern begins to emerge. Just as you contemplated that others might select 22, they too will likely adopt the same line of reasoning. To outsmart them, you employ Level 3 rationality and opt for the number 15. The idea is to anticipate the choices of others and select a number that is two-thirds of the average they might choose.
LEVELS OF RATIONALITY
In summary, the levels of rationality illustrate the iterative process of outthinking others.
APPROACHING ZERO: THE ULTIMATE RATIONAL CHOICE
As you progress through each level of rationality, however, you cannot help but notice a concerning trend. As rationality levels increase, choices converge towards zero, posing a paradox: Is zero really the most rational choice when considering human diversity in decision-making? Deep down, you begin to question the effectiveness of choosing zero.
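A few lines of code make the convergence explicit; this is a minimal sketch of the iterated reasoning, not a model of how real participants behave.

```python
guess = 50.0                      # Level 0: assume everyone else chooses at random
for level in range(1, 11):
    guess = guess * 2 / 3         # each level best-responds to the previous one
    print(f"Level {level}: {guess:.2f}")
# Prints 33.33, 22.22, 14.81, ... heading inexorably towards zero.
```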
ACCOUNTING FOR DIFFERENT LEVELS OF RATIONALITY
Your uncertainty arises from the realisation that not all participants are likely to think or reason in the same way. Variations in human rationality mean that some may choose randomly or with less strategic depth, affecting the overall average and optimal choice.
THE WINNING NUMBER
In a practical application of this dilemma involving Financial Times readers, the winning number was 13, showcasing the unpredictability of collective rationality.
KEYNESIAN BEAUTY CONTEST IN FINANCIAL MARKETS
The economist John Maynard Keynes encapsulated the essence of the dilemma in his work, ‘The General Theory of Employment, Interest and Money’. He likened professional investment to a newspaper competition where participants must select the prettiest faces from a selection of photographs. The prize is awarded to the competitor whose choices align most closely with the average choice of all participants.
OUTGUESSING THE BEST GUESSES
Keynes emphasised that participants should not merely choose what they believe to be the prettiest faces according to their own judgment or average public opinion. Instead, they should anticipate what average opinion expects the average opinion to be. In essence, winning the competition relies on outguessing the best guesses of others—a strategy referred to as super-rationality. Just as in the Number Dilemma, the Keynesian Beauty Contest involves predicting others’ predictions, a strategy crucial in financial markets and investment decisions.
DISCOVERING THE HIDDEN OPPORTUNITIES
In the context of this so-called Keynesian Beauty Contest, the concept of super-rationality holds tremendous significance. This strategy involves outthinking the crowd’s average opinion, a concept that can reveal overlooked opportunities in various contexts. By transcending the common line of reasoning and adopting a super-rational approach, individuals can unveil hidden possibilities and potentially reap rewards. While these concepts offer intriguing insights, their practical application is complex due to the unpredictable nature of human decision-making and diverse levels of rationality.
CONCLUSION: EMBRACING SUPER-RATIONALITY
The Keynesian Beauty Contest serves as a captivating thought experiment that challenges traditional notions of rational decision-making. It showcases the complexities of human behaviour and highlights the importance of anticipating the actions of others. By embracing the concept of super-rationality and outguessing the best guesses of the crowd, individuals can navigate these intricacies and increase their chances of success.
Exploring the Bad Luck Syndrome
THE SLOWER LINE PARADOX
Is the line next to you at the airport check-in or the supermarket check-out always quicker than the one you are in? In heavy traffic, is the neighbouring lane always moving a bit more quickly than yours? We’ve all experienced it. Or does it just seem that way?
THE ILLUSION OF THE SLOWER LINE
One explanation for the perception of always being in the slower line or lane can be attributed to basic human psychology. Our tendency to notice and remember the times when we’re left behind, while quickly forgetting the moments we overtake others, may play a role in this feeling. Or might it be an illusion caused by our tendency to glance over at the neighbouring option more often when we are progressing slowly rather than quickly? Additionally, our focus tends to be more forward-looking, so when driving, for example, vehicles we overtake quickly fade from our memory while those remaining in front continue to torment us.
The question then arises: Is this perception all an illusion or is there a real and fundamental phenomenon at play? Philosopher Nick Bostrom suggests that the effect is real and is the consequence of an observer selection effect. It is not just a trick of the mind.
THE SELECTION EFFECT
To understand why we might frequently find ourselves in the slower lane, let’s consider an example of fish in a pond. If we catch sixty fish, all of which are more than six inches long, does this evidence support the hypothesis that all the fish in the pond are longer than six inches?
The answer depends on whether our net is capable of catching fish smaller than six inches. If the holes in the net allow smaller fish to pass through, our sample of fish would be biased towards the larger ones. This is known as a selection effect or an observation bias.
Now, just as a fisherman’s net biased towards larger fish can misrepresent the pond’s population, our position in a slower lane biases our perception of overall speed and flow.
RANDOMLY SELECTED OBSERVERS
When considering whether we are more often in the slower of two lines at the supermarket checkout, it is crucial to ask: ‘For a randomly selected person, are the people in the next line actually progressing faster?’ We need to view ourselves as random observers and think about the implications of this perspective for our observations.
One reason why we might find ourselves driving in the slower-moving lane after choosing between two apparently equal options is that the slower lane simply contains more vehicles than the neighbouring lane. Cars travelling at higher speeds are generally more spread out than slower cars, so a given stretch of road is likely to hold more cars in the slower lane. Consequently, the average driver will spend more time in the slower lane or lanes. This is an observer selection effect at work: observers should reason as if they were randomly selected from the entire set of observers.
THE VIEWPOINT OF THE MAJORITY
To put it simply, if we perceive our present observation as a random sample from all observations made by all relevant observers, the probability is that our observation will align with the perspective of most drivers, and these are typically in the slower-moving lane. Because of this observer effect, a randomly selected driver will not only seem to be in the slower lane, but will actually be in the slower lane.
In other words, when we view ourselves as part of a larger group of observers, we realise that being in a slower lane or line is more than perception; it’s a statistical likelihood.
For instance, if there are 20 observers in the slower lane and 10 in the equivalent section of the other faster lane, there is a 2/3 chance that we are in the slower lane.
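The arithmetic behind that 2/3 is simple enough to sketch, assuming each driver is equally likely to be any one of the cars on that stretch of road (the car counts are illustrative).

```python
slow_lane_cars = 20    # slower traffic bunches up: more cars per stretch of road
fast_lane_cars = 10
p_in_slow_lane = slow_lane_cars / (slow_lane_cars + fast_lane_cars)
print(p_in_slow_lane)  # 0.666..., i.e. a 2/3 chance a random driver is in the slower lane
```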
CONCLUSION: EMBRACING THE REALITY
So whenever we think that the other lane or line is faster, we should be aware that it very probably is. Our perception aligns with the reality that the slower line tends to contain more observers, leading to a higher likelihood of finding ourselves in it.
Understanding the Slower Line Paradox isn’t just about traffic or queues, though. It’s a lesson in perspective and probability, reminding us that our individual experiences often reflect broader regularities. So it’s not bad luck, after all, but sound statistics. Embracing this reality should make us feel a whole lot better! Until the next time it happens to us.
Exploring the Doomsday Argument
A version of this article appears in my book, Twisted Logic: Puzzles, Paradoxes, and Big Questions (Chapman and Hall/CRC Press, 2024).
CONTEMPLATING OUR EXISTENTIAL PREDICAMENT
The Doomsday Argument is a statistical and philosophical approach predicting humanity’s potential end. It uses principles of probability to suggest that humanity might be closer to its demise than we commonly believe.
PROBABILITY AND ITS IMPLICATIONS
Imagine attempting to estimate your enemy’s tank count. The tanks are manufactured sequentially, with serial numbers starting from one. You uncover the serial numbers on five random tanks, all of them under 10. In such a scenario, an intuitive grasp of probability would lead you to believe that your enemy doesn’t possess a large number of tanks. However, if you stumble upon serial numbers stretching into the thousands, your estimate would justifiably swing towards a much larger count.
In another scenario, consider a box filled with numbered balls, which can either contain ten balls (numbered 1–10) or ten thousand balls (numbered 1–10,000). If a ball drawn from the box reveals a single-digit number, such as seven, it is reasonable to assume that the box is much more likely to contain ten balls than ten thousand.
INVOKING THE MEDIOCRITY PRINCIPLE AND COPERNICAN PRINCIPLE
The tank and numbered balls examples tie closely to the concept of mediocrity, as captured in the ‘mediocrity principle’. This principle suggests that initial assumptions should lean towards mediocrity rather than the exceptional. In other words, we are more likely to encounter ordinary circumstances rather than extraordinary ones.
The Copernican principle dovetails with the mediocrity principle. It argues that we are not privileged or exceptional observers of the universe. This principle is rooted in Nicolaus Copernicus’s 16th-century finding that Earth does not occupy a central, special position in the universe.
GOTT’S WALL PREDICTION
Astrophysicist John Richard Gott took the Copernican principle to heart during his visit to the Berlin Wall in 1969. Lacking specific knowledge about the Wall’s expected lifespan, Gott took the position that his encounter with the Wall did not occur at any special time in its existence.
This assumption allowed him to estimate the future lifespan of the Wall. If, for instance, his visit was precisely halfway through its life, the Wall would stand for another eight years. If he visited one-quarter into its life, the Wall would stand for another 24 years. If visiting it three-quarters along its timeline, the future would be one-third of its past. Because half of its existence is between these two points (75% minus 25% is 50%), there was a 50% chance that it would last a further period between one-third and three times its current existence. Based on its age when he observed it in 1969 (eight years), Gott argued that there was a 50% chance that it would fall in between 8/3 years (2 years, 8 months) and 8 × 3 (24) years from then.
The Berlin Wall fell in 1989, 20 years after Gott’s visit and roughly 28 years after it was built. This bolstered Gott’s confidence in the Copernican-based method of making predictions, which he termed ‘Copernican time horizons’.
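Gott’s 50% interval can be expressed as a simple rule: given a current age of t, the remaining lifespan lies between t/3 and 3t with 50% confidence. Here is a minimal sketch applying it to the Berlin Wall’s age at his 1969 visit.

```python
def copernican_interval(current_age, confidence=0.5):
    """Gott's rule: assuming our observation falls at no special point in the
    lifetime, the remaining lifespan lies between these bounds with the
    stated confidence."""
    lower = current_age * (1 - confidence) / (1 + confidence)
    upper = current_age * (1 + confidence) / (1 - confidence)
    return lower, upper

low, high = copernican_interval(8)   # the Wall was 8 years old in 1969
print(low, high)                     # roughly 2.67 to 24 further years
```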
The implications of Gott’s Wall are far-reaching. They suggest that we could potentially apply the Copernican principle to make predictions about other systems where we have little information about their total lifespan. For example, it could be applied to predict the lifespan of a company, the duration of a war, or the longevity of a species, among many other things.
However, it’s essential to acknowledge the limitations of this method. It is predicated on the assumption that there is nothing special about the moment of observation, an assumption that may not hold true in many scenarios. Despite these limitations, Gott’s approach represents a fascinating application of the Copernican principle to real-world events, demonstrating how our position in time, just as in space, can be used to gain insights about the world around us.
THE LINDY EFFECT AND ITS LIMITATIONS
Gott’s method finds resonance with the ‘Lindy effect’, the name of which is derived from a New York delicatessen, famous for its cheesecakes, which was frequented by actors playing in Broadway shows. It suggests that a show that had been running for three years could be expected on average to last for about another three years.
However, the Lindy effect has limitations. It breaks down when applied to processes like biological ageing. For instance, a human who has lived for 100 years is very unlikely indeed to live another 100 years. The factors influencing human lifespan are far from random, rendering the Lindy effect ineffective for such predictions.
FROM COPERNICAN PRINCIPLE TO DOOMSDAY ARGUMENT
The Doomsday Argument employs Gott’s idea to estimate the Doomsday date for the human race. Applied to humanity, the argument contends that if we consider humanity’s entire history, we should statistically find ourselves somewhere around the middle of that history in terms of the human population. If our population continues to grow exponentially, this suggests that humanity has a relatively short lifespan left, potentially within this millennium.
ESTIMATES AND PROJECTIONS
This projection takes into account the fact that there have been approximately 110 billion humans on earth to date, 7% of whom are alive today. Following demographic trends forward and estimating how long it will be for a further 110 billion humans to be born, the Doomsday Argument anticipates humanity’s timeline is likely to end well within this millennium.
DEBATE AND CRITICISMS
The Doomsday Argument is not without its critics. Some argue that humanity will never go extinct, while others highlight that the argument’s assumptions might not hold true, such as the assumption that humans are at the midpoint of our existence timeline. Others claim that the argument fails to account for future scientific and technological developments that might significantly extend, or perhaps foreshorten, humanity’s lifespan.
CONCLUSION: THE FATE OF HUMANITY?
The Doomsday Argument provides a thought-provoking perspective on humanity’s potential fate. It integrates probability, statistics, and philosophical principles, offering a statistical guess at our collective demise. While it is far from conclusive, it is certainly important in serving as a reminder of our finite earthly existence and the urgency to address the global threats that could precipitate our doom. Whatever else, the debate around the argument and our ultimate fate as humans will persist, sparking further exploration into this fascinating intersection of probability, philosophy, and existential prediction.
Exploring the Secretary Problem
A version of this article appears in my book, Twisted Logic: Puzzles, Paradoxes, and Big Questions (Chapman and Hall/CRC Press, 2024).
WHEN TO STOP LOOKING AND START CHOOSING
The ‘Secretary Problem’, often also called the ‘Optimal Stopping Problem’, is a classic scenario in decision-making and probability theory. It offers insights into the dilemma of when to stop looking and start choosing. Whether it is finding the right partner, hiring the best assistant, or identifying the ideal place to live, this mathematical problem delivers a powerful solution.
CHOOSING A CAR
Let’s say that you have 20 used cars to choose from, offered to you in a random sequence. You have three minutes to evaluate each. Once you turn one down, there is no returning to it, such is the speed of turnover, but the silver lining is that any vehicle you do select is guaranteed to be yours. If you come to the end of the line, you must accept whatever remains, even if it happens to be the least desirable. Your decision is guided solely by the relative merits of the vehicles on offer.
BALANCING BETWEEN TOO EARLY AND TOO LATE
There are two significant failures in your quest to find the best vehicle for you—stopping too early and stopping too late. If you stop too early, you might miss out on a better option. Conversely, if you stop too late, you risk passing over the best option while waiting for a better option that might not exist. So, how do you find the right balance?
INTRODUCING THE OPTIMAL STOPPING STRATEGY
Do you have a strategy that is better than random selection?
The Optimal Stopping Problem provides a solution. Suppose, for simplicity, there were just three cars on offer. The optimal stopping strategy suggests rejecting the first option in order to gain information about the relative merits of those available. If the second option turns out to be worse, you should wait, despite the risk of ending up with the third, which could potentially be the worst of the three. However, if the second option is better, you should accept it immediately, foregoing the possibility that the third might be a better match.
EXTENDING THE STRATEGY: FROM 4 TO 100
With four options, you should reject the first. Again, if the second is better than the first, take that. If not, and the third is better, take that. Otherwise, you must take the fourth and hope for the best. With a hundred options, you should inspect the first 37 and then choose the first after that which is better than the best of the first 37.
This strategy, often referred to as the 37% Rule, is based on the mathematical constant e (Euler’s number). The value of 1/e is approximately 0.368, or 36.8%, which rounds up to 37%. Following this rule, you have roughly a 37% chance of ending up with the best car.
THE GROUNDWORK
When faced with a choice of n candidates for a job, the challenge lies in deciding when to stop the process of rejection and start the process of selection. The mathematical answer to this, as highlighted before, is to reject the first n/e candidates, where ‘e’ is the base of natural logarithms, approximately 2.7. So, if there are 100 choices, n/e becomes 100/2.7, which is about 37. This strategy effectively breaks the selection process into two phases: the assessment phase and the selection phase.
The resulting principle is, therefore, surprisingly straightforward: reject the first 37% of candidates to gather information about the quality of the pool, then select the next candidate who is better than anyone seen so far.
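A quick Monte Carlo check, under the problem’s idealised assumptions (random order, no recalls, only relative rankings observed), shows the success rate of this strategy settling at around 37%.

```python
import random

def secretary_trial(n=100, reject_fraction=0.37):
    ranks = list(range(n))            # rank 0 is the best candidate
    random.shuffle(ranks)
    cutoff = int(n * reject_fraction)
    best_seen = min(ranks[:cutoff]) if cutoff else n
    for r in ranks[cutoff:]:
        if r < best_seen:             # first candidate better than the benchmark
            return r == 0
    return ranks[-1] == 0             # forced to accept whatever comes last

trials = 100_000
wins = sum(secretary_trial() for _ in range(trials))
print(wins / trials)                  # roughly 0.37
```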
REAL-WORLD APPLICATIONS
While the Secretary Problem is a simplified and somewhat idealised situation, the 37% Rule can have valuable applications in real-world scenarios:
Job Hiring: Hiring managers can use the 37% rule as a strategic guideline during the candidate evaluation phase.
Home Hunting: The principle is also applicable as a heuristic when looking for a home to buy or rent, especially in a fast-moving market.
Online Shopping: This principle can also be useful when shopping online to streamline purchasing decisions. By reviewing a certain portion of available options before making a selection, shoppers can reduce the overwhelming array of choices and enhance their overall shopping efficiency and satisfaction.
CRITIQUES AND LIMITATIONS
While the 37% Rule provides a theoretically optimal solution to the Secretary Problem, it does have certain limitations:
Idealised Assumptions: The problem assumes that options are presented one at a time, in random order, and once rejected, they cannot be recalled.
Risk of Missing Out: Following the 37% Rule means you run the risk of the best option being rejected during the assessment phase.
Difficult to Determine the Total Pool: The problem assumes you know the total number of options upfront.
Emotional Considerations: The rule neglects emotional considerations, personal intuition, and human subjectivity.
ADAPTING THE RULE FOR UNCERTAINTY
The rule can be adapted if there is a chance that the option you select might opt out or be withdrawn. For example, if there is a 50% chance that your chosen option will opt out or be withdrawn after you select it, the 37% rule becomes a 25% rule, reflecting the added uncertainty. There is also a rule-of-thumb for when the aim is to select a good option, if not necessarily the best. Out of 100 options, for example, the square root rule suggests seeing the first ten (the square root of 100) and then selecting the first option of those remaining that is better than the best of those ten.
CONCLUSION: EXPLORATION AND EXPLOITATION
The Secretary Problem teaches us about the balance between exploration (gathering information) and exploitation (making a decision), offering a structured approach to navigating complex choices. Despite their limitations, the Secretary Problem and the 37% Rule offer valuable insights into these trade-offs and provide a mathematically grounded approach to making complex decisions.
Exploring the Expected Value Paradox
A version of this article appears in my book, Twisted Logic: Puzzles, Paradoxes, and Big Questions (Chapman and Hall/CRC Press, 2024).
UNDERSTANDING THE EXPECTED VALUE PARADOX
At its core, the Expected Value (EV) Paradox invites us to examine how outcomes diverge when we analyse them through the lens of an ensemble (a large group each participating in an event once) versus through time (a single individual participating in the same event multiple times).
Take the example of a hypothetical coin-tossing game where players gain 50% of their bet if the coin lands on Heads and lose 40% if it lands on Tails. This game seems favourable for the player—the game has what is termed a positive expected value.
However, the paradox arises when the concept of time is introduced into the equation. While the game appears favourable in theory, it can lead to a net loss for an individual playing it repeatedly. As the coin is tossed again and again, the individual’s wealth tends to diminish over time, leading to a scenario in which they lose almost all their money, even though the theoretical gain from playing the game is positive.
THE EXPERIMENT
Let’s set up an experiment involving a coin-tossing game with 100 participants, each with an initial stake of £10, to illustrate the difference. In this scenario, we’re employing what’s known as an ensemble perspective, where we’re examining a large group participating in an event once.
Statistically, given a fair coin, we would expect roughly half of the coin tosses to land on Heads and half on Tails. Therefore, of the 100 people, we predict that around 50 people will toss Heads and 50 will toss Tails.
If the coin lands on Heads, each of the 50 players stands to gain 50% of their stake, which is £5. In total, this translates to a combined gain of £250 (50 players × £5).
On the other side, if the coin lands on Tails, each of the remaining 50 players loses 40% of their stake, which is £4. This accumulates to a total loss of £200 (50 players × £4).
Subtracting the total loss from the total gain (£250 – £200), we find a net gain of £50 over all 100 players. When we average this out over the number of players, we see an average net gain of £0.5 (50 pence) per player (£50 ÷ 100 players), or 5% of the £10 initial stake.
THE PARADOX
The Expected Value Paradox becomes evident when we shift from an ensemble perspective, involving many people playing the game once, to a time perspective, involving one person playing the game multiple times.
Let’s examine a scenario where a single player engages in four rounds of the game, starting with a stake of £10. For simplicity’s sake, we’ll assume an equal chance of landing Heads or Tails—therefore expecting two Heads and two Tails.
When the coin lands on Heads in the first round, the player gains 50% of their stake, increasing their wealth to £15 (£10 + 50% of £10). If the coin lands on Heads again in the second round, their wealth grows to £22.50 (£15 + 50% of £15).
However, the game changes when the coin lands on Tails in the third round. The player loses 40% of their current wealth, reducing it to £13.50 (£22.50 minus 40% of £22.50). If the coin lands on Tails again in the fourth round, the player’s wealth decreases further to £8.10 (£13.50 − 40% of £13.50).
Despite starting the game with a positive expected value, the player ends up with less money than they started with. Even though the probabilities haven’t changed, the effects of winning and losing aren’t symmetric.
Thus, the Expected Value Paradox is clear in this example. When many people play the game once (ensemble averaging), the average return is positive, aligning with the expected value. However, when a single person plays the game multiple times (time averaging), the player loses money.
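A simulation makes the gap visible. This is a minimal sketch: a fair coin, a gain of 50% on Heads and a loss of 40% on Tails, with illustrative player and round counts.

```python
import random

def play(rounds, stake=10.0):
    """Wealth of one player who stakes everything on each toss."""
    wealth = stake
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

random.seed(1)

# Ensemble view: many players, one round each
players = [play(rounds=1) for _ in range(100_000)]
print(sum(players) / len(players))   # about 10.5: the +5% expected value shows up

# Time view: one player, many rounds
print(play(rounds=1000))             # typically a minuscule fraction of the £10 stake
```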
TIME AVERAGING AND ENSEMBLE AVERAGING
In understanding the Expected Value Paradox, we are introduced to two different types of averaging: ‘time averaging’ and ‘ensemble averaging’.
TIME AVERAGING
‘Time averaging’ is a concept that comes into play when we observe a single entity or process over an extended period. In the context of our coin-tossing game, time averaging refers to tracking the wealth of a single player as they participate in multiple rounds of the game. Over time, this player’s wealth fluctuates, often resulting in an overall loss despite the per-round odds being in their favour. A severe loss (like bankruptcy) at any point can end the game for the player.
ENSEMBLE AVERAGING
The ensemble average gives us a snapshot of the behaviour of many at a specific moment in time. The ‘ensemble probability’ refers to a large group’s collective experiences over a fixed period. In our coin-tossing game, this is akin to observing 100 players each tossing the coin once. The overall gain camouflages the individual experiences, which can vary significantly: some players win, some lose.
TIME VS. ENSEMBLE AVERAGING
This difference between ‘time probability’ and ‘ensemble probability’ underscores that a group’s average experience does not accurately predict an individual’s experience over time.
Understanding the distinction between these two types of averaging is crucial when interpreting outcomes of games, experiments, or any process involving randomness and repetition over time. This differentiation becomes especially important in fields like economics and finance, where these principles can guide strategy and risk management.
Strategies that work on an ensemble basis may not be effective (or could be disastrous) when applied over time by an individual—a paradox manifested clearly in our coin-tossing game.
SURVIVORSHIP AND WEALTH TRANSFER
Survivorship and wealth transfer are key elements in understanding how wealth moves around in situations like gambling and investing. The term ‘survivors’ refers to those who keep playing the game through various rounds, while ‘non-survivors’ are the ones who quit, or are pushed out, often because they’ve lost most or all of their money.
The idea is that the wealth lost by non-survivors doesn’t disappear. Instead, it gets transferred to the survivors, redistributing wealth within the system. Take a coin-tossing game as an example: if half of the 100 players lose everything and leave, while the other half double their initial amount, the group seems to break even. But, half of the players have nothing, while the other half have doubled their money.
CONCLUSION: THE INDIVIDUAL AND THE GROUP
In the conventional, or ensemble, view of probability, we look at the outcomes of many trials of an event and calculate averages. Some will win, some will lose, but overall the average outcome should reflect the true odds of the game. The individual variations or ‘paths’ of each person aren’t considered—we’re only interested in the average outcome. This so-called ensemble perspective is often used in classical statistics and probability theory. In contrast, the path-dependent view recognises that the order of events matters.
Take a person who plays a game 100 times. Even if the odds of each game are in their favour, they could still lose all their money if they have a run of bad luck. In this case, looking at the overall or ensemble average wouldn’t accurately reflect the individual’s experience.
In summary, while the ensemble view can provide a broad understanding of expected outcomes, the path-dependent view provides a more nuanced understanding of individual experiences.
The Martingale Betting Strategy
A version of this article appears in my book, Twisted Logic: Puzzles, Paradoxes, and Big Questions (Chapman and Hall/CRC Press, 2024).
The Martingale betting strategy is based on the principle of chasing losses through a progressive increase in bet size. To illustrate this strategy, let’s consider an example: a gambler starts with a £2 bet on Heads, with an even-money pay-out. If the coin lands Heads, the gambler wins £2, and if it lands Tails, they lose £2.
In the event of a loss, the Martingale strategy dictates that the next bet should be doubled (£4). The objective is to recover the previous losses and achieve a net profit equal to the initial stake (£2). This doubling process continues until a win is obtained. For instance, if Tails appears again, resulting in a cumulative loss of £6, the next bet would be £8. If a subsequent Heads occurs, the gambler would win £8, and after subtracting the previous losses (£6), they would be left with a net profit of £2. This pattern can be extended to any number of bets, with the net profit always equal to the initial stake (£2) whenever a win occurs.
CHASING LOSSES AND THE LIMITATIONS
While the Martingale strategy may appear promising in theory, it is important to recognise its limitations and the inherent risks involved. The strategy involves chasing losses in the hope of recovering them and generating a profit. However, it’s crucial to understand that the expected value of the strategy remains zero or even negative.
The main reason behind this lies in the presence of a small probability of incurring a significant loss. In a game with a house edge, such as in a casino, the odds contain an edge against the player. The house edge ensures that, over time, the expected value of the bets is negative. Therefore, even with the Martingale strategy, which aims to recover losses, the expected value of the bets remains unfavourable.
Moreover, in a casino setting, there are structural limitations that impede the effectiveness of the Martingale strategy. Most casinos impose limits on bet size. These limits prevent gamblers from doubling their bets indefinitely, even if they have boundless resources and time, thereby constraining the strategy’s potential for recovery.
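A short simulation illustrates the point. This is a sketch with an illustrative £500 table limit and a fair coin; a real casino game would also carry a house edge, making matters worse.

```python
import random

def martingale_session(base_stake=2, table_limit=500, max_rounds=200, bankroll=10_000):
    """Repeated even-money bets, doubling the stake after each loss."""
    stake = base_stake
    for _ in range(max_rounds):
        if stake > table_limit or stake > bankroll:
            break                      # the doubling sequence is cut short
        if random.random() < 0.5:      # win: recover losses plus the base stake
            bankroll += stake
            stake = base_stake
        else:                          # lose: double up and try again
            bankroll -= stake
            stake *= 2
    return bankroll

random.seed(2)
results = [martingale_session() for _ in range(2_000)]
print(sum(results) / len(results))     # stays close to the starting 10,000: no free profit
print(min(results))                    # a long losing run ends a session hundreds of pounds down
```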
THE DEVIL’S SHOOTING ROOM PARADOX
A parallel thought experiment known as the Devil’s Shooting Room Paradox adds an intriguing twist. In this scenario, a group of people enters a room where the Devil threatens to shoot everyone if he rolls a double-six. The Devil further states that over 90% of those who enter the room will be shot. Paradoxically, both statements can be true. Although the chance of any particular group being shot is only 1 in 36, the size of each subsequent group in this thought experiment is over ten times larger than the previous one. Thus, when considering the cumulative probability of being shot across multiple groups, it surpasses 90%.
Essentially, the Devil’s ability to continually usher in larger groups, each with a small probability of being shot, ultimately results in the majority of all the people entering the room being shot.
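The arithmetic can be checked directly. In this illustrative sketch, each group is exactly ten times the size of the previous one; whichever round the double-six finally arrives on, the last group dwarfs everyone who came before.

```python
# Fraction of all entrants who end up in the final (shot) group,
# for a game ending on various rounds.
for final_round in (1, 2, 5, 10):
    group_sizes = [10 ** k for k in range(final_round)]   # 1, 10, 100, ...
    shot = group_sizes[-1]
    everyone = sum(group_sizes)
    print(final_round, shot / everyone)   # 90% or more of those who entered are shot
```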
A key assumption underlying the Devil’s Shooting Room Paradox is the existence of an infinite supply of people. This assumption aligns with the concept of infinite wealth and resources often associated with Martingale-related paradoxes. Without a boundless supply of individuals to fill the room, the cumulative probability of over 90% cannot be definitively achieved.
The Devil’s Shooting Room Paradox serves in this way as another illustration of how probabilities and cumulative effects can lead to counterintuitive outcomes.
CONCLUSION: THE LIMITS OF A MARTINGALE STRATEGY
The Martingale strategy is based on chasing losses, but its expected value remains zero or negative due to the house edge. The strategy’s viability is further diminished by limitations on bet size in real-world casino scenarios. As such, the Martingale system cannot be considered a winning strategy in practical gambling situations. The Devil’s Shooting Room Paradox further demonstrates the complexities and counterintuitive outcomes that can arise when infinite numbers are assumed. Ultimately, a comprehensive understanding of these paradoxes provides valuable insights into the rationality of betting strategies and decision-making in the realm of gambling.
