Exploring Occam’s Razor
THE PRINCIPLE OF SIMPLICITY
In this section we explore Occam’s Razor. William of Occam (also spelled William of Ockham) was a prominent 14th-century philosopher and theologian known for his emphasis on simplicity in philosophical and theological matters. His philosophical contributions, particularly the principle of simplicity, have had a lasting impact on various fields of knowledge. Occam’s Razor, derived from his philosophy, has become synonymous with the method of eliminating unnecessary hypotheses and choosing the simplest explanation consistent with the evidence.
OCCAM’S RAZOR: PRINCIPLE AND EXPLANATION
At the heart of Occam’s philosophy, therefore, is the principle of simplicity, which later became known as Occam’s Razor. The razor can be summarised as follows: ‘Entities should not be multiplied without necessity’. In other words, when faced with competing explanations or hypotheses, the simplest one that adequately explains the available evidence should be preferred.
Occam’s Razor guides our thinking by encouraging us to avoid unnecessary assumptions and complexities. It suggests that we should prefer explanations that require fewer additional elements or entities. By choosing simplicity over complexity, Occam’s Razor helps us navigate knowledge acquisition and hypothesis formation.
To be clear, it’s important to note that simplicity does not mean ‘easier to understand’ but rather ‘involving fewer assumptions or conjectures’. Complexity should only be considered when simplicity fails to adequately explain the phenomenon.
OCCAM’S RAZOR: A CRUCIAL HEURISTIC
Occam’s Razor serves as a crucial heuristic in problem-solving and theory formulation. It proposes that among competing hypotheses, the one with the fewest assumptions should be selected, provided it adequately explains the phenomenon in question.
Occam’s Razor does not just simplify our thinking processes; it actively steers us away from the allure of unnecessary complexities and conjectures. By advocating for simplicity, it aids in refining our approach to knowledge acquisition and hypothesis development, ensuring that complexity is introduced only when absolutely necessary to explain the data adequately.
THE ROLE OF OCCAM’S RAZOR IN SCIENCE: TOWARDS ELEGANT EXPLANATIONS
The implications of Occam’s Razor extend significantly into scientific inquiry. It underpins the scientific method, where explanations for observed phenomena are sought and hypotheses are developed. By favouring parsimonious explanations, the principle encourages scientists to construct theories that are not only elegant but also more comprehensible and testable. This preference for simplicity has facilitated remarkable advancements in our understanding of the world, emphasising that the most profound explanations often emerge from the most straightforward assumptions.
OCCAM’S RAZOR AND OVERFITTING: COMPLEXITY AND GENERALISATION
Occam’s Razor finds empirical support in the phenomenon of overfitting, particularly in the field of statistics and machine learning. Overfitting occurs when a model becomes overly complex and fits the noise or random variations in the data instead of capturing the true underlying patterns.
By adhering to Occam’s Razor, researchers can avoid the pitfall of overfitting, ensuring that their models capture the essential features of the data while remaining parsimonious.
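The intuition can be illustrated with a minimal, self-contained sketch (all data here is synthetic and the models are deliberately simple): a two-parameter straight line recovers a noisy linear relationship better out of sample than a 'model' that simply memorises the training data, even though the memoriser achieves zero training error.

```python
import random

random.seed(0)

# True relationship: y = 2x + Gaussian noise
def sample(n):
    return [(x, 2 * x + random.gauss(0, 1.0))
            for x in (random.uniform(0, 10) for _ in range(n))]

train, test = sample(100), sample(100)

# Simple model: a least-squares straight line (just two parameters)
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def line(x):
    return slope * x + intercept

# Complex model: memorise the training data (1-nearest-neighbour);
# it fits the training noise exactly rather than the underlying pattern
def nearest(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print("train MSE:  line", round(mse(line, train), 2),
      "| memoriser", round(mse(nearest, train), 2))   # memoriser scores 0
print("test  MSE:  line", round(mse(line, test), 2),
      "| memoriser", round(mse(nearest, test), 2))    # memoriser does worse
```

The memoriser 'explains' the training data perfectly but generalises poorly, because it has absorbed the noise along with the signal; the simpler line, carrying fewer assumptions, predicts unseen data better.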
OCCAM’S LEPRECHAUN: AVOIDING AD HOC HYPOTHESES
In the pursuit of explanations, it is common to encounter situations where additional assumptions are introduced to save a theory from being falsified. These ad hoc hypotheses act as patches to compensate for anomalies that were not anticipated by the original theory. Occam’s Razor plays an essential role in evaluating such situations.
Imagine a situation, for example, where someone claims that a mischievous leprechaun is responsible for breaking a vase. There is likely to be serious scepticism about this claim. However, the person who made the claim introduces a series of ad hoc explanations to counter potential falsification.
For instance, when a visitor to the scene sees no leprechaun, the claimant asserts that the leprechaun is invisible. To test this, the visitor suggests spreading flour on the ground to detect footprints. In response, the claimant states that the leprechaun can float, thus leaving no footprints. The visitor then proposes asking the leprechaun to speak, but the claimant asserts that the leprechaun has no voice. In this way, the claimant keeps introducing additional explanations to prevent the hypothesis of the leprechaun’s existence from being falsified.
The example of Occam’s Leprechaun illustrates how additional assumptions can be added in an ad hoc manner to preserve a theory from being disproven. These ‘saving hypotheses’ create a flow of additional explanations that make the theory less able to be falsified. Occam’s Razor encourages us to be sceptical of such ad hoc hypotheses and instead favours simpler explanations that adequately account for the evidence.
OCCAM’S RAZOR AND PREDICTIVE POWER: PARSIMONY AND EFFICIENCY
Another aspect of Occam’s Razor is its association with the predictive power of theories. A theory that can accurately predict future events or observations based on fewer assumptions is considered more efficient.
By favouring simplicity, Occam’s Razor guides scientists to develop theories that provide explanatory power and predictive efficiency. The simplest theory that adequately explains the available data and accurately predicts future outcomes is often preferred.
This emphasis on predictive power aligns with the pragmatic approach that Occam advocated, where theories are judged not only by their ability to explain past observations but also by their ability to make successful predictions.
CONCLUSION: THE PURSUIT OF KNOWLEDGE
In sum, Occam’s Razor, a principle deeply rooted in the philosophy of William of Occam, remains a fundamental tool in the pursuit of knowledge. It encourages simplicity, efficiency, and parsimony in our explanations and theories. By guiding us towards the simplest explanations that remain consistent with the available evidence, Occam’s Razor plays a crucial role in scientific methodology, theory development, and everyday reasoning. Its continued relevance underscores the timeless appeal of simplicity in our quest to understand and explain the world around us.
And Does it Matter?
A version of this article appears in TWISTED LOGIC: Puzzles, Paradoxes, and Big Questions. By Leighton Vaughan Williams. Chapman & Hall/CRC Press. 2024.
THE SIMULATION HYPOTHESIS
Is our reality a simulation crafted by a more advanced civilisation? This provocative question, central to the Simulation Hypothesis popularised by philosopher Nick Bostrom, challenges our understanding and perceptions of existence and reality. Bostrom’s concept of “ancestor simulations” proposes that a sufficiently advanced civilisation could simulate consciousness, allowing simulated beings to experience life as we know it. If such civilisations exist and choose to run these simulations, the likelihood that we’re in one rather than ‘base reality’ increases significantly.
The creators could be located at any stage in the universe’s timeline, even billions of years into the future.
It’s a sort of digital time travel, allowing them to witness and potentially even interact with their own past. Bostrom argues that if any civilisation reaches a high enough technological level to be able to run these sorts of simulations and is interested in doing so, we are more likely to be in one of these simulations rather than in ‘base reality’.
THE FOUNDATION OF BOSTROM’S SIMULATION ARGUMENT
Bostrom argues that at least one of three possibilities must be true:
- Civilisations at our level of development almost invariably fail to reach a technologically super-advanced stage. This failure is marked by their extinction or incapacity to develop the technological means necessary to create highly detailed simulations of reality including simulated minds.
- Among civilisations that do reach a super-advanced stage, possessing the ability to create highly detailed simulations of their ancestors or historical periods, there is an overwhelming lack of interest in actually conducting such simulations.
- We are almost certainly existing within a simulation ourselves. This follows from the assumption that if super-advanced civilisations have both the interest and capability to run numerous simulations, the number of simulated consciousnesses would vastly outnumber the number of “real” consciousnesses.
Bostrom’s argument invites us to consider the implications of our technological trajectory and the nature of consciousness. It proposes a framework where the advancement towards and the potential capabilities of super-advanced civilisations lead to a significant probability that our perceived reality might not be the base reality. This philosophical inquiry not only challenges our understanding of existence but also highlights the profound implications of future technological capabilities on our perception of reality and consciousness.
PROBING THE DEPTHS OF THE SIMULATION ARGUMENT
To fully grapple with these propositions, we must examine each statement individually. For the first proposition to be false, a civilisation would need to exhibit the capability to survive potentially catastrophic phases, whether they are caused intentionally, accidentally, or through ignorance, without succumbing to complete annihilation.
The second proposition is highly dependent on factors we can hardly predict, such as the ethical frameworks of advanced civilisations, their curiosity, and their respect for the integrity of intelligent consciousness. Even so, it might seem implausible that almost no civilisations with the capacity to create such simulations would choose to do so.
Unless civilisations either fail to reach the stage at which they can create such simulations or choose not to do so, then we must face a startling conclusion: we are very probably living in a simulation.
NAVIGATING THE PROBABILITY LANDSCAPE
Summarising the argument, a ‘technologically mature’ civilisation would have the capability to create simulated minds. Hence, one of the following must hold:
- The fraction of civilisations reaching ‘technological maturity’ is zero or close to zero.
- The fraction of these advanced civilisations willing to run such simulations is zero or close to zero.
- We are almost certainly living in a simulation.
If the first proposition holds true, our civilisation will almost certainly not reach ‘technological maturity’, which introduces a sense of urgency and uncertainty regarding our collective future. If the second proposition is true, then almost no advanced civilisations are interested in creating simulations, raising questions about the nature and motivations of advanced civilisations. If the third proposition is true, then we should challenge our entire perception of reality.
In the face of such profound uncertainty, we might find it pragmatic to assign equal weight to each of these propositions.
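The bookkeeping behind the argument can be made concrete. The sketch below (all figures purely hypothetical) computes the share of observer-experiences that would be simulated, given a fraction of civilisations that reach maturity, a fraction of those that choose to simulate, and a number of simulations each runs:

```python
def fraction_simulated(f_reach, f_interested, sims_per_civ):
    """Share of observer-experiences that are simulated, assuming each
    real civilisation that both reaches maturity and chooses to simulate
    runs `sims_per_civ` ancestor simulations (all inputs hypothetical)."""
    simulated_per_real = f_reach * f_interested * sims_per_civ
    return simulated_per_real / (simulated_per_real + 1)

# Even if only 1% of civilisations reach maturity, and only 1% of those
# run simulations, a million simulations each means simulated minds dominate:
print(fraction_simulated(0.01, 0.01, 1_000_000))  # ≈ 0.990

# But if either fraction is zero, the probability collapses to zero:
print(fraction_simulated(0.0, 0.01, 1_000_000))   # 0.0
```

This is why the argument is a trilemma: unless one of the first two fractions is driven (almost) to zero, the third proposition takes nearly all the probability mass.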
THE SINGULAR CIVILISATION HYPOTHESIS
But what if ours is the only civilisation that will ever reach our stage of development? This concept fundamentally changes the dynamics of the simulation argument. In correspondence with me, Professor Bostrom sheds some light on this question:
‘If we are the only civilization at our stage there will ever have been, then the equation remains true, although some of the possible implications become less striking … the probability that we are not in a simulation is increased if ours is the only civilization that will have ever existed throughout the multiverse’ (Nick Bostrom e-mail, 10 February 2021).
This assertion reinforces the complexity of the simulation argument and the profound effect that our assumptions about the universe have on our interpretations of existence.
CONCLUSION: THE PARADOX OF CREATION
The Simulation Argument presents a curious paradox. The closer we get to the point of being capable of creating our own simulations, the greater the probability that we are living in a simulation ourselves. As we stand on the precipice of creating our virtual realities, we would be faced with the startling possibility that we are simulated beings about to create a simulation.
By abstaining from creating these simulations, we could perhaps decrease the likelihood of us being simulated, indicating that at least one civilisation capable of such feats decided against it. But the moment we dive into creating simulated realities, we would be compelled to accept that we are almost certainly doing so as simulations ourselves.
This paradox inevitably leads to the obvious question: Who created the first simulation? Might that really be us? Such questions punctuate our exploration of the possibility of a simulated reality. The answers may reshape our very understanding of existence itself. But would it ultimately change anything?
Introducing the Guardian Principle
THE ORIGINS OF PASCAL’S WAGER
To understand the significance of Pascal’s Wager in decision-making processes, we must first trace its roots. Blaise Pascal is known for his immense contributions to mathematics and probability theory. One of his notable contributions to philosophy and decision theory, however, was his articulation of what has come to be known as Pascal’s Wager.
PASCAL’S WAGER: THE CRUX OF THE ARGUMENT
The wager posed by Pascal is simple yet profound. It can be paraphrased as follows: if God exists and you wager otherwise, the repercussions could be enormous. Conversely, if God does not exist and you wager that he does, the implications are trivial in relative terms. Essentially, believing in God could lead to infinite rewards (eternal life in heaven), while the downside if he does not exist is comparatively inconsequential. Thus, Pascal urges you always to lean to the side of believing in God.
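The decision-theoretic core of the wager is a straightforward expected-value comparison. The sketch below uses an arbitrary, hypothetical payoff matrix, with a very large finite number standing in for Pascal's infinite reward:

```python
# Hypothetical payoffs (units arbitrary); Pascal's infinite reward is
# approximated here by a very large finite number for illustration.
HEAVEN = 10**9          # payoff if God exists and you believe
COST_OF_BELIEF = -1     # small worldly cost of believing if God does not exist
LOSS = -10**9           # payoff if God exists and you do not believe
NOTHING = 0             # payoff if God does not exist and you do not believe

def ev_believe(p):
    """Expected value of believing, given probability p that God exists."""
    return p * HEAVEN + (1 - p) * COST_OF_BELIEF

def ev_disbelieve(p):
    """Expected value of not believing, given probability p that God exists."""
    return p * LOSS + (1 - p) * NOTHING

# Even at a tiny probability that God exists, belief dominates:
p = 1e-6
print(ev_believe(p) > ev_disbelieve(p))  # True
```

The asymmetry of the payoffs, not the probability itself, drives the result: so long as p is non-zero, a sufficiently large stake on one side swamps the small certain cost on the other.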
ADDRESSING THE ‘MANY GODS’ OBJECTION
The argument often raised against Pascal’s Wager is the ‘many gods’ objection. Detractors argue that numerous characterisations of God are conceivable, including those that punish believers. However, this counterpoint presumes that all representations of a god are equally plausible, which is an assumption that may not hold.
For instance, the existence of a deity described by a major established religion, with millions or even billions of adherents and millennia of theological development and intellectual underpinning, could be perceived as vastly more plausible than one described by a fledgling religion with relatively few adherents and little consistent theology.
THE ROLE OF HUMAN BIASES AND FUTURE REWARDS
The ability to appreciate uncertainty and the value of future rewards too often gets overshadowed by human biases. Humans are predisposed to discount the future, focusing on immediate rewards and overlooking long-term consequences. This cognitive bias makes people prone to underestimate future risks and rewards. Pascal’s Wager prompts us to consider future implications more seriously, offering a framework to factor in future gains or losses in decision-making.
PASCAL’S WAGER IN CONTEMPORARY CONTEXT: CLIMATE CHANGE AND NOAH’S LAW
The relevance of the thinking behind Pascal’s Wager isn’t confined to theological considerations. A parallel can be drawn, for example, between Pascal’s Wager and the urgency to act against climate change. Even if there were only a slim chance of catastrophic climate disaster, the consequences of inaction, considering the potential existential harm, would be too high to ignore.
This approach to climate change action has been dubbed ‘Noah’s Law’. It reflects the sentiment of Pascal’s Wager: if there’s a chance an ark may be essential for survival, it’s prudent to start building it now, regardless of how sunny the day might seem.
THE GUARDIAN PRINCIPLE
Building upon these concepts, I propose the introduction of a new ethical and operational guideline, which I call the ‘Guardian Principle’. This principle extends the foundational ideas of Noah’s Law and Pascal’s Wager into a broader, more encompassing approach. It advocates for a stance of proactive stewardship over our planet and society, emphasising the importance of pre-emptive action in the face of potential existential threats, not limited to climate change but extending to all manner of such risks.
The Guardian Principle calls for an ethos of precaution and responsibility, urging humanity to act as guardians of its own future and the future of our shared environment. It suggests that in situations of significant uncertainty but potentially devastating outcomes, we should err on the side of caution and engage in preventative measures against a wide array of existential risks. In this way, we fulfil a collective duty to safeguard the well-being of current and future generations against all forms of irreversible harm.
By integrating the Guardian Principle into our global ethos, we expand the narrative from merely avoiding disaster to actively cultivating a safe, sustainable future. It’s a call to not only build arks against impending floods but to seek to prevent the floods themselves, and to act more broadly as vigilant guardians against potential threats. It encourages us not just to react but to anticipate, mitigate, and ideally avert existential risks through foresight, innovation, and collective action.
In this light, the Guardian Principle does not just complement the logic behind Pascal’s Wager and Noah’s Law; it amplifies it. It reinforces the argument that inaction in the face of existential uncertainty is not an option. Instead, we are urged to embrace a more vigilant, proactive approach, turning existential anxiety into a catalyst for holistic and forward-thinking action. In this way, it is a call for a shift in perspective – from reactive measures to a stance that actively seeks to prevent, mitigate, and anticipate risks before they manifest. It’s about building a legacy of sustainability, resilience, and foresight. It is a call to action that resonates with Pascal’s Wager.
PASCAL’S MUGGING: A MODERN SPIN
A modern spin on Pascal’s Wager, Pascal’s Mugging, presents a scenario where a stranger promises a life-changing return on a relatively small sum, or to wield an existentially negative impact if they don’t receive this sum. Even if the chance of the claim being true is infinitesimal, it might seem rational to hand over the sum, such is the scale of the reward or consequence compared to the outlay. It is a dilemma that underscores the need for a pragmatic balance between scepticism and action in the face of uncertainty.
CONCLUSION: PASCAL’S LIGHTHOUSE
Pascal’s Wager has assumed a new critical relevance in our times. With the stakes being higher than ever in terms of global existential risks, the urgency to revisit and appreciate the wager’s lessons has heightened.
While it might seem counterintuitive to expend resources and energy to avert what may be perceived by at least some as small risks, Pascal’s Wager prompts us to think otherwise. As the wager illuminates, the potential stakes of inaction—be it eternal damnation in a theological context or irreversible climate disaster in a worldly sense—may far outweigh the cost of preventive measures. As we steer through a world beset with systemic risks and uncertainties, Pascal’s Wager, and the Guardian Principle it inspires, can serve as a lighthouse, guiding us away from the rocks and towards prudence, long-term thinking, and existential risk management.
Every four years, as the U.S. Presidential election draws near, data enthusiasts eagerly dive into the world of forecasts and predictions, while the broader public often faces a mix of excitement and anxiety. With the election date of November 5th rapidly approaching, everyone is asking the same question: Who is most likely to win? And there is no shortage of forecasters ready to provide their answers.
The Common Ground Among Election Forecasters
At first glance, many election models seem to rely on similar data: state-level polls, national polls, economic indicators, and approval ratings. While these factors offer valuable insights, the U.S. Presidential election ultimately hinges on winning a majority of the 538 electoral votes allocated among the 50 states. As a result, most forecasters focus more heavily on state-level polling, using simulations to estimate each candidate’s probability of winning in individual states, which are then aggregated into a national forecast. This is particularly the case as the election draws nearer, with the broader signals receding into the background. We are not yet at that stage, however.
Key Decisions in Polling Models
As election day nears, forecasters face two critical decisions:
- Aggregating State-Level Data: Combining state-level polls and other relevant data to estimate the likelihood of each candidate winning the electoral votes assigned to each state.
- Formulating a National Forecast: Deciding how to aggregate these state-level probabilities into a single national outcome, especially when accounting for the potential for correlated polling errors across states.
The weight given to these decisions can vary depending on how close the race is. In a tightly contested race, small adjustments in state-level probabilities can significantly impact the overall forecast. Conversely, in a race where one candidate has a clear lead, a particular challenge is to accurately factor in the possibility of correlated errors, which can heavily influence the probability assigned to less likely outcomes.
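The effect of correlated errors can be seen in a toy Monte Carlo sketch. The states, margins, vote counts and error sizes below are purely illustrative and not drawn from any real model: a shared national error term shifts all states together, fattening the tails of the distribution and trimming the front-runner's win probability relative to treating state-level polling errors as independent.

```python
import random

random.seed(42)

# Hypothetical battleground states: (polling margin for candidate A
# in points, electoral votes). All figures are illustrative only.
states = {
    "PA": (3.0, 19), "GA": (2.0, 16), "MI": (3.5, 15), "NC": (1.5, 16),
    "AZ": (2.0, 11), "WI": (3.0, 10), "NV": (2.5, 6),
}
BASE_A = 226   # electoral votes assumed safe for candidate A
NEEDED = 270

def simulate(n=20_000, national_sd=2.0, state_sd=2.5):
    """Estimate P(candidate A wins) over n simulated elections."""
    wins = 0
    for _ in range(n):
        shared = random.gauss(0, national_sd)   # correlated error, hits every state
        ev = BASE_A
        for margin, votes in states.values():
            if margin + shared + random.gauss(0, state_sd) > 0:
                ev += votes
        wins += ev >= NEEDED
    return wins / n

print(f"P(A wins), correlated errors:  {simulate(national_sd=2.0):.2f}")
print(f"P(A wins), independent errors: {simulate(national_sd=0.0):.2f}")
```

With purely independent errors the leader's state-level advantages rarely all fail at once, so the model is very confident; allowing a shared national error restores a realistic chance of a uniform polling miss flipping every close state together, which is precisely the failure mode of 2016.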
Diverse Approaches to Election Forecasting
Currently, several major models are providing live forecasts for the 2024 election, each with its unique methodology:
- FiveThirtyEight: A well-known model now under new leadership, which has recently refined its methodology and actively adjusts polling inputs based on new data.
- The Economist: Applies an evolved version of the Votamatic model, which has shifted to a more closed-source approach but still provides a general overview of its methods.
- Princeton Election Consortium (PEC): Focuses heavily on state-level polling but has been criticized for not adequately addressing the correlation of polling errors across states, an issue that significantly affected its 2016 predictions.
- PollyVote: Founded in 2004, PollyVote emphasises combining multiple forecasting methods, such as polls, prediction markets, expert judgment, and econometric models, to enhance accuracy. Over time, it has added new components, such as citizen forecasts, continuously refining its approach to align with evidence-based principles.
- Decision Desk HQ: This model employs a range of machine-learning techniques to predict election outcomes. Although the precise details of its approach are not fully transparent, it ultimately uses a simulation-based forecast similar to other models. Decision Desk HQ is known for its fast reporting on election night, combining its sophisticated modelling with a robust data-gathering network.
- Data Diary: Data Diary adapts The Economist’s 2020 model into a more fully Bayesian framework, allowing it to handle uncertainty and new information more dynamically. The model uses complex statistical methods to integrate multiple data sources, giving it the flexibility to adjust to shifts in public opinion and other election dynamics.
Three other notable platforms that provide unique insights into election outcomes are Betfair, Polymarket, and PredictIt:
- Betfair: One of the largest and most established prediction markets, Betfair allows participants to bet on a wide range of political outcomes. As a major player in the online betting world, Betfair aggregates the collective sentiment of users on numerous markets, including U.S. elections. The market odds fluctuate in real time based on the volume and direction of bets, providing a “wisdom of the crowd” perspective that reflects public sentiment and market confidence in a candidate’s chances.
- Polymarket: A blockchain-based prediction market that also offers betting on political events, Polymarket enables users to buy and sell shares in various election outcomes using cryptocurrency as the medium of exchange. Because it is decentralised and leverages blockchain technology, Polymarket tends to attract a tech-savvy audience that may offer different insights from traditional prediction markets. The market’s odds shift based on trading activity, providing a dynamic view of public opinion.
- PredictIt: An online prediction market launched in 2014, PredictIt allows participants to buy and sell shares on the outcome of political events, using a continuous double auction format. It is very influential, with over 160 academic data-sharing partners.
Current Predictions and Points of Disagreement
Most models agree on some key points: for example, that Kamala Harris has improved upon Joe Biden’s polling numbers, and that a very small number of key states are crucial to the overall outcome. However, they diverge on who currently holds the lead. FiveThirtyEight, The Economist, PollyVote, Decision Desk HQ, and the Princeton Election Consortium currently lean towards Harris, the latter two only very marginally. The Silver Bulletin and Data Diary give Trump the edge, though only Nate Silver’s Bulletin by more than a whisker. Meanwhile, prediction markets like Betfair and PredictIt currently tilt to Harris, in line with major bookmakers, while Polymarket also leans to Harris, but only by the finest of margins. The much-heralded presidential debate between Harris and Trump, which the Vice President is widely acknowledged to have won decisively, is only just beginning to filter into the polls. It will be fascinating to see what immediate difference, if any, that makes.
Looking Ahead
In the coming weeks, it will be interesting to examine each of these models more closely, assessing their strengths and weaknesses, their historical accuracy, and the reasons behind their different predictions. Until the actual results come in after November 5th, though, that is all they are – forecasts – and some will prove rather more prescient than others. I have my own thoughts on that, but I’ll just say that, based on my own published research, if I’m pushed to choose I tend to trust the market aggregate at any point in time over the outlier.
The first US presidential debate of this latest election cycle proved to be a high-stakes battle that may have significantly altered the landscape of the race. From the moment Vice President Kamala Harris, the Democratic nominee, confidently approached Donald Trump and extended her hand, the dynamics of the evening began to shift dramatically. The betting markets, which initially favoured Trump, soon started to move in Harris’s direction. But it wasn’t until Trump made the unsupported claim that immigrants in Ohio were “eating dogs and cats” that the betting markets saw a full crossover, favouring Harris as the likely winner of November’s election.
Harris Takes Control
Harris knew a lot was riding on this debate. With a tightening race and her momentum stalling in the days leading up to the event, she needed to make a strong impression on the many Americans who still felt they didn’t know her well. Presidential debates are known for providing a unique platform for relatively less exposed candidates to boost their visibility, and Harris seized this opportunity with a well-executed strategy.
In her prosecutorial style, Harris managed to control the debate’s tempo from the start, putting Trump on the defensive. Her pointed questions and direct eye contact were reminiscent of her days as a prosecutor. Early on, she achieved a key objective: getting under Trump’s skin. By highlighting his dwindling rally attendance and suggesting his speeches had become dull and boring, she struck a nerve. From that moment on, Trump struggled to stay focused, frequently veering off into bizarre claims and tangents.
Trump’s Performance: Unhinged and Unstable?
Trump’s performance quickly turned chaotic. His outlandish statements—such as the claim about immigrants eating pets in Ohio and his unsupported assertions about post-birth killings in Democratic states—only served to further unravel his argument. Each time Harris poked him, he took the bait, spiralling deeper into his grievances about the 2020 election and other well-worn talking points. For viewers at home, the contrast was stark: a calm and composed Harris against a visibly agitated Trump.
Even Trump’s attempt to pivot to his foreign policy record backfired. Harris pointedly mocked his admiration for Vladimir Putin and highlighted the potential consequences of a Trump presidency for Ukraine and Eastern Europe. When Trump refused to clearly state his position on Ukraine winning the war, it only seemed to add to a general sense that he was on shaky ground.
The Betting Markets Shift Decisively
The impact of the debate on the betting markets was immediate and decisive. As Trump continued to dig himself into a hole with every rambling answer, Harris’s position strengthened. By the halfway point of the debate, the markets were giving her a 97% chance of being declared the winner in post-debate polling. Her dominant position only solidified as the debate moved towards its conclusion, leaving her as the clear election favourite going forward.
A Second Debate?
Now, speculation turns to whether there will be a second debate. Harris is reportedly keen for another round, while Trump appears less enthusiastic. The question looms: who stands to gain more from another showdown? With expectations now so low for Trump, even a modest improvement could help him. On the other hand, another performance similar to the first could be potentially fatal for his campaign.
So will there be a second debate? And if so, can Trump change the narrative, or will Harris deliver another blow to his campaign?
What Lies Ahead?
While Harris emerged from the debate as the clear winner, there are still challenges ahead. Some undecided voters felt she wasn’t specific enough on policy positions, choosing instead to focus on making Trump implode and outlining her broader vision for the country. However, she did manage to effectively argue against Trump’s suitability for the presidency, a critical move as voters in key states prepare to cast their ballots.
Debates often have a short-lived impact, as seen in Trump’s 2016 first debate against Hillary Clinton, where he was widely perceived to have lost but went on to win the election. Nevertheless, Harris’s strong performance may have come at a crucial moment. The race is far from over, but this debate could mark a pivotal turning point in her campaign’s favour.
A version of this article was first published in The Conversation UK
Election Betting
Records of the betting on US presidential elections can be traced back to 1868. From then until 2016, no clear favourite for the White House had lost, except in 1948, when the 8 to 1 longshot and sitting president, Harry S. Truman, famously defeated his Republican rival, Thomas E. Dewey.
In 2016, the exception was repeated when Hillary Clinton, trading at 7 to 2 on (equivalent to a win probability of about 78%) as polls opened, lost in the electoral college to Donald Trump. In so doing, Trump defied not just the polls and the experts, but the “wisdom of the crowd” as displayed in the betting markets.
Trump achieved this by converting a near 3 million vote loss in the popular vote into a victory by 77 votes in the electoral college. In a larger sense, it might be said that crowd wisdom was trumped by the arcane US electoral system.
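As an aside, the implied probabilities quoted here follow directly from the fractional odds, ignoring the bookmaker's margin. A small helper, purely for illustration:

```python
def implied_prob(first, second, odds_on=False):
    """Implied win probability from fractional odds quoted '<first> to <second>'.
    Odds-against '8 to 1':  win probability = 1 / (8 + 1).
    Odds-on '7 to 2 on':    win probability = 7 / (7 + 2).
    (Ignores the bookmaker's overround, so these are rough implied figures.)"""
    return first / (first + second) if odds_on else second / (first + second)

print(round(implied_prob(7, 2, odds_on=True), 3))  # 0.778 — Clinton, 2016
print(round(implied_prob(8, 1), 3))                # 0.111 — Truman, 1948
```

So Clinton's 7 to 2 on corresponds to roughly the 78% win probability cited above, while Truman's 8 to 1 in 1948 implied only about an 11% chance.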
There was a similar consensus in the run-up to the 2020 election that Trump would lose – but the degree of confidence displayed by the markets and the models diverged markedly. To illustrate, Sporting Index, the spread betting company, announced it thought Joe Biden would win with between 305 and 311 electoral votes as the polls opened on election day, with Trump trailing on 227 to 233 electoral votes.
Taking the mid-points of these spreads, this equated to a Biden triumph by 308 votes to 230 in the electoral college – a majority of 78. Similar estimates were contained or implicit in the odds offered by other bookmakers, betting exchanges and prediction markets.
Forecasting Models
Meanwhile, other major forecasting models were much more bullish about Biden’s prospects. Based on 40,000 simulations, the midpoint estimate of the model provided by Nate Silver and FiveThirtyEight put Biden ahead by 348 electoral college votes to 190 for Trump, a margin of 158. The New Statesman model made it 339 votes to 199 in favour of Biden. The Economist’s model was even more lopsided in favour of Biden, estimating that he would prevail by 356 electoral votes to 182. Taking the unweighted mean of all three forecasting models, Biden was projected to win 348 votes in the electoral college to 190 for Trump.
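The unweighted mean is simple arithmetic over the three projections just named, and can be checked directly:

```python
# Electoral-vote projections cited above, as (Biden, Trump) pairs.
models = {
    "FiveThirtyEight": (348, 190),
    "New Statesman": (339, 199),
    "The Economist": (356, 182),
}

biden_mean = round(sum(b for b, _ in models.values()) / len(models))
trump_mean = round(sum(t for _, t in models.values()) / len(models))
print(biden_mean, trump_mean)  # 348 190
```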
The other go-to place for expert opinion with a long track record of solid performance (except in 2016) is Sabato’s Crystal Ball based at the University of Virginia’s Center for Politics. It was projecting Biden to win the electoral college by 321 votes to 217. The PollyVote project goes a step further, combining information contained in betting markets with forecasting models, experts and beyond. It forecast a Biden victory by 329 electoral votes to 209.
Last bets please
When the dust had finally settled, literally and figuratively, Biden ended up with 306 votes in the electoral college to 232 for Trump. As such, the betting spreads were almost spot on. In fact, both these numbers were within the spreads offered on election day.
What this tells us is that the betting and prediction markets, which respond to the weight of money traded on each candidate, and are informed by considerable professional insight, recovered in 2020 a reputation dating back to at least 1868, and in the case of the Papal betting markets as far back as 1503.
Interestingly, a couple of weeks after declaration of all election results, Trump still merited a 7.8% chance of clinging on to office, according to the betting exchange trading. This factored in all the ways he might seek to reverse the declared results. January 6th was still a little while off.
I asked at the time whether it was likely that he would prevail over all established custom and evidence. Not at all, I ventured. Was it possible? Yes, I thought it was. In the event, 7.8% was, at the time, probably about right.
A Balanced View
As a long-time follower of political forecasts, and creator of some, I’ve often found Nate Silver’s insights both intriguing and valuable. Silver, the founder of FiveThirtyEight and now running his own platform, The Silver Bulletin, has built a reputation as one of the most influential data analysts in politics. But with the 2024 U.S. presidential election approaching, I find myself asking: Can we still trust Nate Silver’s predictions?
Understanding Why Polling Aggregators Differ
To answer this question, it’s important to understand why different polling aggregators might come to different conclusions, even when using similar data. Most aggregators, such as FiveThirtyEight, The Economist, and The Silver Bulletin, rely on state-level polls, national polls, approval ratings, and economic fundamentals to make their predictions.
However, the differences arise in two critical areas of judgment:
- How state-level polls are aggregated to determine each candidate’s chances of winning a state’s electoral votes. In a tight race like 2024, small adjustments in key states can significantly affect the overall probability.
- How these state-level probabilities are aggregated into an overall probability of winning the election, especially considering correlated polling errors across states. This was crucial in 2016 when correlated polling errors across states led to a surprise outcome.
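The effect of the second judgment, correlated polling errors, can be illustrated with a toy Monte Carlo. The states, margins, and electoral-vote numbers below are invented purely for illustration; the point is only that a shared error term, applied to every state at once, changes the probability of sweeping (or losing) all the battlegrounds together:

```python
import random

# Hypothetical battlegrounds: (polled margin in points, electoral votes).
BATTLEGROUNDS = {"A": (0.5, 19), "B": (1.0, 15), "C": (0.8, 10)}
SAFE_EV = 226   # electoral votes assumed already safe for the candidate
NEEDED = 270

def win_probability(trials=50_000, shared_sd=2.5, state_sd=2.0, seed=42):
    """Share of simulations in which the candidate reaches 270 electoral votes.
    'shared_sd' is a national polling error that hits every state at once."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        shared = rng.gauss(0, shared_sd)
        ev = SAFE_EV
        for margin, votes in BATTLEGROUNDS.values():
            if margin + shared + rng.gauss(0, state_sd) > 0:
                ev += votes
        wins += ev >= NEEDED
    return wins / trials

# With correlated errors, winning (or losing) all three states together
# becomes more likely than under independent state-level errors.
print(win_probability(shared_sd=0.0), win_probability(shared_sd=2.5))
```

Here the candidate needs all three hypothetical battlegrounds, so correlation between state errors raises the win probability; in other configurations it can just as easily lower it, which is why this judgment moves forecasts so much.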
Silver’s approach has drawn attention this cycle because of how it handles these judgments, and the resulting forecasts sometimes differ significantly from other models.
The “Convention Bounce” Adjustment: Reasonable or Overdone?
One of the most debated aspects of Silver’s current methodology is his use of a “convention bounce” adjustment. This adjustment accounts for the temporary bump candidates often receive in the polls following their party conventions. Silver argues that polls taken just after the Democratic National Convention (DNC) might temporarily overstate Kamala Harris’s support, so his model includes an adjustment to correct for this.
On the surface, this seems like a logical adjustment. However, it has sparked some debate. For example, Silver’s model recently moved more than three percentage points against Harris during a period when no significant new data emerged—no major events, no new battleground state polls, no economic changes. While other models, like The Economist’s or Decision Desk HQ’s, remained stable during the same period, reflecting the lack of new information, Silver’s swung noticeably.
This has led some observers to question whether the “convention bounce” adjustment is appropriately calibrated. Silver defends the adjustment as “highly defensible” and notes that it will phase out gradually, but this episode does raise questions about how such adjustments impact the overall forecast.
Is Silver’s Model Overly Reactive to Limited Information?
Another critique is that Silver’s model appears unusually reactive, responding to minor changes in data with relatively large movements. In a situation where there’s no significant new information, most models would show minimal movement. Yet, Silver’s has shifted markedly at times.
Silver’s supporters might argue that his model is designed to be highly responsive to any new data, reflecting the inherent uncertainty and volatility of the electoral landscape. However, this reactivity has led to speculation that the model might be more prone to showing movement, potentially to maintain engagement, even when the underlying data doesn’t fully justify such shifts.
The underlying question remains: Is this reactivity a strength, showing that Silver’s model adapts quickly to any new information, or a weakness, suggesting it might overemphasise minor fluctuations?
The Electoral College Focus: An Essential Consideration or Overplayed?
Silver also places considerable emphasis on the challenges posed by the Electoral College. He points out that even if Harris wins the popular vote by a few points, she could still lose the presidency due to the distribution of votes across key states.
This focus on the Electoral College is certainly valid; history has shown that a candidate can win the popular vote but lose the presidency. However, some critics question whether Silver’s emphasis on this factor might be overstating its impact compared to other models. Every major aggregator considers the Electoral College in their forecasts, but Silver’s model arguably amplifies its importance relative to the others.
Here, too, there are two sides. Silver’s supporters argue that his focus on the Electoral College is a necessary reminder of how U.S. elections actually work, ensuring we don’t overlook its impact. On the other hand, some might see it as a way to make his model appear more unique or insightful than it truly is.
Conclusion: Should We Trust Nate Silver’s Forecast in 2024?
So, where does that leave us? Can we trust Nate Silver’s model for the 2024 election? On the one hand, Silver’s experience, together with his focus on “tail risks” (scenarios where less likely but still possible outcomes occur) and on correlated polling errors, makes his forecasts a valuable tool for understanding the electoral landscape. On the other hand, some of his adjustments and the model’s reactivity have raised questions about whether his approach this cycle might be about engagement as much as precision.
Ultimately, it may be wise to consider multiple perspectives. Nate Silver’s forecasts should certainly be a part of that consideration, but not the only voice in the conversation. Given the complexities of this election, looking at what other models and aggregators are saying—and understanding their different methodologies and assumptions—might offer a more balanced view.
Looking Ahead
As we get closer to Election Day, it will be interesting to see how Silver’s model and others evolve with new data. Staying informed through multiple sources and understanding their approaches will be key to navigating this race. Ultimately, no single model is likely to have all the answers, or even to be asking all the right questions.
Exploring the Four Card Problem
A version of this article appears in TWISTED LOGIC: Puzzles, Paradoxes, and Big Questions. By Leighton Vaughan Williams. Chapman & Hall/CRC Press. 2024.
The Four Card Problem
The Four Card Problem, also known as the Wason selection task, is a captivating puzzle that tests our logical reasoning abilities. Invented by Peter Cathcart Wason, this task challenges us to determine the minimum number of cards required to verify or falsify a given statement. Let’s look deeper into this intriguing problem.
The Scenario: Card Setup
Imagine being presented with four cards, each displaying either a letter or a number. These cards lay the foundation for the puzzle, providing the information necessary to reach a conclusion. Let’s examine an example:
The face-up sides of the cards show: 23; 28; R; B
Each card has a letter on one side and a number on the other side.
Alongside these cards, you are given a statement: ‘Every card with 28 on one side has R on the other side’.
Determining the Minimum Number of Cards
Now, the crucial question arises: How many cards must you turn over to determine the truthfulness of the given statement? And which specific cards should you investigate?
Common Misconceptions
At first glance, the task might appear deceptively simple. Many individuals are inclined to turn over the R card, assuming it holds the key to verifying the statement. However, this line of thinking is misguided. Regardless of what is on the other side of the R card, it does not contribute to determining whether every card with 28 on one side has R on the other.
Similarly, the inclination to turn over the 23 card is also misleading. Even if the 23 card reveals an R on its other side, this provides no insight into the truthfulness of the statement. At most, an R behind the 23 card would falsify a different statement (‘Every card with 23 on one side has B on the other side’); it sheds no light on the claim about 28 and R.
The Key to Solving the Puzzle: Logical Analysis
To arrive at the correct solution, we must identify the cards that have the potential to disprove the given statement. The crucial observation lies in recognising that only a card displaying 28 on one side and something other than R on the other side can invalidate the statement.
In this scenario, the cards we need to focus on are the 28 card and the B card. Let’s explore the reasoning behind this.
The Correct Solution: Minimum Number of Cards
The Card with 28 on Its Face-Up Side: This is the most direct test of the statement. If the other side is not R, the statement is false.
The Card with B on Its Face-Up Side: This card needs to be checked because if the other side is 28, it would contradict the statement. The statement only mentions what is on the other side of 28, not what is on the other side of R.
The cards with 23 and R on their face-up sides do not need to be checked. The card with 23 is irrelevant to the rule, which only concerns 28. The card with R does not need to be checked because the rule does not specify what should be on the other side of R.
So, you only need to turn over two cards: the one showing 28 and the one showing B.
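This logic can be checked exhaustively. The short sketch below enumerates every possible hidden side and asks, for each visible face, whether any hidden side could falsify the rule; only the 28 card and the B card come out as worth turning:

```python
LETTERS = ["R", "B"]
NUMBERS = [23, 28]

def rule_holds(number, letter):
    """The rule under test: if a card has 28 on one side, it has R on the other."""
    return number != 28 or letter == "R"

def worth_turning(visible):
    """A card is worth turning over only if some hidden side could falsify the rule."""
    if isinstance(visible, int):  # number showing, letter hidden
        return any(not rule_holds(visible, letter) for letter in LETTERS)
    return any(not rule_holds(number, visible) for number in NUMBERS)

for face in [23, 28, "R", "B"]:
    print(face, worth_turning(face))
# Only 28 and B print True.
```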
Conclusion: Thinking beyond Initial Assumptions
The Wason selection task, or the Four Card Problem, immerses us in the intricacies of logical analysis and conditional reasoning. By identifying the two necessary cards to flip, the 28 and the B, we confront the task’s real challenge, and learn the importance of testing for falsification rather than confirmation.
The puzzle serves as a powerful reminder of the complexities that lie beneath seemingly simple tasks and the importance of careful analysis when engaging in logical problem-solving. It challenges us to think beyond initial assumptions and consider the logical implications hidden within the given information. As such, it is a clear reminder of the complexities hidden within seemingly straightforward problems and the value of meticulous analysis in navigating the world of logic.
Card Counting: A Winning Strategy in Blackjack
In 1962, Ed Thorp introduced a strategy that would forever change the landscape of blackjack: card counting. His book, Beat the Dealer: A Winning Strategy for the Game of Twenty-One, presented a system based on probability theory that allowed players to gain an advantage over the house. Since then, card counting has become a topic of fascination for blackjack players worldwide.
Understanding the Basics of Blackjack
To grasp the significance of card counting, it’s essential to understand the fundamentals of blackjack. The basic objective of the game is simple: players aim to draw cards that beat the dealer’s hand without exceeding a total of 21. While basic strategy provides players with a foundation for optimal gameplay, card counting takes it a step further by incorporating the knowledge of which cards have already been dealt.
The Concept of Card Counting
Card counting revolves around the concept that certain cards have a different impact on the game’s outcome than others. By using a system to estimate the ratio of high and low cards still in the deck, the technique allows players to adjust their betting and playing decisions based on the remaining composition of the deck.
Popular Card Counting Systems
Several card counting systems have been developed over the years, each with its own approach to assigning values to the cards. Here are a few notable examples:
1. Hi-Lo Count: The Hi-Lo Count is one of the simplest and most popular card counting systems. It assigns a tag of +1 to low cards (2–6), a tag of 0 to neutral cards (7–9), and a tag of −1 to high cards (10–Ace). By maintaining a running count based on these tags, players can assess the overall composition of the remaining deck.
2. KO Count: The Knock-Out (KO) Count is another popular system. It assigns a tag of +1 to cards 2 through 7 and a tag of −1 to 10s through Aces, while 8s and 9s are treated as neutral (tag 0). Because the +1 tags outnumber the −1 tags, the KO Count is an ‘unbalanced’ system, designed to spare players the running-to-true-count conversion that balanced counts require.
3. Hi-Opt Systems: Hi-Opt systems, such as the Hi-Opt I and Hi-Opt II, aim to provide a more accurate assessment of the deck’s composition by considering more card values.
4. Zen Count: The Zen Count system is known for its precision in tracking the deck’s composition. It assigns a variety of values to different cards, creating a more detailed count. This system, while more complex than the other systems, can offer a greater edge to skilled players.
Additional Considerations: It’s crucial to understand that these systems vary in complexity and suitability for different players. Advanced systems like the Zen Count may offer more accuracy, but they require more practice and skill. Additionally, systems may require converting the ‘running count’ into a ‘true count’ by accounting for the number of decks remaining in the shoe. This adjustment helps in accurately determining the player’s edge.
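As a minimal sketch of the Hi-Lo tags and the running-to-true-count conversion described above (the variable names and example hand are illustrative):

```python
# Hi-Lo tags: +1 for 2-6, 0 for 7-9, -1 for 10 through Ace.
HI_LO = {rank: +1 for rank in ["2", "3", "4", "5", "6"]}
HI_LO.update({rank: 0 for rank in ["7", "8", "9"]})
HI_LO.update({rank: -1 for rank in ["10", "J", "Q", "K", "A"]})

def running_count(cards_seen):
    """Sum the Hi-Lo tags of every card dealt so far."""
    return sum(HI_LO[card] for card in cards_seen)

def true_count(running, decks_remaining):
    """Normalise the running count by the number of decks left in the shoe."""
    return running / decks_remaining

seen = ["2", "5", "K", "A", "6", "9"]  # tags: +1 +1 -1 -1 +1 0
print(running_count(seen))             # 1
print(true_count(6, 3))                # 2.0 -> favourable shoe, bigger bets
```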
Making Informed Decisions
By monitoring the running count and employing the chosen card counting system, players can make in-running staking decisions. When the count indicates an abundance of high cards in the remaining deck or decks, this is generally good for the player, bad for the house. In this case, players may choose to increase the size of their bets. Conversely, when the count indicates a higher proportion of low cards remaining in the deck, players may opt for smaller bets and more conservative gameplay.
Challenges and Countermeasures
Casinos are well aware of card counting strategies and have implemented various countermeasures to detect and deter such activities. They employ techniques such as automatic shuffling machines, frequent deck changes, and trained personnel to identify suspected card counters. Consequently, players who employ card counting techniques also employ camouflage methods to avoid detection. This involves blending in with other players, varying bet sizes, acting like a casual player, and avoiding suspicious behaviour.
The Evolution of Card Counting
Over the years, card counting has evolved alongside advancements in technology and changes in casino practices. The rise of online blackjack games and continuous shuffling machines (CSMs) has posed new challenges for card counters. Online casinos employ random number generators (RNGs), making it impossible to track specific cards. CSMs continuously shuffle the cards, eliminating any opportunity to gain an advantage through card counting.
Conclusion: Beating the Odds
Card counting revolutionised the game of blackjack by providing players with a mathematical strategy to gain an edge over the house. However, it requires skill and practice to implement while evading detection. Still, card counting remains a challenging yet fascinating aspect of blackjack gameplay, and players can in principle adapt their techniques to the countermeasures employed by casinos. It continues to captivate players who seek to test their skills and beat the odds at the blackjack table.
A version of this article appears in TWISTED LOGIC: Puzzles, Paradoxes, and Big Questions. By Leighton Vaughan Williams, Chapman & Hall/CRC Press. 2024.
Introduction
Born in the 5th century BC in Elea (a Greek colony in southern Italy), Zeno of Elea is one of the most intriguing figures in the field of philosophy. Zeno’s paradoxes are a set of problems generally involving distance or motion. While there are many paradoxes attributed to Zeno, the most famous ones revolve around motion and are extensively discussed by Aristotle in his work, ‘Physics’. These paradoxes include the Dichotomy paradox (that motion can never start), the Achilles and the Tortoise paradox (that a faster runner can never overtake a slower one), and the Arrow paradox (that an arrow in flight is always at rest). Through these paradoxes, Zeno sought to show that our common-sense understanding of motion and change was flawed and that reality was far more complex and counterintuitive.
The Achilles and the Tortoise paradox, as one example, uses a simple footrace to question our understanding of space, time, and motion. While it’s clear in real life that a faster runner can surpass a slower one given enough time, Zeno uses the race to craft an argument where Achilles, no matter how fast he runs, can never pass a tortoise that has a head start. This thought experiment forms a remarkable philosophical argument that challenges our perceptions of reality and creates a fascinating paradox that continues to engage scholars to this day.
These paradoxes might seem simple, but they invite us into deep philosophical waters, questioning our perception of reality and illustrating the complexity of concepts we take for granted like motion, time, and distance. In this way, Zeno’s contributions continue to have profound relevance in philosophical and scientific debates, encouraging us to critically explore the world around us.
The Paradox of the Tortoise and Achilles
In one version of this paradox, a tortoise is given a 100-metre head start in a race against the Greek hero Achilles. Despite Achilles moving faster than the tortoise, the paradox argues that Achilles can never overtake the tortoise. As Aristotle recounts it, ‘In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead’.
The Underlying Infinite Process
The force of this paradox lies in the infinite process Zeno presents. When Achilles reaches the tortoise’s original position, the tortoise has already moved a bit further. By the time Achilles reaches this new position, the tortoise has again advanced. This sequence of Achilles reaching the tortoise’s previous position and the tortoise moving further seems to continue indefinitely, suggesting an infinite process without a final, finite step. Zeno argues that this eternal chasing renders Achilles incapable of ever catching the tortoise.
A Mathematical Solution to the Paradox
The resolution to Zeno’s paradox lies in the mathematical understanding of infinite series. Using a stylised scenario where Achilles is just twice as fast as the tortoise (it’s a very quick tortoise!), we define the total distance Achilles runs (S) as an infinite series: S = 1 (the head start of the tortoise) + 1/2 (the distance the tortoise travels while Achilles covers the head start) + 1/4 + 1/8 + 1/16 + 1/32 …
By mathematical properties of geometric series, this infinite series sums to a finite value. In other words, despite there being infinitely many terms, their sum is finite: S = 2. Hence, Achilles catches the tortoise after running 200 metres, demonstrating how an infinite process can indeed have a finite conclusion.
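The convergence is easy to verify numerically: partial sums of Zeno’s series climb toward, but never exceed, the limit of 2 (measured in units of the tortoise’s head start):

```python
def partial_sum(n_terms):
    """Sum of the first n terms of the series 1 + 1/2 + 1/4 + ..."""
    return sum(0.5 ** k for k in range(n_terms))

for n in (1, 2, 5, 10, 20):
    print(n, partial_sum(n))
# The partial sums approach 2 but never reach it in finitely many terms.
```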
Philosophical Implications: Is an Infinite Process Truly Resolved?
Zeno’s paradoxes, while they might be resolved mathematically, open a Pandora’s box of philosophical questions, particularly concerning the nature of infinity and the real-world interpretation of mathematical abstractions. How can a seemingly infinite process with no apparent final step culminate in a finite outcome?
The Thomson’s Lamp thought experiment, proposed by philosopher James F. Thomson, provides an insightful analogy. Imagine you have a lamp that you can switch on and off at decreasing intervals: on after one minute, off after half a minute, on after a quarter minute, and so forth, with each interval being half the duration of the previous one. Mathematically, the total time taken for this infinite sequence of events is two minutes. However, a critical philosophical question emerges at the end of the two minutes: is the lamp in the on or off state?
This question is surprisingly complex. On the one hand, you might argue that the lamp must be in some state, either on or off. However, there is no finite time at which the final switch event takes place, given the infinite sequence of switching. Hence, the state of the lamp appears indeterminate, raising questions about the applicability of infinite processes in the physical world. More prosaically, of course, you may just have blown the bulb!
This conundrum mirrors the situation in Zeno’s paradox of Achilles and the Tortoise. Just as the state of Thomson’s Lamp after the two-minute mark seems ambiguous, so does the concept of Achilles catching the tortoise after an infinite number of stages. While mathematics gives us a definitive point at which Achilles overtakes the tortoise, the philosophical interpretation of reaching this point through an infinite process is not as clear-cut.
The Thomson’s Lamp thought experiment highlights that while we can use mathematical tools to deal with infinities, interpreting these results in our finite and discrete physical world can be philosophically challenging. It reminds us that philosophy and mathematics, while often harmonious, can sometimes offer different perspectives on complex concepts like infinity, sparking ongoing debates that fuel both fields.
Zeno’s Paradoxes, the Quantum World, and Relativity
Zeno’s paradoxes, which have puzzled thinkers for millennia, find surprising echoes in the realms of quantum mechanics and the theory of relativity, two foundational components of modern physics. These paradoxes, originally aimed at challenging the coherence of motion and time, intersect with quantum and relativistic concepts in thought-provoking ways.
In quantum mechanics, the principle of superposition allows particles to exist in multiple states at once until observed. This phenomenon reflects the essence of Zeno’s Arrow Paradox, where an arrow in flight is paradoxically motionless at any instant. This comparison highlights how quantum theory disrupts traditional views on motion, suggesting that at a microscopic level, movement doesn’t conform to our standard or philosophical expectations.
Meanwhile, the theory of relativity introduces the concept of time dilation, where time appears to ‘slow down’ for an object moving at speeds close to the speed of light. This idea provides a modern perspective on Zeno’s Dichotomy Paradox, which argues that motion is impossible due to the infinite divisibility of time and space. Through relativity, we see that motion and time are relative, not absolute, concepts – illustrating a deep connection to Zeno’s philosophical challenges, even after two millennia.
Conclusion: Philosophical Debate and Contemporary Relevance
Contemporary philosophers continue to grapple with Zeno’s paradoxes, not only as historical curiosities but also as fundamental challenges to our understanding of reality. These paradoxes force us to reconsider how we conceptualise time, space, and motion. They remind us that our intuitive grasp of the world is often at odds with its underlying complexities. In today’s world, where scientific and technological advancements continually push the boundaries of what we understand, Zeno’s paradoxes remain as relevant as ever, reminding us of the enduring power and limits of human reason and the ongoing journey to comprehend the universe in which we live.
