
Base Rate Fallacy – in a nutshell.

The Base Rate Fallacy occurs when we disregard or undervalue prior information when judging how likely something is. In particular, if presented with related base rate information (i.e. generic, general information) and specific information (information pertaining only to a certain case), the fallacy arises from a tendency to focus on the latter at the expense of the former.

For example, if we are informed that someone is an avid book-lover, we might think it more likely that they are a librarian than a nurse. There are, however, many more nurses than librarians. In this example, we have not taken sufficient account of the base rate: the number of nurses relative to librarians.

Now consider testing for a medical condition which affects 2% of the population. Assume there is a test which will correctly identify someone who has the condition 95% of the time (the test’s sensitivity). If someone does not have the condition, the test will correctly identify them as being clear of the condition 80% of the time (the test’s specificity).

Now consider testing a random group of people. Of the 2% who are suffering from the condition, 95% will be correctly diagnosed with it, whereas of the 98% who do not have the condition, 20% will be incorrectly diagnosed as having it (19.6% of the whole population).

What this means is that 21.5% of the population (0.95 x 2% + 0.2 x 98% = 1.9% + 19.6%) will be diagnosed with the condition, of whom only 1.9 percentage points actually have it. So the probability that someone who tests positive actually has the condition is 1.9/21.5, i.e. about 8.8%.
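To make the arithmetic concrete, here is a minimal Python sketch of the underlying Bayes’ theorem calculation (the function and argument names are illustrative only):

```python
def p_condition_given_positive(prevalence, sensitivity, specificity):
    """P(condition | positive test), by Bayes' theorem."""
    true_positives = prevalence * sensitivity                # 0.02 * 0.95 = 0.019
    false_positives = (1 - prevalence) * (1 - specificity)   # 0.98 * 0.20 = 0.196
    return true_positives / (true_positives + false_positives)

# The example above: 2% prevalence, 95% sensitivity, 80% specificity.
print(round(p_condition_given_positive(0.02, 0.95, 0.80), 3))  # 0.088
```

The same function can be used to check the exercise below.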

Exercise

Consider testing for a medical condition which affects 4% of the population. Assume there is a test which will correctly identify someone who has the condition 90% of the time. If someone does not have the condition, the test will correctly identify them as being clear of the condition 90% of the time.

If someone tests positive for the condition, what is the probability that they have the condition?


Reading and Links

Base Rate Fallacy. In: Paradoxes of probability and other statistical strangeness. UTS, 5 April, 2017. S. Woodcock. http://newsroom.uts.edu.au/news/2017/04/paradoxes-probability-and-other-statistical-strangeness

Tversky, A. and Kahneman, D. Evidential Impact of Base Rates. In: Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press. https://www.cambridge.org/core/books/judgment-under-uncertainty/evidential-impact-of-base-rates/CC35C9E390727085713C4E6D0D1D4633

Base Rate Fallacy. Wikipedia. https://en.wikipedia.org/wiki/Base_rate_fallacy

Prediction markets and political forecasting – evidence to House of Lords

Professor Leighton Vaughan Williams – Written evidence (PPD0024)

Available at: http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/political-polling-and-digital-media-committee/political-polling-and-digital-media/written/72373.pdf

1. In this evidence, I consider the relationship between political betting and political opinion polls, and highlight peer-reviewed research I have undertaken into this. I also reference some other published work of mine on opinion polling and political forecasting more generally. Research I have undertaken into the impact of the dissemination of information via social media is also highlighted.
2. The recorded history of election betting markets can be traced as far back as 1868 for US presidential elections (Rhode and Strumpf, 2013) and 1503 for papal conclaves. Between 1868 and 2012, no clear favourite for the White House had lost the presidential election other than in 1948, when longshot Harry Truman defeated his Republican rival, Thomas Dewey. 2016 can be added to that list, following the defeat of strong favourite Hillary Clinton in the Electoral College.
3. The record of the betting markets in predicting the outcome of papal conclaves is somewhat more chequered and is considered in Vaughan Williams and Paton (2015) in which I examine, with my co-author Professor David Paton, the success of papal betting markets historically.
4. The potential of the betting markets and prediction markets (markets created specifically to provide forecasts) to assimilate collective knowledge and wisdom has increased in recent years as the volume of money wagered and the number of market participants have soared. Betting exchanges alone now see tens of millions of pounds trading on a single election.
5. An argument made for the value of betting markets in predicting the probable outcome of elections is that the collective wisdom of many people is greater than that of the few. We might also expect that those who know more, and are better able to process the available information, would on average tend to bet more.
6. The lower the transaction costs (the betting public have not paid tax on their bets in the UK since 2001, and margins have fallen since the advent of betting exchanges) and the lower the costs of accessing and processing information (through the development of the Internet and search engines), the more efficient we might expect betting markets to become in translating information into forecasts. Modern betting markets might be expected for these reasons to provide better forecasts than ever.
7. There is plenty of anecdotal evidence about the accuracy of political betting markets, especially compared to the polls. The 1985 by-election in Brecon and Radnor is a classic example. On Election Day, July 4th, an opinion poll undertaken by the Mori polling organisation was published which gave Labour a commanding lead of 18 percent over the Liberal Alliance candidate. Ladbrokes simultaneously made the Liberal the 4/7 favourite. The Liberal won.
8. Fast forward 20 years to a BBC World Service live radio debate in 2005, in the run-up to the UK general election, when forecasts were swapped between the Mori representative and myself on the likely outcome of the election. I predicted a Labour majority of about 60, as I had done a few days earlier in the Economist (Economist, April 14th, 2005) and on BBC Radio 4 Today (April 18th, 2005), based on the betting at the time. The Mori representative predicted a Labour majority of over 100 based on their polling. The actual majority was 66.
9. More recent anecdotal evidence comes from the 2012 US presidential election. Barack Obama was the heavy favourite to win, while the average of the pollsters had the popular vote within 0.7%, and two leading polling organisations, Gallup and Rasmussen, had Mitt Romney ahead in final polls. Obama won by 3.9%.
10. During the later stages of the 2014 Scottish independence referendum campaign, the polling average had it relatively close (especially compared with the actual result), with more than one poll calling it for independence (one by 7%). The betting odds were always very strongly in favour of Scotland staying in the UK. The result echoed the 1995 Quebec separation referendum in Canada. There the final polling showed ‘Yes to separation’ with a six point lead; in the event, ‘No to separation’ won by one point. Some credit this tendency of a late swing to the ‘status quo’ for the betting markets’ confidence in a ‘No’ outcome in Scotland.
11. In the 2015 general election in Israel, final polls showed Netanyahu’s Likud party trailing the main opposition party by 4% (Channel 2, Channel 10, Jerusalem Post), by 3% (Channel 1) and by 2% (Teleseker/Walla). Meanwhile, Israel’s Channel 2 television news on Election Day featured the odds on the online prediction market site, Predictwise. This gave Netanyahu an 80% chance of winning. The next day, Netanyahu declared that he had won “against the odds.” He actually won against the polls.
12. Polling averages during the 2015 UK general election campaign often showed Conservatives and Labour very close in terms of vote share. Meanwhile, the betting odds always had ‘Conservative most seats’ as short odds-on. On the Monday before polling day, for example, the polling average had it essentially tied in terms of vote share, while ‘Conservatives to win most seats’ was trading on the markets as short as 1/6.

13. For the 2015 Irish same-sex marriage referendum, the spread betting markets were offering a mid-point of 60% for YES to same-sex marriage, and 40% for NO. The average of the final opinion polls had YES on 71% and NO on 29%. The final result was 62%-38% for YES, much closer to the projection from the markets.
14. If this anecdotal evidence is correct, it is natural to ask why the betting markets outperform the opinion polls in terms of forecast accuracy. One obvious reason is that there is an asymmetry. People who bet in significant sums on an election outcome will usually have access to the polling evidence, while opinion polls do not take account of information contained in the betting odds (though the opinions expressed might). Sophisticated political bettors also take into account the past experience of how good different pollsters are, what tends to happen to those who are undecided when they actually vote, differential turnout of voters, what might drive the agenda between the dates of the polling surveys and election day itself, and so on. All of this can in principle be captured in the markets.
15. Pollsters, except perhaps with their final polls, tend to claim that they are not producing a forecast, but a snapshot of opinion. In contrast, the betting markets are generating odds about the final result. Moreover, the polls are used by those trading the markets to improve their forecasts, so they are a valuable input. But they are only one input. Those betting in the markets have access to much other information as well including, for example, informed political analysis, statistical modelling, focus groups and on-the-ground information including local canvass returns.
16. To test the reliability of the anecdotal evidence pointing to the superior forecasting performance of the betting markets over the polls, I collected vast data sets of every matched contract placed on two leading betting exchanges and from a dedicated prediction market for US elections since 2000. Data collection spanned more than 900 days before the 2008 election alone and, to indicate the scale, a single data set comprised 411,858 observations from one exchange for that year alone. Data was derived notably from presidential elections at national and state level, Senate elections, House elections and elections for Governor and Mayor. Democrat and Republican selection primaries were also included. Information was collected on the polling company, the length of time over which the poll was conducted, and the type of poll.
17. My co-author, Dr. James Reade, and I compared the betting over the entire period with the opinion polls published over that period, and also with expert opinion and a statistical model.
18. In a paper titled ‘Forecasting Elections’ (Vaughan Williams and Reade, 2016b), published in the Journal of Forecasting (see also Vaughan Williams and Reade, 2017, 2015), we specifically assessed opinion polls, prediction and betting markets, expert opinion and statistical modelling over this vast data set of elections in order to determine which performed better in terms of forecasting outcomes. We considered accuracy, bias and precision over different time horizons before an election.
19. A very simple measure of accuracy is the percentage of correct forecasts, i.e. how often a forecast correctly predicts the election outcome.
20. A related but distinctly different concept to accuracy is unbiasedness. An unbiased vote share forecast is, on average, equal to the true vote share outcome. An unbiased probability forecast is also, on average, equal to the true probability that the candidate wins the election. Forecasts that are accurate can also be biased, provided the bias is in the correct direction. If polls are consistently upward biased for candidates that eventually win, then despite being biased they will be very accurate in predicting the outcome, whereas polls that are consistently downward biased for candidates that eventually win will be very inaccurate as well as biased.
21. We also identified the precision of the forecasts, which relates to the spread of the forecasts.
22. We found that the betting/prediction markets provided the most accurate and precise forecasts over these different time horizons, and were similar in terms of bias to opinion polls. We also found that betting/prediction market forecasts tended to improve as the elections approached, whereas the opinion polls tended to perform worse.
23. In Brown, Reade and Vaughan Williams (2017), we examine the precise impact of the release of information from a leading opinion polling company on the political betting markets. To do this, we use an extensive data set of over 25 million contracts that records (anonymised) individual trader IDs for the buyers and sellers of the contracts and align this to the exact time of release of this information. We find that polling releases by this prominent opinion pollster quickly influence trading volumes and market prices, but that experienced and more aggressive liquidity-taking traders bide their time before entering the market after such news events. We find that the market prices are not at their most informative in the immediate aftermath of a poll release.
24. We also conducted research into the impact of breaking news on the markets, notably via social media and live blogging. In Vaughan Williams and Paton (2015) we use an extensive data set of contracts matched on a leading betting exchange specifically regarding the outcome of the 2013 papal election. We found that genuine information released on Twitter was not reflected in the betting markets, and was only very partially incorporated when published later on the live blog of a major British newspaper. One possible explanation is that the information was not believed as it related to a closed-door conclave (Vaughan Williams, 2015a, considers closed-door forecasting in another context). However, this finding was consistent in some respects with evidence in Vaughan Williams and Reade (2016a) about the limited impact on a leading betting exchange of major breaking news in a UK general election when released on Twitter, at least until the news was validated by traditional media.
25. In summary, the overwhelming consensus of evidence prior to the 2015 UK General Election pointed to the success of political betting markets in predicting the outcome of elections. In contrast, the 2015 UK General Election, the 2016 EU referendum in the UK, the 2016 US presidential election and the 2017 UK election, all produced results that were a shock to the great majority of pollsters as well as to the betting markets. In each case, the longshot outcome (Conservative overall majority, Brexit, Trump, No overall majority) prevailed.
26. There are various theories as to why the polls and markets broke down in these recent big votes. One theory is based on the simple laws of probability. An 80% favourite can be expected to lose one time in five, if the odds are correct. In the long run, according to this explanation, things should balance out.
27. A second theory to explain recent surprise results is that something fundamental has changed in the way that information contained in political betting markets is perceived and processed. One interpretation is that the widespread success of the betting markets in forecasting election outcomes, and the publicity given to this, turned them into an accepted measure of the state of a race, creating a perception which was difficult to shift in response to new information. To this extent, the market prices led opinion rather than simply reflecting it. From this perspective, the prices in the markets became somewhat sticky.
28. A third theory is that conventional patterns of voting broke down in 2015 and subsequently, primarily due to unprecedented differential voter turnout patterns across key demographics, which were not correctly modelled in most of the polling and which were not picked up by those trading the betting markets.
29. There are other theories, which may be linked to the above, including the impact of social media, and manipulation of this, on voter perceptions and voting patterns.
30. I explore how well the pollsters, ‘expert opinion’, modellers, prediction and betting markets performed in the 2017 UK general election in Vaughan Williams (2017a) – “Report card: how well did UK election forecasters perform this time?” and explore the polling failure in the 2015 UK general election in Vaughan Williams (2015b) – “Why the polls got it so wrong in the British election”, and some implications in a follow-up article (Vaughan Williams, 2015c).

31. I explore how well the pollsters, ‘expert opinion’, modellers, prediction and betting markets performed in the 2016 US presidential election in Vaughan Williams (2016) – “The madness of crowds, polls and experts confirmed by Trump victory”, and the implications of turnout projections for opinion polling in Vaughan Williams, 2017b – “Election pollsters put their methods to the test – and turnout is the key.”
References
BBC Radio 4 Today, Are betting markets a better guide to election results than opinion polls? April 18th, 2005, 0740. http://www.bbc.co.uk/radio4/today/listenagain/listenagain_20050418.shtml
Brown, A., Reade, J.J. and Vaughan Williams, L. (2017), ‘When are Prediction Market Prices Most Informative?’ Working Paper.
Economist, Punters v pollsters. Are betting markets a better guide to election results than opinion polls? April 14th, 2005. http://www.economist.com/node/3868824
Rhode, P.W. and Strumpf, K. (2013), ‘The Long History of Political Betting Markets: An International Perspective’, in: The Oxford Handbook of the Economics of Gambling, ed. L. Vaughan Williams and D. Siegel, 560-588.
Vaughan Williams, L. (2017a), ‘Report card: how well did UK election forecasters perform this time?’ The Conversation, June 10. http://theconversation.com/report-card-how-well-did-uk-election-forecasters-perform-this-time-79237
Vaughan Williams, L. (2017b), ‘Election pollsters put their methods to the test – and turnout is the key’, The Conversation, June 2. http://theconversation.com/election-pollsters-put-their-methods-to-the-test-and-turnout-is-the-key-78778
Vaughan Williams, L. (2016), ‘The madness of crowds, polls and experts confirmed by Trump victory’, The Conversation, November 9. http://theconversation.com/the-madness-of-crowds-polls-and-experts-confirmed-by-trump-victory-68547
Vaughan Williams, L. (2015a), ‘Forecasting the decisions of the US Supreme Court: lessons from the ‘affordable care act’ judgment,’ The Journal of Prediction Markets, 9 (2), 64-78.
Vaughan Williams, L. (2015b), ‘Why the polls got it so wrong in the British election’, The Conversation, May 8. http://theconversation.com/why-the-polls-got-it-so-wrong-in-the-british-election-41530
Vaughan Williams, L. (2015c), ‘How looking at bad polls can show Labour how to win the next election’, The Conversation, May 20. http://theconversation.com/how-looking-at-bad-polls-can-show-labour-how-to-win-the-next-election-42065
Vaughan Williams, L. and Paton, D. (2015), ‘Forecasting the Outcome of Closed-Door Decisions: Evidence from 500 Years of Betting on Papal Conclaves’, Journal of Forecasting, 34 (5), 391-404.

Vaughan Williams, L. and Reade, J.J. (2016a), ‘Prediction Markets, Social Media and Information Efficiency’, Kyklos, 69 (3), 518-556.
Vaughan Williams, L. and Reade, J.J. (2016b), ‘Forecasting Elections’, Journal of Forecasting, 35 (4), 308-328.
Vaughan Williams, L. and Reade, J.J. (2017), ‘Polls to Probabilities: Prediction Markets and Opinion Polls’, Working Paper.
Vaughan Williams, L. and Reade, J.J. (2015), ‘Prediction Markets and Polls as Election Forecasts’, Working Paper.
31 October 2017

Multiple Comparisons Fallacy – in a nutshell.

When looking at many variables, it is easy to overlook how many possible correlations are being tested. Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has the potential to produce a “discovery.” For example, with a thousand variables, there are almost half a million (1,000 x 999/2) potential pairs of variables that might appear correlated by chance alone. While any individual pair is extremely unlikely to show dependence by chance, it is very possible that a large number of the half a million pairs will appear to be dependent.

Say, for example, 20 comparisons are made, each at a 95% confidence level. Even if there is no real effect anywhere, the chance of at least one false “discovery” is 1 – 0.95²⁰, or about 64%. This becomes a fallacy when such a false comparison is treated as a significant finding rather than as a statistical near-inevitability. The fallacy can be addressed by corrections for multiple testing, such as the Bonferroni adjustment.
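To see the arithmetic, here is a minimal Python sketch (the numbers mirror the example above; the Bonferroni line is one standard remedy, shown as an illustration):

```python
# Chance of at least one false "discovery" among 20 independent tests,
# each carried out at a 95% confidence level (5% significance).
m, alpha = 20, 0.05
print(round(1 - (1 - alpha) ** m, 2))      # 0.64

# One standard remedy: the Bonferroni correction, which tests each
# comparison at alpha / m instead.
print(round(1 - (1 - alpha / m) ** m, 3))  # ~0.049, back near 5% overall
```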

A classic example of the multiple comparisons fallacy is the Birthday Paradox. In a group of 23 people (assuming each of their birthdays is an independently chosen day of the year with all days equally likely), there is in fact a greater than 50 per cent chance that at least two of the group share the same birthday. This seems counter-intuitive, since it is rare to meet someone who shares a birthday with you. Indeed, if you select two random people, the chance that they share a birthday is only about 1 in 365. With 23 people, however, there are 253 (23 x 22/2) pairs of people who might share a birthday. By looking across the whole group, we are checking whether any one of these 253 pairings, each of which independently has only a tiny chance of coinciding, does indeed match. With so many possible pairs, it becomes more likely than not that a coincidental match arises. For a group of 40 people, it is more than eight times as likely that at least two share a birthday than that none do.
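The exact birthday probabilities are easy to verify; a minimal Python sketch, assuming 365 equally likely and independent birthdays:

```python
from math import prod

def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday,
    assuming 365 equally likely, independent birthdays."""
    p_all_distinct = prod((365 - k) / 365 for k in range(n))
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 3))  # 0.507 -- just over 50%
print(round(p_shared_birthday(40), 3))  # 0.891 -- a match is ~8x likelier than none
```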

References and Links

Multiple Comparisons Fallacy. In: Paradoxes of Probability and other statistical strangeness. The Conversation. Woodcock, S. April 4, 2017. https://theconversation.com/paradoxes-of-probability-and-other-statistical-strangeness-74440

Multiple Comparisons Fallacy. Logically Fallacious. https://www.logicallyfallacious.com/tools/lp/Bo/LogicalFallacies/130/Multiple-Comparisons-Fallacy

The Multiple Comparisons Fallacy. Fallacy Files. http://www.fallacyfiles.org/multcomp.html

The Misleading Effect of Noise: The Misleading Comparisons Problem. Koehrsen, W. Feb. 7, 2018. https://towardsdatascience.com/the-multiple-comparisons-problem-e5573e8b9578

Birthday Problem. Wikipedia. https://en.wikipedia.org/wiki/Birthday_problem

The Will Rogers Phenomenon – in a nutshell.

The Will Rogers Phenomenon occurs when transferring something from one group into another group raises the average of both groups, even though there has been no change in actual values. The name of the phenomenon is derived from a comment made by comedian Will Rogers that “when the Okies left Oklahoma and moved to California, they raised the average intelligence in both states”.

In moving a data point from one group into another, the Will Rogers phenomenon occurs if the point is below the average of the group it is leaving, but above the average of the one it is joining. In this case, the average of both groups will increase.

To take an example, consider six individuals whose life expectancies are assessed, in turn, as 5, 15, 25, 35, 45 and 55 years.

The individuals with an assessed life expectancy of 5 and 15 years respectively have been diagnosed with a particular medical condition. Those with the assessed life expectancies of 25, 35, 45 and 55 have not. So the mean life expectancy of those with the diagnosed condition is 10 years and those without is 40 years.

If diagnostic medical science now improves such that the individual with the 25 year life expectancy is now identified as suffering from the medical condition (previously this diagnosis was missed), then the mean life expectancy within the group diagnosed with the condition increases from 10 years to 15 years (5 + 15 + 25, divided by three). Simultaneously, the mean life expectancy of those not diagnosed with the condition rises by 5 years, from 40 years to 45 years (35 + 45 + 55, divided by three).

So, by moving a data point from one group into the other (undiagnosed into diagnosed), the average of both groups has increased, despite there being no change in actual values. This is because the point is below the average of the group it is leaving (25, compared to a group average of 40), but above the average of the one it is joining (25, compared to a group average of 10).
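The whole example can be reproduced in a few lines of Python (the numbers are those from the text; the variable names are illustrative):

```python
def mean(values):
    return sum(values) / len(values)

diagnosed = [5, 15]
undiagnosed = [25, 35, 45, 55]
print(mean(diagnosed), mean(undiagnosed))   # 10.0 40.0

# Improved diagnostics reclassify the 25-year case into the diagnosed group.
undiagnosed.remove(25)
diagnosed.append(25)
print(mean(diagnosed), mean(undiagnosed))   # 15.0 45.0 -- both means rise
```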

Exercise

Take the following groups of data, A and B.

A={10, 20, 30, 40}
B={50, 60, 70, 80, 90}

The arithmetic mean of A is 25, and the arithmetic mean of B is 70.

Show how transferring one data point from B to A can increase the mean of both.

Now take the following example:

A={10, 30, 50, 70, 90, 110, 130}
B={60, 80, 100, 120, 140, 160, 180}

By moving the data point 100 from B to A, what happens to the arithmetic mean of A and of B?

To demonstrate the Will Rogers Phenomenon, does the element which is moved have to be the very lowest of its set or does it simply have to lie between the arithmetic means of the two sets?

References and Links

The Will Rogers Phenomenon. Simple City. Dec. 1, 2012. https://richardelwes.co.uk/2012/12/01/the-will-rogers-phenomenon/

Will Rogers Phenomenon. Stats Mini Blog. Nov. 21, 2014. https://blogs.bmj.com/adc/2014/11/21/statsminiblog-will-rogers-phenomenon/

The “Will Rogers Phenomenon” lets you save lives by doing nothing. https://io9.gizmodo.com/the-will-rogers-phenomenon-lets-you-save-lives-by-doi-1443177486

Will Rogers Phenomenon. In: Paradoxes of Probability and Other Statistical Strangeness. Stephen Woodcock. May 26, 2017. https://quillette.com/2017/05/26/paradoxes-probability-statistical-strangeness/

Will Rogers Phenomenon. Wikipedia. https://en.m.wikipedia.org/wiki/Will_Rogers_phenomenon

Home Advantage Bias – Guide Notes.

Five influential articles published since 1982 have examined the key source of home advantage. All reach the same conclusion.

Jack Dowie’s article in New Scientist was a seminal piece. Dowie distinguishes the three Fs – fatigue, familiarity and fans – each of which might have contributed to home advantage.

Fatigue: In a sample of 40 years of data, Dowie looked for evidence that away teams’ performances drop off relative to home teams as the game progresses, as measured by the likelihood of scoring a goal at any given point during the course of the match. Away teams did score fewer goals, on average, than home teams, but this disparity got no worse as the game developed.

Familiarity: Is familiarity with the pitch a bonus for the home team? If this is a key factor, teams who are travelling from a similar pitch to the home team’s should be less disadvantaged than those who are travelling to a very different sort of pitch. One obvious way to test this is to ask whether teams who play on relatively big pitches have a particular statistical advantage when playing host to visitors whose own home ground boasts a small pitch, and vice versa. In fact, home advantage seemed to remain constant whatever the relative pitch sizes of hosts and visitors.

Fans: Is it the absolute number of fans, or is it the relative number of home and away fans? The data showed that the advantage conferred by playing at home was significantly greater for games played in the lower divisions than in the top division, even though the absolute number of supporters was much smaller in these games. Moreover, the advantage was much less in ‘local derbies.’ The conclusion is that the balance of support is what matters at the ground.

Nevill, Balmer and Williams looked into this further in 2002, showing 40 qualified referees video footage of 47 tackles from a Premiership match. The referees were divided into two groups, half of whom were exposed to the original soundtrack, while the other half listened to a silent version of the match. Neither group had access to the original referee’s decision. In actual matches, about 60% of bookings points (10 for a yellow, 25 for a red) are awarded to the visiting team. Those referees who watched the original soundtrack were reluctant to penalise the home team, judging 15% fewer of the tackles by home players to be fouls as compared to those referees who watched the silent footage. So in the absence of crowd noise the officials were more even-handed between the home and away sides. The original referees’ decisions, however, more accurately mirrored the behaviour of those armchair referees who had access to sound. It is as if, to get the crowd off their back, they wave play on.

In ‘Scorecasting’, Moskowitz and Wertheim (2011) compile further data to test a variety of popular theories explaining home advantage. They argue that when athletes play at home, they don’t seem to hit or pitch better in baseball … or pass better in football. The crowd doesn’t appear to be helping the home team or harming the visitors. They also checked scheduling bias against the away team, concluding that while this explains some of the home-field advantage, particularly in college sports, it’s irrelevant in many sports.

Thomas Dohmen looked at home advantage in the Bundesliga, the premier football league in Germany. Dohmen found that home advantage was smaller in stadiums that happened to have a running track surrounding the soccer pitch, and larger in stadiums without a track. Why? Apparently, when the crowd sits closer to the field, the officials are more susceptible to getting caught up in the home-crowd emotion. The social atmosphere in the stadium, he argues, leads referees into favouritism despite the fact that being impartial is optimal for them in career terms.

Here is the take of Steven Levitt and Stephen Dubner. “It’s worth noting that a soccer referee has more latitude to influence a game’s outcome than officials in other sports, which helps explain why the home-field advantage is greater in soccer, around the world, than in any other pro sport … officials don’t consciously decide to give the home team an advantage – but rather, being social creatures (and human beings) like the rest of us, they assimilate the emotion of the home crowd and, once in a while, make a call that makes a whole lot of close-by, noisy people very happy.”

References and Links

Dohmen, T.J. (2008). The Influence of Social Forces: Evidence from the Behavior of Soccer Referees. Economic Inquiry, 46, 3, 411-424. https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1465-7295.2007.00112.x

Dowie, J. Why Spain Should Win the World Cup, New Scientist, 1982, 94 (10), 693-695. https://books.google.co.uk/books?id=OFCXnqlSFKwC&pg=PA693&lpg=PA693&dq=why+spain+should+win+the+world+cup+dowie&source=bl&ots=YLnc7jJr9L&sig=ACfU3U0PEmuQAsgtRjXyo7J-1IDfmJ1VOg&hl=en&sa=X&ved=2ahUKEwjTqIjg28rhAhWBtXEKHRiXCZAQ6AEwDHoECAYQAQ#v=onepage&q=why%20spain%20should%20win%20the%20world%20cup%20dowie&f=false

Nevill, A.M., Balmer, N.J. and Williams, A.M. (2002), The influence of crowd noise and experience upon refereeing decisions in football, Psychology of Sport and Exercise, 3 (4), 261-272. https://www.sciencedirect.com/science/article/pii/S1469029201000334

Moskowitz, T.J. and Wertheim, L.J. (2011), Scorecasting. Random House.

Levitt, S.D. and Dubner, S.J. (2015), ‘When to Rob a Bank’, Penguin Books, pp. 211-12.

The Martingale Betting System – in a nutshell.

The martingale betting system is a strategy in which the gambler doubles the stake after every loss in an even-money game, such as a coin toss, so that the first win recovers all previous losses plus a profit equal to the original stake. The martingale strategy has been applied to roulette in particular, where the probability of hitting either red or black is close to 50 per cent.

Take the case of a gambler who wagers £2 on Heads, at even money, so profits by £2 if the coin lands Heads and loses £2 if it lands Tails. If he loses, he doubles the stake on the next bet, to £4, and wins £4 if it lands Heads, minus the £2 lost on the first bet, securing a net profit over both bets of £2 (£4 – £2). If it lands Tails again, however, he is £6 down, so he doubles the stake on the next bet to £8. If it lands Heads he wins £8, minus the £6 lost on the first two bets, securing a net profit over the three bets of £2 (£8 – £6). This can be generalized for any number of bets: whenever he wins, the gambler secures a net profit over all bets of £2.

The strategy is essentially, therefore, one of chasing losses. In the above example, the loss after n losing rounds is equal to 2 + 2² + 2³ + … + 2ⁿ = 2ⁿ⁺¹ – 2.

So the strategy is to stake (2ⁿ⁺¹ – 2) + 2 = 2ⁿ⁺¹ in the next round.

In this way, the profit whenever the coin lands Heads is 2.

For a gambler with infinite wealth, and hence an infinite number of coin tosses to eventually generate heads, the martingale betting strategy has been interpreted as a sure win.

However, the gambler’s expected value remains zero (or less than zero) because the small probability of a very large loss exactly balances out the expected gain. In a casino, the expected value is in fact negative, due to the house edge. There is also conventionally a house limit on bet size.

The martingale strategy fails, therefore, whenever there is a limit on earnings or on bets or bet size, as is the case in the real world. It is only with infinite or boundless wealth, bet size and time that it could be argued that the martingale becomes a winning strategy.
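A simulation illustrates this. The sketch below (the bankroll and stake sizes are illustrative assumptions) plays the +2 martingale on a fair coin with a finite bankroll: the player wins the vast majority of sessions, yet the average profit converges to zero because the rare busts are catastrophic.

```python
import random

def martingale_session(bankroll=1000, base_stake=2, p_win=0.5):
    """Double the stake after each loss; stop at the first win, or go
    bust once the next stake can no longer be afforded. Returns net profit."""
    stake, lost = base_stake, 0
    while lost + stake <= bankroll:
        if random.random() < p_win:
            return base_stake      # the win recovers all losses plus the base stake
        lost += stake
        stake *= 2                 # chase the losses
    return -lost                   # bust: the accumulated losses stand

random.seed(1)
results = [martingale_session() for _ in range(200_000)]
print(sum(r > 0 for r in results) / len(results))  # ~0.996 -- nearly always wins...
print(sum(results) / len(results))                 # ...yet mean profit is ~0
```

With a £1,000 bankroll and a £2 base stake, the player goes bust after eight straight losses, forfeiting £510 with probability 1/256, which exactly offsets all the £2 gains.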

Appendix

Probability of losing three fair coin tosses = 1/8

Probability of losing n times = 1/2ⁿ

Total loss with a starting stake of 2, after 3 losing coin tosses = 2 + 4 + 8 = 14.

So martingale strategy suggests a bet of 14 + 2 = 16.

Loss after n losing rounds = 2 + 2² + … + 2ⁿ

So martingale bet = (2 + 2² + … + 2ⁿ) + 2 = 2ⁿ⁺¹

This strategy always wins a net 2.

This strategy, of always betting to win more than has been lost so far, works in principle regardless of the odds, or whether they are fair. If each bet has a 1 in 10 chance of success, for example, the probability of 12 successive losses is about 28%, but the martingale strategy is to bet to win more on the 13th bet than the sum of losses to that point.

This holds so long as there is no finite stopping point at which the next martingale bet is not available (such as a maximum bet limit) or can’t be afforded.

So, let us assume that everyone has some number of losses after which they no longer have enough money to place the stake required in the next round to cover the sum of the losses to that point. Call this run of losses n.

n differs across people and could be very high or very low.

Probability of losing n times = 1/2ⁿ

Using a martingale +2 strategy, the player wins 2 provided the losing run ends before reaching n, i.e. while they can still afford to keep playing until a win arrives.

So, the player wins 2 with a probability of (1 – 1/2ⁿ)

Total losses after n losing bets = 2 + 2² + … + 2ⁿ = 2ⁿ⁺¹ – 2

Expected gain is equal to the probability of not folding times the gain plus the probability of folding times the loss.

Expectation = (1 – 1/2ⁿ) × 2 – (1/2ⁿ) × (2ⁿ⁺¹ – 2)

= 2 – 2/2ⁿ – 2 + 2/2ⁿ = 0.
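This identity can be checked exactly, for any bust point n, using rational arithmetic (a quick sketch):

```python
from fractions import Fraction

for n in range(1, 11):
    p_bust = Fraction(1, 2**n)                       # probability of n straight losses
    ev = (1 - p_bust) * 2 - p_bust * (2**(n + 1) - 2)
    print(n, ev)                                     # exactly 0 for every n
```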

So the expected gain in a fair game for any finite number of bets is zero using the martingale system, but it is positive if the system can be played to infinity. The increment per round need not be 2, but could be any number, x. The net gain to a winning bet is this number, x.

The intuitive explanation for the zero expectation is that the player (take the simplest case of an increment per round of 2) wins a modest gain (2) with a very good probability (1 – 1/2ⁿ) but with a small probability (1/2ⁿ) makes a disastrous loss (2ⁿ⁺¹ – 2).

More generally, for an increment of x, the probability of losing n times in a fair game is still 1/2ⁿ, while the total loss after n losing rounds is x(2ⁿ – 1). So:

Expectation = (1 – 1/2ⁿ) × x – (1/2ⁿ) × x(2ⁿ – 1)

= x – x/2ⁿ – x + x/2ⁿ = 0.

The mathematical paradox remains. In the case where the bet on the nth round is 2ⁿ, the martingale expectation = ½ × 2 + ¼ × 2² + ⅛ × 2³ + … = 1 + 1 + 1 + … = ∞

Yet the actual expectation, when the odds are fair, in all realistic cases = 0.

If the odds are tilted against the bettor, so that for example the bettor wins less if a fair coin lands Heads than he loses if it lands Tails, the expected gain in a finite series of coin tosses is less than zero, but the same principle applies.

Exercise

Show that the expected value of the martingale strategy in a fair game of heads/tails is zero. Show how this can be reconciled with the fact that whenever the player wins, the net overall profit to the player is positive.

References and Links

Martingale (betting system). Wikipedia. https://en.m.wikipedia.org/wiki/Martingale_(betting_system)

Card Counting – in a nutshell.

It is said that on returning from a day at the races, a certain Lord Falmouth was asked by a friend how he had fared.  “I’m quits on the day”, came the triumphant reply.  “You mean by that,” asked the friend, “that you are glad when you are quits?”   When Falmouth replied that indeed he was, his companion suggested that there was a far easier way of breaking even, and without the trouble or annoyance. “By not betting at all!”  The noble lord said that he had never looked at it like that and, according to legend, gave up betting from that very moment.

While this may well serve as a very instructive tale for many, Ed Thorp, writing in 1962, took a rather different view. He had devised a strategy, based on probability theory, for consistently beating the house at Blackjack (or ‘21’). In his book, ‘Beat the Dealer: A Winning Strategy for the Game of Twenty-One’, Thorp presents the system. On the inside cover of the dust jacket he claims that “the player can gain and keep a decided advantage over the house by relying on the strategy”.

The basic rules of blackjack are simple. To win a round, the player has to draw cards to beat the dealer’s total and not exceed a total of 21. Because players have choices to make, most obviously as to whether to take another card or not, there is an optimal strategy for playing the game. The precise strategy depends on the house rules, but generally speaking it pays, for example, to hit (take another card) when the total of your cards is 14 and the dealer’s face-up card is 7 or higher. If the dealer’s face-up card is a 6 or lower, on the other hand, you should stand (decline another card). This is known as ‘basic strategy.’

While basic strategy will reduce the house edge, it is not enough to turn the edge in the player’s favour. That requires exploitation of the additional factor inherent in the tradition that the used cards are put to one side and not shuffled back into the deck. This means that by counting which cards have been removed from the deck, we can re-evaluate the probabilities of particular cards or card sizes being dealt moving forward. For example, a disproportionate number of high cards in the deck is good for the player, not least because in those situations where the rules dictate that the house is obliged to take a card, a plethora of remaining high cards increases the dealer’s probability of going bust (exceeding a total of 21).

Thorp’s genius was in devising a method of reducing this strategy to a few simple rules which could be understood, memorized and made operational by the average player in real time. As the book blurb puts it, “The presentation of the system lends itself readily to the rapid play normally encountered in the casinos.” Essentially, all that is needed is to attach a tag to specific cards (such as +1 or -1) and then add or subtract the tags as the cards are dealt. Depending on the net score in relation to the cards dealt, it is easy to see whether the edge is with the house or the player. This system is called keeping a ‘running count.’
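As an illustration of such a tag system, here is a minimal Python sketch of the popular Hi-Lo count (see the links below); note that this is one common modern variant rather than necessarily the scheme presented in Thorp’s book:

```python
# Hi-Lo tags: low cards (2-6) count +1, middle cards (7-9) count 0,
# and tens, pictures and aces count -1.
HI_LO = {**{r: +1 for r in "23456"},
         **{r: 0 for r in "789"},
         **{r: -1 for r in ["10", "J", "Q", "K", "A"]}}

def running_count(cards_seen):
    """Sum the tags of every card dealt so far. A positive count means the
    remaining deck is rich in tens and aces, which favours the player."""
    return sum(HI_LO[card] for card in cards_seen)

print(running_count(["2", "5", "K", "9", "4", "A"]))  # +1
```

In multi-deck games the running count is usually divided by the estimated number of decks remaining to give a ‘true count’ before any betting decision is made.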

There are variations on this theme, but the core strategy and original insights hold. The problem simply changed to one familiar to many successful horse players, i.e. how to get your money on before being closed down.

References and Links

Card counting. Wikipedia. https://en.wikipedia.org/wiki/Card_counting

https://wizardofodds.com/games/blackjack/card-counting/introduction/

https://wizardofodds.com/games/blackjack/card-counting/high-low/

4-Deck to 8-Deck Blackjack Strategy. https://wizardofodds.com/games/blackjack/strategy/4-decks/

The Ace-Five Count. https://wizardofodds.com/games/blackjack/appendix/17/