The Japanese attack on Pearl Harbor in December 1941 came as a surprise to the US intelligence community. The attack by al Qaeda on the World Trade Center and on the Pentagon in September 2001 came as a similar surprise. But the information was out there. So why the surprise? Simply put, it was because the systems were not in place with which to put the jigsaw together in time. The 9/11 Commission Report stated the problem like this: “The biggest impediment to all-source analysis – to a greater likelihood of connecting the dots – is the human or systemic resistance to sharing information.” James Surowiecki, in his book, ‘The Wisdom of Crowds’, offers the following perspective on this failure: “What was missing in the intelligence community … was any real means of aggregating not just information but also judgements. In other words, there was no mechanism to tap into the collective wisdom of National Security nerds, CIA spooks, and FBI agents. There was decentralization but not aggregation …”
The question is whether the market can help achieve this. Some people within the US Department of Defense already thought so, and had been working on just such an idea for several months when al Qaeda struck. Indeed, in May 2001 the Defense Advanced Research Projects Agency (DARPA) had issued a call for proposals under the heading of ‘Electronic Market-Based Decision Support’ (later ‘Future Markets Applied to Prediction’, or FutureMAP). The remit prescribed for FutureMAP was to create market-based techniques for avoiding surprise and predicting future events. It was not long, however, before the media and key members of the political class began to train their guns on the idea of such a market. After all, it isn’t difficult to portray what was devised as a Policy Analysis Market as no more than a forum for eager traders to profit from death and destruction. The populist arguments won the day and DARPA was forced to cancel the project.
While most of the arguments against the market were specious, there was some genuine intellectual concern as to how effective it would be likely to be. In particular, Joseph Stiglitz, winner of the 2001 Nobel Prize for economics, argued in an article published in the Los Angeles Times on 31 July 2003 (‘Terrorism: There’s No Futures in It’), that the market would be too “thin” (i.e. there would be too little money traded in the market) for it to be a useful tool for predicting events meaningfully. His argument was based on work he had previously published showing that markets can never be perfectly efficient when information is costly to obtain. The cost of obtaining and processing this information is, by implication, likely to act as a significant disincentive particularly in the context of a thin market (and hence low rewards).
I am not so sure. Is it obviously the case that a properly constructed market, populated by suitably motivated (and perhaps screened) players, would be too thin to be useful? Well, the jury’s still out on this, and in the meantime so are the so-called ‘terrorism futures’. For how long, I wonder?
Is it possible to construct a mathematical model of the performance of a golfer, a boxer, a football team, a cricket team, a snooker player, a horse, a dog, or whatever, that would predict well enough to allow us to earn a systematic profit over time?
The first problem is that any sporting event is influenced by random factors, which statisticians call ‘noise.’ In any given situation, this noise can overwhelm the best model to generate an unexpected outcome. That’s the bad news. The good news is that these factors tend to balance out over time, so any properly devised forecasting model which takes into account those factors which are predictable has the potential to perform very well indeed. Such models are known in the trade as ‘fundamental’ handicapping strategies, because they are based on fundamental information about performance.
The question is whether such a system exists which can actually turn a profit, or indeed whether it has ever existed. Ask Bill Benter how he made his millions at the Hong Kong racetrack and you have your answer. Basically, he constructed a computer model designed to estimate current performance potential. This involved the investigation of variables and factors with potential predictive significance and the refining of these individually so as to maximize their predictive accuracy.
In doing so he employed state-of-the-art econometric forecasting techniques, which he was confident enough to summarize in a classic paper entitled ‘Computer based horse race handicapping and wagering systems: a report.’ The basic question he seeks to answer in that paper is whether it is possible to construct a forecasting model which can generate a systematic profit at the races. Ever the practitioner, he provides the answer by constructing just such a model.
His method is to identify each individual factor that could possibly predict the outcome of a race and then to whittle these down to the most reliable and effective. Once he had a model that worked on past data, he tested it ‘out-of-sample’, i.e. on a large sample of further races.
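Benter’s actual model is proprietary, but the workflow he describes – fit factor weights on past races, then validate out-of-sample – can be sketched in miniature. The sketch below is purely illustrative: it invents synthetic races with a single ‘rating’ factor and fits one coefficient by grid search, standing in for the many variables and the full econometric machinery Benter actually used.

```python
import math
import random

random.seed(1)

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Synthetic races: each horse has one observable factor (think of a speed
# rating); the winner is the horse with the highest rating plus noise.
def make_race(n_horses=8):
    ratings = [random.gauss(0, 1) for _ in range(n_horses)]
    ability = [r + random.gauss(0, 1.5) for r in ratings]
    winner = ability.index(max(ability))
    return ratings, winner

races = [make_race() for _ in range(2000)]
train, holdout = races[:1000], races[1000:]

# Log-likelihood of the observed winners under a logit (softmax) model.
def log_lik(beta, data):
    return sum(math.log(softmax([beta * r for r in ratings])[w])
               for ratings, w in data)

# 'Fit' the single coefficient by grid search on the training races.
best_beta = max((b / 10 for b in range(31)), key=lambda b: log_lik(b, train))

# Out-of-sample validation: the fitted model should beat the
# no-information model (beta = 0) on races it has never seen.
print(f"fitted beta: {best_beta}")
print(f"holdout log-likelihood, fitted: {log_lik(best_beta, holdout):.1f}")
print(f"holdout log-likelihood, naive:  {log_lik(0.0, holdout):.1f}")
```

The out-of-sample step is the crucial one: a model that only fits the past may be capturing noise rather than anything with predictive power.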
Sometimes he found that a variable was useful in predicting race outcomes but he really couldn’t understand why that should be the case. In such circumstances, he decided the best policy was not to care. Faced with a choice between a profitable model that he couldn’t fully explain, and an unprofitable model that he understood perfectly, he chose the former. Ideally, of course, you would work with a model that both works and you can fully explain, but Benter’s bottom line is that if it works, it doesn’t need fixing.
So what is that bottom line? Well, Bill Benter is now a very rich man even by the standards of most very rich men, and he made it on the basis of a sophisticated forecasting model which overcame track takes of about 19 per cent. In a masterly piece of understatement he concluded in his paper that “… at least at some times at some tracks, a statistically derived fundamental handicapping model can achieve a significant positive expectation.” Significant indeed! Why not try it some time?
Prime Minister Harold Wilson used to say that there were few jobs more stressful and more precarious than that of a politician but that the post of football manager was certainly one of them. Well, opinions are divided about the benefits of sacking those who run the country, and it is difficult to devise a measure which all would agree on, but there is a great deal of published analysis available which allows us to judge the effect of a change of management in other fields.
A seminal study in this regard, published by Professors Lieberson and O’Connor, found little evidence of any link between changes of Chief Executive Officer (CEO) and subsequent movements in company performance indicators such as sales and profits.
Other studies have found, in contrast, that changes of top managers do tend to be followed by sharp improvements in performance, particularly in certain sectors, such as computer equipment manufacture. Some areas of activity do seem impervious to changes at the top, however, as witnessed by a study of the effect of changes of Methodist ministers on church attendance, membership and donations. There was no discernible effect.
There is also a well-established literature which considers the impact of management change on team performance in professional sport, and this can be sub-divided into three distinct theories. According to the “common sense” theory, when a team is under-performing the manager is replaced, and if a better manager takes over, performance should improve. In the “vicious circle” theory, poor performance tends to trigger managerial change, but the disruption caused tends to make things worse. Then there is the “ritual scapegoating” theory, in which the appointment of a new manager makes no difference, on average, to team performance.
Rick Audas, Stephen Dobson and John Goddard disentangled the competing theories, for the case of English football, in an article published in the Journal of Economics and Business. A detailed examination of the results of their study reveals that on average it takes up to 16 matches for a team subject to a within-season change of manager to adapt to the usual changes of tactics and playing style which ensue, and even then the team’s win rate tends to revert only to where it was prior to the change. The transition period is, on average, simply a sink into which some of the points that would have been earned are emptied.
What these results appear to tell us, then, is that the rate at which managers are replaced in English football is not optimal. The turnover is simply too fast.
So in light of these findings, is it possible to offer a rational explanation for existing attitudes to management change? Well, there may just be such an explanation, and it lies in a thing called “variance”, which is the dispersion of results about the average.
The idea here is that a change of manager may not improve performance but it does shake things up a bit. This can only be good news, of course, for a team which is likely to go down anyway. The reason is that while a change of manager may on average mean even fewer points, the change does at least improve the small chance of pulling clear of the relegation zone. Seen like this, it can be likened to an all-or-nothing throw of the dice. That’s the rational side of the argument. The problem comes when those in charge of the team’s future grow just a little too fond of the dice.
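That dice-throw logic is easy to make concrete with a simulation. The numbers below are invented for illustration: a team needing 24 points from its last 16 matches, where keeping the manager yields steady, draw-heavy results and a change yields slightly fewer points on average but streakier ones.

```python
import random

random.seed(7)

# Points from one run of remaining fixtures: 3 for a win, 1 for a draw.
def season_points(n_games, p_win, p_draw):
    pts = 0
    for _ in range(n_games):
        r = random.random()
        if r < p_win:
            pts += 3
        elif r < p_win + p_draw:
            pts += 1
    return pts

def survival_prob(p_win, p_draw, needed=24, n_games=16, trials=100_000):
    hits = sum(season_points(n_games, p_win, p_draw) >= needed
               for _ in range(trials))
    return hits / trials

# Sticking with the manager: draw-heavy, 1.00 points per game on average.
keep = survival_prob(p_win=0.20, p_draw=0.40)
# Changing the manager: fewer points on average (0.96 per game) but
# streakier results - more wins, more losses, fewer draws.
change = survival_prob(p_win=0.28, p_draw=0.12)

print(f"P(survive) keep manager:   {keep:.3f}")
print(f"P(survive) change manager: {change:.3f}")
```

Under these illustrative figures the change of manager costs points on average yet raises the chance of reaching the safety target, because the extra variance puts more probability mass in the right tail.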
Links:
http://www.jstor.org/pss/2392715
http://onlinelibrary.wiley.com/doi/10.1111/1468-0270.00039/abstract
http://thevideoanalyst.com/does-sacking-the-manager-improve-results
Traditional finance is more concerned with checking that the price of two 8-ounce bottles of ketchup is close to the price of one 16-ounce bottle than with understanding the price of the 16-ounce bottle. Such is the view of Lawrence Henry (‘Larry’) Summers, currently Director of the White House’s National Economic Council, writing in the ‘Journal of Finance’ in 1985. “They have shown”, he went on, “that two quart bottles of ketchup invariably sell for twice as much as one quart bottle of ketchup except for deviations traceable to transactions costs … Indeed, most ketchup economists regard the efficiency of the ketchup market as the best established fact in empirical economics.” If so, this represents an example of the LOOP (‘Law of One Price’) principle in economics, i.e. identical goods should have identical prices.
But are they right? To find out, I checked the prices on offer at my local branch of a well-known supermarket chain and found the following pricing structure. A 460g bottle of a leading brand of tomato ketchup was priced at £1.63, while the bigger (by 73.9%) 800g bottle sold at £2.19 (an extra 34.4%). According to the LOOP principle, one might have thought that the 800g bottle would have sold for 73.9% more than the 460g bottle, i.e. for £2.83. So is this a mispricing of 64p? Does this indicate that the market is inefficient? Well, the answer is pretty simple here. There is nothing wrong with the market, since there’s no clear way to exploit the mispricing, short of tipping the contents of the bigger bottle into the smaller bottles and selling them yourself. Summers would call this a “deviation due to transactions costs.” More fundamentally, the smaller bottle offers advantages that the larger bottle doesn’t have. Most obviously, it’s easier to store. Perhaps it also looks nicer on the table.
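For what it’s worth, the arithmetic above is easy to check:

```python
# The supermarket figures from the text: does the bigger bottle obey the
# Law of One Price per gram?
small_g, small_p = 460, 1.63   # grams, pounds
big_g, big_p = 800, 2.19

size_premium = big_g / small_g - 1       # how much bigger the big bottle is
price_premium = big_p / small_p - 1      # how much more it costs
loop_price = small_p * big_g / small_g   # price implied by the Law of One Price

print(f"bigger by {size_premium:.1%}, dearer by {price_premium:.1%}")
print(f"LOOP-implied price: £{loop_price:.2f} (actual: £{big_p:.2f})")
```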
Trading financial assets, on the other hand, is a different issue altogether. Transactions costs are relatively small and assets trading in different markets are often identical, so in these cases one would expect the LOOP principle to apply more clearly. What’s the evidence? Well, one well-known apparent violation is the case of Royal Dutch Shell. Royal Dutch and Shell are separate legal entities but merged their interests in 1907 on a 60/40 basis. Royal Dutch shares should therefore automatically have been priced at 50% more than Shell shares. However, they diverged from this parity by up to 15% until the final merger of the two entities in 2005.
When the company 3Com spun off shares of its handheld computer subsidiary Palm into a separate stock offering, 3Com kept most of Palm’s shares for itself (Lamont and Thaler, 2003). So a trader could invest in Palm simply by buying 3Com stock. 3Com stockholders were guaranteed to receive three shares in Palm for every two shares in 3Com that they held. This seemed to imply that Palm shares could trade at an absolute maximum of 2/3 of the value of 3Com shares. Rather than being worth less than 3Com shares, however, Palm shares instead traded at a higher price for a period of several months. This should have allowed an investor to make a guaranteed profit by buying 3Com shares and shorting Palm – a virtual no-risk arbitrage opportunity, the equivalent of exchanging, say, $1,000 for £600 in the UK and almost simultaneously exchanging the £600 for $1,500 in the US.
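The mechanics of that mispricing can be sketched numerically. The prices below are illustrative rather than the actual quotes: the point is that whenever Palm trades above two-thirds of the 3Com price, the implied value of the rest of 3Com (the ‘stub’) is negative.

```python
# Each 3Com share carried a claim on 1.5 Palm shares, so 3Com should
# never have traded below 1.5 times the Palm price.
PALM_PER_3COM = 1.5

def stub_value(price_3com, price_palm):
    """Implied per-share value of everything 3Com owned besides Palm."""
    return price_3com - PALM_PER_3COM * price_palm

# Illustrative prices (not the actual quotes): Palm above 2/3 of 3Com.
print(stub_value(82.0, 95.0))   # negative: the rest of 3Com priced below zero
```

A negative stub is absurd on its face – the market was valuing 3Com’s remaining business at less than nothing – which is why the buy-3Com, short-Palm trade looked like near-riskless arbitrage.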
How about prediction markets (speculative markets used for making predictions)? Is it possible to buy low and sell high across different prediction markets? It seems so. For an example, we need only point to the 2008 and 2012 US Presidential elections, when for several days it was possible to back John McCain and Mitt Romney respectively on the Betfair betting exchange at a healthy shade of odds against, and simultaneously to do likewise with Barack Obama on the Intrade exchange. A guaranteed profit, even net of commission.
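The mechanics of such a cross-market arbitrage are worth sketching. With illustrative decimal odds (both ‘odds against’, i.e. above evens) on the two candidates at the two exchanges, the total stake can be split so the payout is identical whoever wins:

```python
# Split a total stake across two mutually exclusive outcomes quoted on
# different exchanges so the payout is the same whichever side wins.
def arb_stakes(odds_a, odds_b, total_stake=100.0):
    stake_a = total_stake * odds_b / (odds_a + odds_b)
    stake_b = total_stake - stake_a
    payout = stake_a * odds_a       # equals stake_b * odds_b by construction
    return stake_a, stake_b, payout - total_stake

# Illustrative decimal odds: candidate A at 2.2 on one exchange,
# candidate B at 2.1 on the other. A profit is locked in whenever
# 1/odds_a + 1/odds_b < 1.
stake_a, stake_b, profit = arb_stakes(2.2, 2.1)
print(f"stake A: {stake_a:.2f}, stake B: {stake_b:.2f}, "
      f"guaranteed profit: {profit:.2f}")
```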
Professor Eugene Fama once defined an efficient market as one in which “deviations from the extreme version of the efficiency hypothesis are within information and transactions costs.” On this basis, there would appear to be some evidence that markets (in particular respect of the ‘Law of One Price’) are not always efficient.
Reading and Links
Lamont, O.A. and Thaler, R.H. (2003). Anomalies: The Law of One Price in Financial Markets. Journal of Economic Perspectives, 17, 4, 191-202.
Summers, L.H. (1985). On Economics and Finance. Journal of Finance, 40, 3, 633-635. http://m.blog.hu/el/eltecon/file/summers_ketchup%5B1%5D.pdf
Law of One Price. Wikipedia. https://en.wikipedia.org/wiki/Law_of_one_price
It is said that on returning from a day at the races, a certain Lord Falmouth was asked by a friend how he had fared. “I’m quits on the day”, came the triumphant reply. “You mean by that,” asked the friend, “that you are glad when you are quits?” When the said Lord replied that indeed he was, his companion suggested that there was a far easier way of breaking even, and without the trouble or annoyance. “By not betting at all!” The noble lord said that he had never looked at it like that and, according to legend, gave up betting from that very moment.
While this may well serve as a very instructive tale for many, a certain Edward O. Thorp, writing in 1962, took a rather different view. He had devised a strategy, based on probability theory, for consistently beating the house at Blackjack (or ’21’). In his book, ‘Beat the Dealer: A Winning Strategy for the Game of Twenty-One’, Thorp presents the system. On the inside cover of the dust jacket he claims that “the player can gain and keep a decided advantage over the house by relying on the strategy”.
The basic rules of blackjack are simple. To win a round, the player has to finish with a higher total than the dealer without exceeding 21.
Because players have choices to make, most obviously as to whether to take another card or not, there is an optimal strategy for playing the game. The precise strategy depends on the house rules, but generally speaking it pays, for example, to hit (take another card) when the total of your cards is 14 and the dealer’s face-up card is 7 or higher. If the dealer’s face-up card is a 6 or lower, on the other hand, you should stand (decline another card). This is known as ‘basic strategy.’
While basic strategy will reduce the house edge, it is generally not enough to turn the edge in the player’s favour. That requires exploitation of the additional factor inherent in the tradition that the used cards are put to one side and not shuffled back into the deck.
This means that by counting which cards have been removed from the deck, we can re-evaluate the probabilities of particular cards or card values being dealt moving forward. For example, a disproportionate number of high cards in the deck is good for the player, not least because in those situations where the rules dictate that the house is obliged to take a card, a plethora of remaining high cards increases the dealer’s probability of going bust (exceeding a total of 21).
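To make the counting idea concrete, here is a sketch using the Hi-Lo count – a later simplification in the Thorp tradition, not Thorp’s original ten-count system. Low cards leaving the deck are good for the player, so they count +1; tens and aces leaving are bad, so they count −1.

```python
# Hi-Lo card values: 11 stands for the ace; jacks, queens and kings
# are treated as 10.
HI_LO = {2: 1, 3: 1, 4: 1, 5: 1, 6: 1,
         7: 0, 8: 0, 9: 0,
         10: -1, 11: -1}

def running_count(cards_seen):
    """Positive count: the remaining deck is rich in high cards,
    which favours the player."""
    return sum(HI_LO[c] for c in cards_seen)

# After a run of mostly low cards, the count turns positive.
seen = [2, 5, 6, 3, 4, 10, 6, 2]
print(running_count(seen))  # prints 6
```

In practice the running count is divided by the number of decks remaining (the ‘true count’) before it is used to size bets, but the principle is exactly the one described above.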
Thorp’s genius was in devising a method of reducing this strategy to a few simple rules which could be understood, memorized and made operational by the average player in real time. As the book blurb puts it, “The presentation of the system lends itself readily to the rapid play normally encountered in the casinos.”
Since the publication of the book, the strategy has been amended and improved, but Ed Thorp’s original insights stand. The problem simply changed to one historically familiar to many successful horse players – how to get your money on before you are closed down or kicked out.
When you have the edge in a competition, the rational strategy would seem to be to minimize the role of luck in determining the outcome. So if Roger Federer agrees to play one point against me at tennis, on his own serve, with a forfeit of £100,000 if he loses the point, I doubt that his optimal strategy would be to serve flat out. That would increase the risk of a double fault, which is by some margin my best hope of winning the point. In other words, variance is my friend but Mr. Federer’s enemy. The same would seem to apply in a horse race, where the jockey’s optimal strategy when aboard the best horse in the field is unlikely to be that of boxing himself in on the rail behind a wall of horses.

When I talk about this, I like to use the example of Mick Kinane and his 2009 Prix de l’Arc de Triomphe triumph atop one of the best racehorses ever, Sea The Stars. Kinane put it like this: “I didn’t have any worries because I knew I was on the fastest horse in the race. His acceleration was fantastic.” If so, he must have had nerves of steel, because there certainly wasn’t anyone else watching who shared that confidence less than three furlongs out. To my mind, the horse won quite simply because he was so far superior to his rivals that he was able to overcome those obstacles which beset ordinary flesh and blood. And that includes variance! Did Kinane know that? Perhaps!

Which brings us to another puzzle arising from the result of the race. Why were the bookmakers afterwards bemoaning a lost fortune on the race? According to a spokesman for Coral, for example, the success of Sea the Stars in the Arc cost bookmakers “a sizeable seven-figure sum.” It is a lament we hear so often. The bookmakers regularly decry a series of winning hot favourites. But why?
If favourites are such losers in the book for the layers, why do the layers not simply shorten up the price of those particular horses or football teams or whatever, and lengthen the price of their rivals? In the case of the Arc, this would have meant cutting the odds offered about Sea the Stars by a couple of notches, or even more, and lengthening the others accordingly. Is it because the betting public would bet the same amount on the favourite regardless of the price? Unlikely! But even if true, the liability would in any case be less. So who is behaving irrationally? Is it the bookmakers or those who bet with them? And does the same apply on the betting exchanges?

Anyway, do bookmakers actually lose when hot favourites win? Sometimes, but not always. The bottom line, though, is that bookmakers will on average do better when longshots win. But why? Of course, there will be fewer winners, but why is this not fully offset by the larger payouts? There are answers to this puzzle, based in the main around attitudes to risk and information misperceptions, but still no single definitive answer. That may well be because there actually is no definitive answer, but a range of them. The puzzle remains, therefore, but we are closer than ever to solving it.
The weight of opinion among the spokesmen of the major bookmakers, as reported on the morning of Epsom Derby Day, was that the John Oxx-trained ‘Sea the Stars’ would go off an even more solid favourite than he was in the early trading. And indeed, all the 7 to 2 soon disappeared, to be replaced on the bookmakers’ boards by 11 to 4, and by the time the market opened on course, that price (bar the odd 3 to 1 and 5 to 2 in places) was pretty much set in stone. Meanwhile, Criterium de Saint Cloud winner ‘Fame and Glory’, available at a general 4 to 1 in the morning, opened on course at 7 to 2, touched 4 to 1 in places, and after frantic late trading, went off as 9 to 4 favourite.

What happened? Well, an enormous late plunge, including one confirmed wager of £40,000 to win £110,000, might have had something to do with it! All those 7 to 2 and 4 to 1 offers were soon wiped off the boards, and market-watchers who like to follow those bettors who unload the biggest satchels might perhaps have been forgiven for thinking the horse was home and hosed before it even exited the stalls. In the event, the Montjeu colt performed creditably enough, and might well have benefited from a stiffer pace, but was never going to prevent Mick Kinane from following up his 2001 Derby success on Sea the Stars’ half-brother Galileo.

So what can we learn from this? Well, the consistent money pointed firmly in the direction of Sea the Stars. The money for ‘Fame and Glory’ was late and big, but from what we can ascertain derived from a few very large individual punts. Still, money is money, and prices in a market respond to the weight of it, wherever it comes from. But live, flesh-and-blood price-setters need not respond solely to the sheer relative volume of money about different horses, but also to what information the money is imparting.
Would you as a price-setter respond in the same way to ten bets of £4,000, placed gradually throughout the day, as you would to one £40,000 punt three minutes before the off? And should you? In the event, we know that the late and very large plunge came for the unbeaten colt that was already known to travel and to stay. And we were confirmed in our knowledge that he travels and stays. The only part of the triumvirate of qualities that wasn’t confirmed was his unbeaten status. If the market was like a ballot box in a first-past-the-post election, the winner of the 2009 Investec Derby and the winner in the market would, I judge, have been one and the same. But betting markets don’t work quite like ballot boxes. Most obviously, you can buy more than one vote. And so the market got it wrong and the ballot box (most probably) got it right. Would that the same were always true in the world of politics!
When all the votes cast in the 2008 US Presidential election were counted and tallied, it emerged that Barack Obama secured 52.9% of the popular vote, John McCain took 45.7% of the vote, and the remaining 1.4% was split between assorted third-party candidates. So how did the final polls published by the respective opinion polling organisations perform? RealClearPolitics published most of them, and displayed them on its website on election day. These ranged in sample size from just 714 likely voters (CBS News) to 3,000 (Rasmussen Reports), and covered survey dates ranging from as early as 29th October to 1st November (Pew Research) at one end to 3rd November only at the other (Marist).

The largest samples gave the following:

– Rasmussen Reports (3,000 likely voters): Obama 52%; McCain 46%
– Pew Research (2,587 likely voters): Obama 52%; McCain 46%
– Gallup (2,472 likely voters): Obama 55%; McCain 44%
– ABC News/Washington Post (2,470 likely voters): Obama 53%; McCain 44%

This gives a big-sample average as follows: Obama 53%; McCain 45%. This is an 8% margin in favour of Obama, compared to the actual margin of 7.2%.

The latest surveys gave the following:

– Marist (3rd November): Obama 52%; McCain 43%
– Battleground (Lake projection – 2/3 November): Obama 52%; McCain 47%
– Battleground (Tarrance projection – 2/3 November): Obama 50%; McCain 48%

The Battleground figures are actually one poll divided into two according to different methodologies, so should be counted only once (an average of the methodologies gives: Obama 51%; McCain 47.5%). This gives a late-survey average as follows: Obama 51.5%; McCain 45.3%, a 6.2% margin in favour of Obama compared to the actual margin of 7.2%.

So we have something of an over-estimate in one case and an under-estimate in the other. There are a total of 14 polls (counting the alternative Battleground methodologies as one poll), ranging from highs of 55% (Gallup) and 54% (Reuters/C-SPAN/Zogby) for Obama to lows of 50% (Fox News) and 51% (NBC News/Wall Street Journal).
For McCain the polls ranged from a high of 47.5% (Battleground average) to a low of 42% (CBS News). So what happens if we simply take all the final polls published on RealClearPolitics and divide by the number of polls? This is without regard to the dates of the surveys or the sample sizes or the methodology of the poll, simply taking a bare average of everything on offer. Well, we obtain the following: Obama 52.1% (actual: 52.9%); McCain 44.5% (actual: 45.7%). This represents an advantage of 7.6% for Obama (taking an unweighted, unadjusted average of all these polls), 0.4% off the actual margin in favour of Obama of 7.2%. Now add in the final Daily Kos/Research 2000 poll, which RealClearPolitics for their own reasons decided to exclude from any of their daily summaries, and what do we obtain? An Obama spread of 7.4%, within 0.2% of the final tally! And so there we have it! The election day polls performed almost perfectly – on average!
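The two sub-averages quoted above are easy to verify from the listed polls (the 45.3% quoted for McCain is 45.25% before rounding):

```python
# Figures are (Obama %, McCain %) from the polls quoted in the text.
big_sample = [(52, 46),    # Rasmussen Reports
              (52, 46),    # Pew Research
              (55, 44),    # Gallup
              (53, 44)]    # ABC News/Washington Post
late = [(52, 43),          # Marist
        (51, 47.5)]        # Battleground (average of its two methodologies)

def average(polls):
    n = len(polls)
    return (sum(o for o, _ in polls) / n, sum(m for _, m in polls) / n)

o1, m1 = average(big_sample)
o2, m2 = average(late)
print(f"big-sample:  Obama {o1:.0f}%, McCain {m1:.0f}%, margin {o1 - m1:.0f}%")
print(f"late-survey: Obama {o2:.1f}%, McCain {m2:.2f}%, margin {o2 - m2:.2f}%")
```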
Undertaking an online search for dictionary definitions of the word ‘Wisdom’, I quickly came up with this: “The ability to use your experience and knowledge in order to make sensible decisions or judgments”. The second definition I hit upon gives the following: “The ability to discern or judge what is true, right, or lasting; insight”. There are plenty of dictionaries to skim, but these two definitions pretty much sum up what people usually mean when they use the word ‘Wisdom’.

The two definitions I’ve highlighted are not identical, of course. It might, for example, be sensible from the point of view of a jury wanting to go home to make a hasty and ill-considered decision about the defendant’s guilt, but does this make their action wise? The jury may indeed be acting sensibly in their immediate personal interests, but can we justifiably describe such behaviour as wise in a greater context?

So when we say that a crowd is wise, what exactly are we saying and how does this fit in with the idea of a prediction market? A classic example is that of ‘Galton’s ox’, a seminal study in which Sir Francis Galton noted down the estimates of 800 or so entrants in a competition to guess the weight of an ox. He found that the average (mean) estimate of the crowd was almost exactly correct. Similar accuracy was reproduced in classic experiments in which students were asked to guess the number of jelly beans in a jar or the weight of a range of objects.

Prediction markets are essentially betting markets created for the purpose of making predictions. The idea behind the use of these markets stems from the view that information concerning the likelihood of future events is dispersed among many people, i.e. the ‘crowd’, and that these markets allow for the aggregation of this information.
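Galton’s result is easy to reproduce in miniature. The simulation below uses invented guesses (not Galton’s data) scattered around the commonly quoted true weight of 1,198 pounds, to show the mean of many noisy estimates landing far closer to the truth than a typical individual guess does.

```python
import random

random.seed(42)

# 800 noisy individual guesses around the true weight.
true_weight = 1198  # pounds
guesses = [true_weight + random.gauss(0, 80) for _ in range(800)]

crowd_mean = sum(guesses) / len(guesses)
typical_individual_error = (sum(abs(g - true_weight) for g in guesses)
                            / len(guesses))

print(f"crowd mean: {crowd_mean:.0f} (truth: {true_weight})")
print(f"typical individual error: {typical_individual_error:.0f} pounds")
```

The averaging cancels the independent errors, which is the statistical heart of the ‘wisdom of crowds’ result; it says nothing, of course, about wisdom in the juror’s sense.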
Prediction markets have been used to forecast what sometimes seems like almost everything, from the outcome of elections to the timing of influenza outbreaks to sales of the latest printer to box office receipts, and have often proved astonishingly accurate. But what does this really tell us about the ‘crowd’ and the market which reflects the views of the crowd? James Surowiecki extols the virtues of prediction markets in his book, ‘The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations’. But does the success of prediction markets really tell us anything about the wisdom of the crowd? After all, there is a big difference between estimating the number of jelly beans in a jar and judging whether to put the defendant to death or go to war.

At a seminar presented on December 7, 2009 at the University of Hamburg, Julie Vaughan Williams coined the phrase ‘The Accuracy of the Crowd’ as a better representation of what the empirical evidence tells us so far. As such, maybe we should see the crowd as clever rather than wise. To deduce something about its wisdom from its cleverness is perhaps a step too far. So let’s coin a new phrase. Let’s call it ‘The Cleverness of the Crowd’.

Reference: Vaughan Williams, Leighton and Vaughan Williams, Julie, The Cleverness of Crowds, The Journal of Prediction Markets, Vol. 3, 3, December, 2009, 45-47.

Link: http://www.ingentaconnect.com/content/ubpl/jpm/2009/00000003/00000003/art00004
“Would you like to open the box or take the money?” This was the standard line of the original ‘quiz inquisitor’, Michael Miles, on his show ‘Take Your Pick’, a popular favourite in the early days of commercial television. The idea was to present contestants with a choice between opening a box, which might contain anything from a car to a rotten tomato, and receiving a wad of notes in the hand. Many, but not most, chose the money.

Now let’s change the format of the game a little and offer contestants a choice between three boxes, one of which contains a cheque for a million pounds and the other two of which are empty. Let’s call the boxes “Gold, Silver and Lead”, the choice of caskets offered to Portia’s suitors in Shakespeare’s ‘The Merchant of Venice’. Let’s say that, like Bassanio, you choose the box made of lead. I am the quiz inquisitor on this occasion and I now open the box made of gold. It is empty. At this point I offer you the opportunity to stick with your original choice or to switch. What should you do, and does it matter?

Basic intuition may tell you that there are two remaining boxes, of silver and lead, and so it should be an even chance of winning whichever of these boxes you select. If this line of reasoning is correct, it makes no difference to your probability of winning the prize whether you switch to the silver box or stay with your original choice of lead. But would this intuition be correct?

To answer this we need to ask one simple question. Has any new information been introduced by my decision to open the gold box? This depends on whether I know which box contains the cheque when I open the box. To take an example, assume I know that the box with the prize is the silver box. When you choose the lead box, I now have no choice but to open the gold box. In effect, then, I am actually pointing out to you which box contains the prize. This is the case whenever the box you have chosen is empty.
There is a 2 in 3 chance of this as there are two empty boxes and only one containing the prize. There is a 1 in 3 chance that the box you have chosen is in fact the box containing the cheque. In this case, it doesn’t matter which box I as the quizmaster choose to open as they are both empty. To summarize this, there is a 2 in 3 chance that I am pointing out to you the winning box. It is the box which I chose not to open, in this example the silver box. There is a 1 in 3 chance that you chose the right box in the first place. So what’s your optimal strategy? This becomes clear once you realize that you only have a 1 in 3 chance of winning if you stick with your original box but a 2 in 3 chance if you switch to the box which I didn’t open. The key to the riddle is the new information I introduced by opening the box which I knew to be empty. By acting on this new information, you can improve your chance of correctly predicting which box will open to reveal the cheque from 1 in 3 to 2 in 3 – by switching boxes when given the chance.
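Readers who distrust the argument can let a computer play the game many times over. The simulation below follows the rules described above: the host knows where the cheque is and always opens an empty box the contestant didn’t choose.

```python
import random

random.seed(0)

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a box that is neither the prize nor your choice.
        opened = next(b for b in range(3) if b != choice and b != prize)
        if switch:
            # Move to the one box that is neither your pick nor the opened one.
            choice = next(b for b in range(3) if b != choice and b != opened)
        wins += (choice == prize)
    return wins / trials

print(f"stick:  {play(switch=False):.3f}")   # close to 1/3
print(f"switch: {play(switch=True):.3f}")    # close to 2/3
```

The sticking strategy wins about a third of the time and the switching strategy about two-thirds, exactly as the counting argument above predicts.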
