
The Magic Money Paradox

Further and deeper exploration of paradoxes and challenges of intuition and logic can be found in my recently published book, Probability, Choice and Reason.

The Two Envelopes Problem, also known as the Exchange Paradox, is quite simple to state. You are handed two identical-looking envelopes, one of which, you are informed, contains twice as much money as the other. You are asked to select one of the envelopes. Before opening it, you are given the opportunity, if you wish, to switch it for the other envelope. Once you have decided whether to keep the original envelope or switch to the other envelope, you are allowed to open the envelope and keep the money inside. Should you switch?

Switching does seem like a no-brainer. Note that one of the envelopes (you don’t know which) contains twice as much as the other. So, if one of the envelopes, for example, contains £100, the other envelope will contain either £200 or £50. By switching, it seems, you stand to gain £100 or lose £50, with equal likelihood. So the expected gain from the switch is 1/2 (£100) + 1/2 (-£50) = £50-£25 = £25.

Looked at another way, the expected value of the money in the other envelope = 1/2 (£200) + 1/2 (£50) = £125, compared to £100 from sticking with the original envelope.

More generally, you might reason, if X is the amount of money in the selected envelope, the expected value of the money in the other envelope = 1/2 (2X)+ 1/2 (X/2) = 5/4 X. Since this is greater than X, it seems like a good idea to switch.

Is this right? Should you always switch?

Solution (Spoiler Alert)

If the above logic is correct, denote the amount of money in the envelope you now hold, after switching, as Y.

So by switching back, the expected value of the money in the original envelope = 1/2 (2Y) + 1/2 (Y/2) = 5/4 Y, which is greater than Y, following the same reasoning as before. So you should switch back.

But following the same logic, you should switch back again, and so on, indefinitely.

This would be a perpetual money-making machine. Something is surely wrong here.

One way to consider the question is to note that the total amount in both envelopes is a constant, A = 3X, with X in one envelope and 2X in the other.

If you select the envelope containing X first, you gain 2X-X = X by switching envelopes.

If you select the envelope containing 2X first, you lose 2X-X = X by switching envelopes. So your expected gain from switching = 1/2 (X) + 1/2 (-X) = 1/2 (X-X) = 0.

Looked at another way, the expected value for the originally selected envelope = 1/2 (2X) + 1/2 X = 3/2 X. The expected value for the envelope you switch to = 1/2 (2X) + 1/2 X = 3/2 X. These amounts are identical, so there is no expected gain (or loss) from switching.

So which is right: this reasoning or the original reasoning? At first sight there does not seem to be a flaw in either. In fact, there is a flaw in the earlier reasoning, which indicated that switching was the better option. So what is the flaw?

The flaw is in the way that the switching argument is framed, and it is contained in the possible amounts that could be found in the two envelopes. As framed in the original argument for switching, the amount could be £100, £200 or £50. More generally, there could be £X, £2X or £1/2 X in the envelopes. But we know that there are only two envelopes, so there can only be two amounts in these envelopes, not three.

You can frame this as £X and £2X or as £1/2 X and £X, but not legitimately as £X, £2X and £1/2 X. By framing it as two amounts of money, not three, in the two envelopes, you derive the answer that there is no expected gain (or loss) from switching.

If you frame it as £X and £2X, there is a 0.5 chance you will get the envelope with £X, so by switching there is a 0.5 chance you will get the envelope with £2X, i.e. a gain of £X. Similarly, there is a 0.5 chance you selected the envelope with £2X, in which case switching will lose you £X. So the expected gain from switching is 0.5 (£X) + 0.5 (-£X) = £0.

If you frame it as £X and £1/2 X, there is a 0.5 chance you will get the envelope with £X, so by switching there is a 0.5 chance you will get the envelope with £1/2 X, i.e. a loss of £1/2 X. Similarly, there is a 0.5 chance you selected the envelope with £1/2 X, in which case switching will gain you £1/2 X. So the expected gain from switching is 0.5 (-£1/2 X) + 0.5 (£1/2 X) = £0.

There is demonstrably no expected gain (or loss) from switching envelopes.

In order to resolve the paradox, you must label the envelopes before you make your choice, not after. So envelope 1 is labelled, say, A, and envelope 2 is labelled, say, B. A corresponds in advance to, say, £100 and B corresponds in advance to, say, £200, or to £50, but not both. You don’t know which corresponds to which. If you choose one of these envelopes, the envelope marked in advance with the other letter will contain an equal amount more or less than the one you have selected. So there is no advantage (or disadvantage) in switching in terms of expected value. In summary, the clue to resolving the paradox lies in the fact that there are only two envelopes and these contain two amounts of money, not three.
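To see this numerically, here is a minimal Monte Carlo sketch in Python, with illustrative amounts of £100 and £200: whichever envelope you happen to pick, keeping and switching have the same average value.

```python
import random

def simulate(trials=100_000, amounts=(100, 200)):
    """Compare 'always keep' with 'always switch' over many random picks."""
    keep_total = switch_total = 0
    for _ in range(trials):
        envelopes = list(amounts)
        random.shuffle(envelopes)       # you pick envelopes[0] at random
        keep_total += envelopes[0]      # value if you keep your envelope
        switch_total += envelopes[1]    # value if you switch
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate()
print(f"Average if you keep:   £{keep_avg:.2f}")    # both come out around £150
print(f"Average if you switch: £{switch_avg:.2f}")
```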

The Three Prisoners Problem

First posed in Scientific American in 1959, the Three Prisoners Problem remains a classic of conditional probability. The problem, or a version of it, is simple to state. There are three prisoners on death row, Adam, Bob and Charlie. They are told that each of them has had their names entered into a hat and the lucky name to be randomly chosen will be pardoned as an act of clemency to celebrate the King’s birthday. The warden knows who has been pardoned, but none of the prisoners do.

Adam asks the warden to name one of the other prisoners who will definitely NOT be pardoned, agreeing that his own fate should not be revealed either way. If Bob is to be spared, the warden should name Charlie as one of the men to be executed. If Charlie is to be spared, he should name Bob. If it is he, Adam, who is to be pardoned, the warden should just flip a coin and name either Bob or Charlie as one of the men to be executed.

The warden agrees and names Charlie as one of the men going to the gallows.

Given this information, what is the probability that Adam is going to be pardoned, and what is the chance that Bob will instead be pardoned?

Adam reasons that his chance of being spared before the conversation with the warden was 1/3, as there are three prisoners, and only one of these will be pardoned by random lot. Now, though, he reasons that either he or Bob is to walk free, as he knows that Charlie is not the lucky one. So now Adam reasons that his chance of being pardoned has risen to 1/2. But is he right?

 

Solution (Spoiler alert).

Before talking to the warden, Adam correctly concludes that his chance of evading the gallows is 1/3. It is either he, Bob or Charlie who will be released, and each has an equal chance, so each has a 1/3 chance of being pardoned.

When Adam asks the warden to name one of the OTHER men who will be executed, he is asking the warden not to name him either way, whether he is to be pardoned or not. The warden (as we are told in the question) selects which of the other men to name by flipping a coin. Now, Adam gains no new information about his fate. The information he does gain is about the fate of Bob and Charlie. By naming Charlie as the condemned man, the warden is ruling out the chance that Charlie is to be pardoned.

So Adam now knows the chance that Charlie will be spared has decreased from a 1/3 chance before the warden revealed this information to a zero chance after he reveals it.

But his own chance of being spared remains unchanged, because the warden was not able to reveal any new information relevant to his own fate. New information is a requirement for changing the probability that something will happen or not. So his probability of being pardoned remains at 1/3.

The new information he does have is that Charlie is not the lucky man, so the chance that Bob gets lucky is 2/3.

Put another way, how is it possible that Adam and Bob received the same information but their odds of surviving are so different? It is because, when the warden made his selection, he would never have declared that Adam was going to die. On the other hand, he might well have declared Bob to be the condemned man. In fact, there was a 50-50 chance he would have done so. Therefore, the fact that he didn’t name Bob provides valuable information as to the likelihood that Bob was pardoned while telling us nothing as to whether Adam was.

This is an example of the reality that belief updates must depend not merely on the facts observed but also on the method of establishing those facts.

In case there is still any doubt, imagine that there were 26 prisoners instead of 3. Adam asks the warden not to reveal his own fate but to name in random order 24 of the other prisoners who are to be executed. So what is the chance that Bob will be the lucky one of 26 before the warden reveals any names? It is 1/26, the same chance as each of the other prisoners. Every time, however, that the warden names a dead man walking, say Charlie or Daniel, that reduces their chances to zero and increases the chance of all those left except for Adam, who has expressly asked not to be named, regardless of whether he is to be executed. So it means a lot to learn that the warden has eliminated everyone but Bob given that he had every opportunity to name Bob as one of those going to the gallows. It means nothing that he has not named Adam because he was expressly told not to, whatever his fate.

In a 26-man line-up, where the warden in random order names who are condemned, once everyone but Bob has been named for execution by the warden, Adam’s chance of surviving stays at 1/26. Bob’s chance of being pardoned rises to 25/26. This is despite the fact that there are only two remaining prisoners who have not been named for execution by the warden. Would you take 20/1 now that Adam will be spared? You might, if you were Bob, but you are not getting a good price, and you will not have long to spend it!
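For those who prefer to check by brute force, here is a minimal simulation sketch of the three-prisoner version, conditioning on the warden naming Charlie exactly as described above.

```python
import random

def trial():
    """One run of the problem: who is pardoned, and whom the warden names."""
    pardoned = random.choice(["Adam", "Bob", "Charlie"])
    if pardoned == "Adam":
        named = random.choice(["Bob", "Charlie"])   # warden flips a coin
    elif pardoned == "Bob":
        named = "Charlie"
    else:
        named = "Bob"
    return pardoned, named

runs = [trial() for _ in range(100_000)]
# Condition on the warden naming Charlie, as in the story.
charlie_named = [pardoned for pardoned, named in runs if named == "Charlie"]
print("P(Adam pardoned | Charlie named):",
      charlie_named.count("Adam") / len(charlie_named))   # ~1/3
print("P(Bob pardoned  | Charlie named):",
      charlie_named.count("Bob") / len(charlie_named))    # ~2/3
```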

 

What’s in the box? Betting on the Bertrand’s Box paradox.

You are presented with three identical boxes. You are made aware that one of the boxes contains two gold coins, another contains two silver coins, and the third contains one gold coin and one silver coin. You do not know which box contains which.

Now, choose a box at random. Reach under the cloth covering the coins, without looking, and take out one of the coins. Now you can look. It is gold.

So you can be sure that the box you chose cannot be the box containing the two silver coins. It must be either the box containing two gold coins or the box containing one gold coin and one silver coin.

Withdrawing the gold coin from the box doesn’t provide you with the information to identify which of these two boxes it is. So the other coin must either be a gold coin or a silver coin.

Given what you now know, what is the probability that the other coin in the box is also gold, and what odds would you take to bet on it?

This is essentially the so-called ‘Bertrand’s Box’ paradox, first proposed by Joseph Bertrand in 1889 in his opus, ‘Calcul des probabilités’.

 

Spoiler alert (Solution)

After withdrawing the gold coin, there are only two boxes left. One is the box containing the two gold coins and the other is the box containing one gold and one silver coin. It seems intuitively clear that each of these boxes is equally likely to be the one you chose at random, and that therefore the chance it is the box with two gold coins is 1/2, and the chance that it is the box containing one gold and one silver coin is also 1/2. Therefore, the probability that the other coin is gold must be 1/2.

This sounds right, but it is in fact the wrong answer.

In fact, there are three equally likely scenarios that might have led to you choosing that shiny gold coin.

Let us separately label all the coins in the boxes to make this clear.

In the box containing two gold coins, there will be Gold Coin 1 and Gold Coin 2. These are both gold coins but they are distinct, different coins.

In the box containing the gold and silver coins, we have Gold Coin 3, which is a different coin to Gold Coin 1 and Gold Coin 2. There is also what we might label Silver Coin 3 in the box with Gold Coin 3. This silver coin is distinct and different to what we might label Silver Coin 1 and Silver Coin 2, which are in the box containing two silver coins, which was not selected.

So here are the equally likely scenarios when you withdrew a gold coin from the box.

a. You chose Gold Coin 1.

b. You chose Gold Coin 2.

c. You chose Gold Coin 3.

You do not know which of these gold coins you withdrew from the box.

If it was Gold Coin 1, the other coin in the box is also gold.

If it was Gold Coin 2, the other coin in the box is also gold.

If it was Gold Coin 3, the other coin in the box is silver.

Each of these possible scenarios is equally likely (i.e. each has a probability of being the true state of the world of 1/3), so the  probability that the other coin is gold is 2/3 and the probability that the other coin is silver is 1/3. So, if you are offered even money about the other coin being gold, the edge is very much with you.

Before withdrawing the gold coin, the chance that the box you had selected was that containing two gold coins was 1/3. By revealing the gold coin, however, you not only excluded the box containing two silver coins but also introduced the new  information that you could potentially have chosen a silver coin (if the selected box was that containing one gold and one silver coin) but in fact did not. That made it more likely (twice as likely) that the box you withdrew the gold coin from was that containing the two gold coins than the box containing one gold and one silver coin.
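A quick simulation bears this out. This minimal sketch simply repeats the experiment described above: pick a box at random, draw one of its coins at random, and record the other coin whenever the drawn coin is gold.

```python
import random

BOXES = [("gold", "gold"), ("gold", "silver"), ("silver", "silver")]

def trial():
    coins = list(random.choice(BOXES))   # pick a box at random
    random.shuffle(coins)                # draw one of its coins at random
    return coins[0], coins[1]            # (drawn coin, other coin)

draws = [trial() for _ in range(100_000)]
gold_first = [other for drawn, other in draws if drawn == "gold"]
print("P(other coin is gold | drawn coin is gold) =",
      gold_first.count("gold") / len(gold_first))   # ~0.667, i.e. 2/3
```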

And that is the solution to the Bertrand’s Box paradox.

 

 

Election 2017: How well did the pollsters and pundits do?

When Theresa May announced on April 18 that she would call a snap general election, most commentators viewed the precise outcome of the vote as little more than a formality. The Conservatives were sailing more than 20% ahead of the Labour party in a number of opinion polls, and most expected them to be swept back into power with a hefty majority.
Even after a campaign blighted by manifesto problems and two terrorist attacks, the Conservatives were by election day still comfortably ahead in most polls and in the betting markets. According to the spread betting markets, they were heading for an overall majority north of 70 seats, while a number of forecasting methodologies projected that Jeremy Corbyn’s Labour could end up with fewer than 210.

In particular, an analysis of the favourite in each of the seats traded on the Betfair market gave the Tories 366 seats and Labour 208. The Predictwise betting aggregation site gave the Conservatives an 81% chance of securing an overall majority of seats, in line with the large sums of money trading on the Betfair exchange.

The PredictIt prediction market, meanwhile, estimated just a 15% chance that the Tories would secure 329 or fewer seats in the House of Commons (with 326 technically required for a majority), while the Oddschecker odds comparison site rated a “hung parliament” result an 11/2 chance (an implied probability of 15.4%). Only the Almanis crowd forecasting platform expressed any real doubt, putting the chance of a Conservative overall majority at a relatively paltry 62%.
In reality, the Conservative party lost more than a dozen seats net, ending up with 318 – eight short of a majority. Labour secured 262 seats, the Scottish National party 35, and the Liberal Democrats 12. Their vote shares were 42.4%, 40%, 3% and 7.9% respectively.
So did the opinion polls do any better than the betting markets? With the odd exception, no.

In their final published polls, ICM put the Tories on 46%, up 12% on Labour. ComRes predicted the Tories would score 44% with a 10-point lead. BMG Research was even further out, putting the Conservatives on 46% and a full 13% clear of Labour. YouGov put the Tories seven points clear of Labour (though their constituency-level model did a lot better), as did Opinium; Ipsos MORI and Panelbase had them eight points clear on 44%.

Other polls were at least in the ballpark. Kantar Public put the Tories 5% ahead of Labour, and SurveyMonkey (for the Sun) called the gap at 4%. Survation, the firm closest to the final result in their unpublished 2015 poll, this time put the Conservatives on 42% and Labour on 40%, very close to the actual result. Qriously (for Wired) was the only pollster to put Labour ahead, by three points.

According to the Chris Hanretty 2017 UK Parliamentary Election Forecast polling model, the Conservatives were heading for 366 seats, Labour 207, and the Liberal Democrats seven. Allowing for statistical uncertainty, the projection was of an “almost certain” overall majority for the Conservatives. The probability of a hung parliament was put at just 3%. All very bad misses.

Many others were wrong, too. The 2017 General Election Combined Forecast, which aggregates betting markets and polling models, forecast a Conservative majority of 66 seats. Other “expert” forecasts came from Britain Elects (Tories 356 seats, Labour 219 seats), Ashcroft (363, 217), Electoral Calculus (358, 218), Matt Singh (374, 207), Nigel Marriott (375, 202), Election Data (387, 186), Michael Thrasher (349, 215), Iain Dale (392, 163) and Andreas Murr and his colleagues (361, 236).
So what went wrong?

In the wake of the 2015 election, the Brexit referendum and Donald Trump’s victory, forecasters are getting used to fielding that question. But the answer isn’t that difficult: the problem lies in quantifying, in advance, the key factor behind the common forecasting meltdown. That factor is turnout, and notably relative turnout across different demographics.

In the Brexit referendum and 2016 US presidential election, turnout by poorer and less educated voters, especially outside urban areas, hit unprecedentedly high levels, as people who had never voted before (and may never vote again) came out in droves. In both cases, forecasters’ pre-vote turnout models had predicted that these voters wouldn’t show up in nearly the numbers they did.
In the 2017 election, it was turnout among the young in particular that rocketed. This time the factor was widely expected to matter, and indeed get-out-the-vote campaigns aimed at the young were based on it. But most polling models failed to properly account for it, and that meant their predictions were wrong.

Polling is a moving target, and the spoils go to those who are most adept at taking and changing aim. So will the lesson be learned for next time? Possibly. But next time, under-25s might not turn out in anything like the same numbers – or a different demographic altogether might surprise everyone. We might not have long to wait to find out.

 

References:

Leighton Vaughan Williams. Report card: How well did UK election forecasters perform this time?  Article in The Conversation. Link below:

https://theconversation.com/report-card-how-well-did-uk-election-forecasters-perform-this-time-79237

The Nash Equilibrium: Snappy Slides


Can Game Theory teach us how to play the game of life?

If there is a set of ‘game’ strategies with the property that no ‘player’ can benefit by changing their strategy while the other players keep their strategies unchanged, then that set of strategies and the corresponding payoffs constitute what is known as the ‘Nash equilibrium’.

This leads us to the classic ‘Prisoner’s Dilemma’ problem. In this scenario, two prisoners, linked to the same crime, are offered a discount on their prison terms for confessing if the other prisoner continues to deny it, in which case the other prisoner will receive a much stiffer sentence. However, they will both be better off if both deny the crime than if both confess to it. The problem each faces is that they can’t communicate and strike an enforceable deal. The box diagram below shows an example of the Prisoner’s Dilemma in action.

                          Prisoner 2 Confesses                 Prisoner 2 Denies
Prisoner 1 Confesses      2 years each                         Freedom for P1; 8 years for P2
Prisoner 1 Denies         8 years for P1; Freedom for P2       1 year each

The Nash Equilibrium is for both to confess, in which case they will both receive 2 years. But this is not the outcome they would have chosen if they could have agreed in advance to a mutually enforceable deal. In that case they would have chosen a scenario where both denied the crime and received 1 year each.

So a Nash equilibrium is a stable state involving interacting participants in which none can gain by a unilateral change of strategy as long as the other participants' strategies remain unchanged. It is not necessarily the best outcome for the parties involved, but it is the outcome we would most likely predict.
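The definition can be checked mechanically for the table above. The sketch below (sentences in years, lower being better) simply tests every pair of actions for a profitable unilateral deviation; only 'both confess' survives.

```python
from itertools import product

# Years in prison for (Prisoner 1, Prisoner 2), taken from the table above.
YEARS = {
    ("confess", "confess"): (2, 2),
    ("confess", "deny"):    (0, 8),
    ("deny",    "confess"): (8, 0),
    ("deny",    "deny"):    (1, 1),
}
ACTIONS = ("confess", "deny")

def is_nash(a1, a2):
    """Nash equilibrium: neither prisoner can cut his own sentence by
    switching action while the other's action stays fixed."""
    y1, y2 = YEARS[(a1, a2)]
    p1_can_improve = any(YEARS[(alt, a2)][0] < y1 for alt in ACTIONS)
    p2_can_improve = any(YEARS[(a1, alt)][1] < y2 for alt in ACTIONS)
    return not p1_can_improve and not p2_can_improve

for a1, a2 in product(ACTIONS, ACTIONS):
    if is_nash(a1, a2):
        print("Nash equilibrium:", a1, a2, YEARS[(a1, a2)])   # confess, confess, (2, 2)
```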

The Prisoner’s Dilemma is a one-stage game, however. What happens in games with more than one round, where players can learn from the previous moves of the other players?

Take the case of a 2-round game. The payoff from the game will equal the sum of payoffs from both moves.

The game starts with two players, each of whom is given £100 to place into a pot. They can then secretly choose to honour the deal or to cheat on the deal, by means of giving an envelope to the host containing the card ‘Honour’ or ‘Cheat’.  If they both choose to ‘Honour’ the deal, an additional £100 is added to the pot, yielding each an additional £50. So they end up with £150 each. But if one honours the deal and the other cheats on the deal, the ‘Cheat’ wins the original pot (£200) and the ‘Honour’ player loses all the money in that round.  A third outcome is that both players choose to ‘Cheat’, in which case each keeps the original £100. So in this round, the dominant strategy for each player (assuming no further rounds) is to ‘Cheat’, as this yields a higher payoff if the opponent ‘Honours’ the deal (£200 instead of £150) and a higher payoff if the opponent ‘Cheats’ (£100 instead of zero). The negotiated, mutually enforceable outcome, on the other hand, would be to agree to both ‘Honour’ the deal and go away with £150.

But how does this change in a 2-round game?

Actually, it makes no difference. In this scenario, the second round is the final round, in which you may as well ‘Cheat’, as there are no future rounds in which to reap the benefit of any goodwill generated by honouring the deal. Your opponent knows this too, so you can assume that your opponent, who wishes to maximise his total payoff, will be hostile on the second move. He will assume the same about you.

Since you will both ‘Cheat’ on the second and final move, why be friendly on the first move?

So the dominant strategy is to ‘Cheat’ on the first round.

What if there are three rounds? The same applies. You know that your opponent will ‘Cheat’ on the final round and therefore the penultimate round as well. So your dominant strategy is to ‘Cheat’ on the first round, the second round and the final round. The same goes for your opponent. And so on. In any finite, pre-determined number of rounds, the dominant strategy in any round is to ‘Cheat.’

But what if the game involves an indeterminate number of moves? Suppose that after each move, you roll two dice. If you get a double-six, the game ends. Any other combination of numbers, play another round. Keep playing until you get a double-six. Your score for the game is the sum of your payoffs.

This sort of game in fact mirrors many real-world situations. In real life, you often don’t know when the game will end.

What is the best strategy in repeated play? For the game outlined above, we shall denote ‘Honour the deal’ as a ‘Friendly’ move and ‘Cheat’ as a hostile move. But the notion of a Friendly or Hostile approach can adopt other guises in different games.

There are seven proposed strategies here.

  1. Always Friendly. Be friendly every time
  2. Always Hostile. Be hostile every time
  3. Retaliate. Be Friendly as long as your opponent is Friendly but if your opponent is ever Hostile, you be Hostile from that point on.
  4. Tit for tat. Be Friendly on the first move. Thereafter, do whatever your opponent did on the previous move.
  5. Random. On each move, toss a coin. If Heads, be Friendly. If tails, be Hostile.
  6. Alternate. Be Friendly on even-numbered moves, and Hostile on odd-numbered moves, or vice-versa.
  7. Fraction. Be Friendly on the first move. Thereafter, be Friendly if the fraction of times your opponent has been Friendly up to that point is greater than a half. Be Hostile if it is less than or equal to a half.

Which of these is the dominant strategy in this game of iterated play? Actually, there is no dominant strategy in an iterated game, but which strategy actually wins if every strategy plays every other strategy?

‘Always Hostile’ does best against ‘Always Friendly’ because every time you are Friendly against an ‘Always Hostile’, you are punished with the ‘sucker’ payoff.

‘Always Friendly’ does best against Retaliation, because the extra payoff you get from a Hostile move is eventually negated by the Retaliation.

Thus even the choice of whether to be Friendly or Hostile on the first move depends on the opponent’s strategy.

For every two distinct strategies, A and B, there is a strategy C against which A does better than B, and a strategy D against which B does better than A.

So which strategy wins when every strategy plays every other strategy in a tournament? This has been computer simulated many times. And the winner is Tit for Tat.
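As a flavour of how such a tournament works, here is a toy round-robin sketch using the per-round payoffs of the game described above (£150 each for mutual honouring, £200 against £0 when one player cheats on an honourer, £100 each for mutual cheating) and a fixed match length in place of the dice-based stopping rule. With such a small field the exact ranking depends on which strategies enter, which is precisely the point made above about no strategy being dominant.

```python
import random

# Per-round payoff to the row player: F = honour the deal, H = cheat.
PAYOFF = {("F", "F"): 150, ("F", "H"): 0, ("H", "F"): 200, ("H", "H"): 100}

def always_friendly(me, opp): return "F"
def always_hostile(me, opp):  return "H"
def retaliate(me, opp):       return "H" if "H" in opp else "F"
def tit_for_tat(me, opp):     return opp[-1] if opp else "F"
def random_play(me, opp):     return random.choice("FH")

STRATEGIES = [always_friendly, always_hostile, retaliate, tit_for_tat, random_play]

def match(strat_a, strat_b, rounds=200):
    """Play one match and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

totals = {s.__name__: 0 for s in STRATEGIES}
for i, a in enumerate(STRATEGIES):
    for b in STRATEGIES[i + 1:]:          # every strategy plays every other once
        sa, sb = match(a, b)
        totals[a.__name__] += sa
        totals[b.__name__] += sb

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} {total}")
```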

It’s true that Tit for Tat can never get a higher score than a particular opponent, but it wins tournaments where each strategy plays every other strategy. In particular, it does well against Friendly strategies, while it is not exploited by Hostile strategies. So you can trust Tit for Tat. It won’t take advantage of another strategy. Tit for Tat and its opponents both do best when both are Friendly. Look at it this way. There are two reasons for a player to be unilaterally hostile, i.e. to take advantage of an opponent or to avoid being taken advantage of by an opponent. Tit for Tat eliminates the reasons for being Hostile.

What accounts for Tit for Tat’s success, therefore, is its combination of being nice, retaliatory, forgiving and clear.

In other words, success in an evolutionary ‘game’ is correlated with the following characteristics:

Be willing to be nice: cooperate, never be the first to defect.

Don’t be played for a sucker: return defection for defection, cooperation for cooperation.

Don’t be envious: focus on how well you are doing, as opposed to ensuring you are doing better than everyone else.

Be forgiving if someone is willing to change their ways and co-operate with you. Don’t bear grudges for old actions.

Don’t be too clever or too tricky. Clarity is essential for others to cooperate with you.

As Robert Axelrod, who pioneered this area of game theory in his book ‘The Evolution of Cooperation’, put it: Tit for Tat’s “niceness prevents it from getting into unnecessary trouble. Its retaliation discourages the other side from persisting whenever defection is tried. Its forgiveness helps restore mutual cooperation. And its clarity makes it intelligible to the other player, thereby eliciting long-term cooperation.”

How about the bigger picture? Can Tit for Tat perhaps teach us a lesson in how to play the game of life? Yes, in my view it probably can.

 

Further Reading and Links

Axelrod, Robert (1984), The Evolution of Cooperation, Basic Books

Axelrod, Robert (2006), The Evolution of Cooperation (Revised ed.), Perseus Books Group

Axelrod, R. and Hamilton, W.D. (1981), The Evolution of Cooperation, Science, 211, 1390-96. http://www-personal.umich.edu/~axe/research/Axelrod%20and%20Hamilton%20EC%201981.pdf

https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation

What should Messi do? Applying Game Theory to Penalty Kicks.

The El Clasico game between Real Madrid and Barcelona is in the 23rd minute at the Santiago Bernabeu when Lionel Messi is brought down in the penalty box and rewarded with a spot kick against the custodian of the Los Blancos net, Keylor Navas.

Messi knows from the team statistician that if he aims straight and the goalkeeper stands still, his chance of scoring is just 30%. But if he aims straight and Navas dives to one corner, his chance of converting the penalty rises to 90%.

On the other hand, if Messi aims at a corner and the goalkeeper stands still, his chance of scoring is a solid 80%, while it falls to 50% if the goalkeeper dives to a corner.

We are here simplifying the choices to two distinct options, for the sake of simplicity and clarity.

Navas also knows from his team statistician that if he dives to one corner and Messi aims straight, his chance of saving is just 10%. But if he stands still and Messi aims at one corner, his chance of saving the penalty rises to 50%.

On the other hand, if Navas stands still and Messi aims at a corner, his chance of making the save is just 20%, while it rises to 70% if Messi aims straight.

So this is the payoff matrix, so to speak, facing Messi as he weighs up his decision.

                                  Goalkeeper – Stands still     Goalkeeper – Dives to one corner
Lionel Messi – Aims straight      30%                           90%
Lionel Messi – Aims at corner     80%                           50%

 

So what should he do: aim straight or at a corner? And what should Navas do: stand still or dive?

Here is the payoff matrix facing Navas.

                               Messi – Aims straight     Messi – Aims at a corner
Navas – Stands still           70%                       20%
Navas – Dives to one corner    10%                       50%

 

Game theory can help here.

Neither player has what is called a dominant strategy in game-theoretic terms, i.e. a strategy that is better than the other, no matter what the opponent does. The optimal strategy will depend on what the opponent’s strategy is.

In such a situation, game theory indicates that both players should mix their strategies, in Messi’s case aiming for the corner with a two-thirds chance, while the goalkeeper should dive with a 5/9 chance.

These figures are derived by finding the ratio where the chance of scoring (or saving) is the same, whichever of the two tactics the other player uses.

 The Proof

Suppose the goalkeeper opts to stand still, then Messi’s chance (if he aims for the corner 2/3 of the time) = 1/3 x 30% + 2/3 x 80% = 10% + 53.3% = 63.3%

If the goalkeeper opts to dive, Messi’s chance = 1/3 x 90% + 2/3 x 50% = 30% + 33.3% = 63.3%

Adopting this mixed strategy (aim for the corner 2/3 of the time and shoot straight 1/3 of the time), the chance of scoring is therefore the same. This is the ideal mixed strategy, according to standard game theory.

From the point of view of Navas, on the other hand, if Messi aims straight, his  chance of saving the penalty kick (if he dives 5/9 of the time) = 5/9 x 10% + 4/9 x 70% = 5.6% + 31.1% = 36.7%

If Messi opts to aim for the corner, Navas’ chance = 5/9 x 50% + 4/9 x 20% = 27.8% + 8.9% = 36.7%

Adopting this mixed strategy (dive for the corner 5/9 of the time and stand still 4/9 of the time), the chance of saving is therefore the same. This is the ideal mixed strategy, according to standard game theory.

The chances of Messi scoring and Navas making the save in each case add up to 100%, which cross-checks the calculations.
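The algebra is set out in the Appendix below; the same indifference conditions can also be solved in a few lines of code. This is a minimal sketch using the scoring percentages from the payoff matrices above.

```python
from fractions import Fraction

# Messi's chance of scoring (%), from the payoff matrix above.
straight_stand, straight_dive = 30, 90   # Messi aims straight
corner_stand, corner_dive     = 80, 50   # Messi aims at a corner

# Messi aims at the corner with probability x; at the equilibrium mix
# Navas is indifferent between standing still and diving:
#   x*corner_stand + (1-x)*straight_stand = x*corner_dive + (1-x)*straight_dive
x = Fraction(straight_dive - straight_stand,
             (corner_stand - straight_stand) + (straight_dive - corner_dive))

# Navas dives with probability y; at the equilibrium mix Messi is
# indifferent between aiming straight and aiming at the corner:
#   (1-y)*straight_stand + y*straight_dive = (1-y)*corner_stand + y*corner_dive
y = Fraction(corner_stand - straight_stand,
             (corner_stand - straight_stand) + (straight_dive - corner_dive))

print("Messi aims at the corner with probability", x)             # 2/3
print("Navas dives with probability", y)                           # 5/9
print("Messi's scoring chance at equilibrium:",
      float(x * corner_stand + (1 - x) * straight_stand))          # ~63.3%
```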

Of course, if the striker or the goalkeeper gives away real new information about what he will do, then each of them can adjust tactics and increase their chance of scoring or saving.

To properly operationalise a mixed strategy requires one extra element, and that is the ability to truly randomise the choices, so that Messi actually does have exactly a 2/3 chance of aiming for the corner, and Navas actually does have a 5/9 chance of diving for the corner. There are different ways of achieving this. One method of achieving a 2/3 ratio is  to roll a die and go for the corner if it comes up 1, 2, 3 or 4, and aim straight if it comes up 5 or 6. Or perhaps not! But you get the idea.

For the record, Messi aimed at the left corner, Navas guessed correctly and got an outstretched hand to it, pushing it back into play. Leo stepped forward deftly to score the rebound. Cristiano Ronaldo equalised from the spot eight minutes later. And that’s how it ended at the Bernabeu. Real Madrid 1 Barcelona 1. Honours even in El Clasico.

 

Appendix

Messi’s strategy

x = chance that Messi should aim at corner

y = chance that Messi should aim straight

So,

80x + 30y (if Navas stands still) = 50x + 90y (if Navas dives)

x + y = 1

So,

30x = 60y

30x = 60 (1-x)

90x = 60

x = 2/3

y=1/3

 

Navas’ strategy

x = chance that Navas should dive to corner

y  = chance that Navas should stand still

So,

10x + 70y (if Messi aims straight) = 50x + 20y (if Messi aims at corner)

x+y = 1

So,

10x + 70y = 50x + 20y

40x = 50y

40x = 50(1-x)

90x = 50

x = 5/9

y = 4/9

 

 

The Super Bowl Stock Market Indicator – Guide Notes.

“Few prediction schemes have been more accurate, and at the same time more perplexing, than the Super Bowl Stock Market Predictor, which asserts that the league affiliation of the Super Bowl winner predicts stock market direction. In this study, the authors examine the record and statistical significance of this anomaly and demonstrate that an investor would have clearly outperformed the market by reacting to Super Bowl game outcomes.” Thus read the abstract to a paper published in 1990 by Thomas Krueger and William Kennedy in the very well regarded Journal of Finance.

“If the Super Bowl is won by a team from the old National Football League (now the NFC, or National Football Conference),” they wrote, “then the stock market is very likely to finish the year higher than it began. On the other hand, if the game is won by a team from the old American Football League (now the AFC, or American Football Conference), the market will finish lower than it began.”

It is important to note, though, that some AFC teams count as NFL wins because they originated in the old NFL, i.e. the Pittsburgh Steelers, the Baltimore Ravens (formerly the Cleveland Browns) and the Baltimore/Indianapolis Colts.

Over the 22-year history of the Super Bowl to the date of submission of their study in 1988, they documented a 91% accuracy rate for their predictor.

What happened in 1989? The NFC team, the San Francisco 49ers, beat the AFC’s Cincinnati Bengals, and the stock market rose 27%.

This was further confirmation of an idea first proposed by New York Times sportswriter Leonard Koppett and published as The Super Bowl Predictor by investment advisor Robert H. Stovall in the January 1988 issue of Financial World.

So what happened in 1990?  Well, the NFC’s San Francisco 49ers won a second consecutive victory, beating the AFC’s Denver Broncos, by 55 points to 10. But the stock market fell in 1990, by 4.3%.

But then the Super Bowl Predictor returned to form, correctly predicting the direction of the stock market in 1991, 1992, 1993, 1994, 1995, 1996 and 1997. Since the launch of the Super Bowl, that made for 28 correct predictions out of 31 (a success rate of 90.3%).

Since 1997, however, the Super Bowl Predictor has had a much more chequered record, predicting correctly only about half the time. In 2009, Robert Stovall, a strategist for Wood Asset Management in Sarasota, Florida, and an early champion of the indicator, wrote: “Nothing seems to be working anymore [in the stock market]. Used to be, I was only happy when it was over 90% (accurate), and when it was still above 80% I was pleased. But certainly 79% is still far above a failing grade.” (Quoted on January 12, 2009, in MarketBeat, WSJ.com’s inside look at the markets.)

Prior to Super Bowl 2017, the Predictor had called it right five times since then (2010, 2011, 2012, 2014 and 2015) and wrong twice (2013 and 2016). Over the whole run of Super Bowls, the indicator had been right a total of 40 times out of 50, as measured by the S&P 500 index. That year the AFC’s New England Patriots stormed from 25 points behind at one point in the game to beat the Atlanta Falcons by 34 points to 28 in overtime. It should have presaged a bad year for the stock markets, but in fact the markets climbed. They should also have climbed following the 2018 victory of the NFC’s Philadelphia Eagles over the Patriots, but the reverse happened. So the indicator, as of Super Bowl 2019, had been right 40 times out of 52, with a failing record for each of the previous three years.

For those still retaining some faith in the indicator, and wanting to see a good year ahead for the stock market, the team to cheer for in 2019 was the LA Rams, of the NFC. Having said that, their opponents, the AFC’s New England Patriots, won the 2017 Super Bowl, and that was followed by a good year on the markets anyway. On the betting markets, the Patriots were the marginal favourites to win in 2019 and triumphed by 13 points to 3. We now wait to see what 2019 brings.

So is the Super Bowl Indicator a real forecasting tool, or is it simply descriptive of what has happened rather than containing any predictive value?

You decide!

 

Exercise

Do you consider that the Super Bowl Indicator has any value as a stock market predictor?

 

Reading and Links

Krueger, T.M. and Kennedy, W.F. (1990), An Examination of the Super Bowl Stock Market Predictor, Journal of Finance, 1990, 45 (2), 691-697. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-6261.1990.tb03712.x

Schmidt, B. and Clayton, R. (2017), Super Bowl Indicator and Equity Markets: Correlation not Causation, Journal of Business Inquiry, 17 (2), 97-103. http://journals.uvu.edu/index.php/jbi/article/download/235/208

The Efficiency of Markets – Guide Notes


The Efficient Market Hypothesis (EMH), in its strictest form, holds that market prices or odds reflect all known information.

Prices or odds may change when new information is released, but this new information is unpredictable.

So the best estimate of the price or odds likely to prevail at any point in the future is the price now. This is dismal, if true, because it would mean that it is not possible to beat the market, except by chance. But this can’t be true, or else it creates a paradox. If the market was always efficient, traders would have no economic incentive to acquire information, since information acquisition and processing is not a costless activity and would add nothing to what can be obtained by simply looking at current market prices.

It has been mathematically proved (Grossman and Stiglitz, 1980, American Economic Review) that when information is not costless to obtain or process, asset prices can never fully reflect all the information available to traders. So in the real world, markets are not completely efficient. They cannot be. This result is a relief, at least in principle, to those seeking to beat the market.  

The equilibrium proposed by Grossman and Stiglitz is one in which some profits are available to some investors.

Essentially, rational, ‘informed’ traders will seek to acquire and process new information whenever the benefits of doing so are greater than the costs. They will do so up to the point, economists say, where the marginal costs equal the marginal benefits of obtaining and processing information.

But to the extent that trading is a zero-sum game, or worse, as most betting markets are, winners need losers.

So who are the winners and who are the losers?

To take the example of a poker game, good players need weak players in the game. In the financial literature these ‘weak players’ are known as ‘noise (or ‘uninformed’) traders.’ Noise makes trading in financial markets possible, and thus allows us to observe prices for financial assets. But noise also causes markets to be somewhat inefficient.

Imagine a world with no noise traders, no information costs, no trading costs –  an efficient market, a market in which it would be irrational to place any trades, a market without traders, a strange kind of world. So the Efficient Market hypothesis in its strictest form cannot be true. So it is possible in principle to beat the market. How might we do this? By a ‘technical’ strategy which uses information contained in past and present prices or odds. Or by a ‘fundamental’ strategy which uses information about real variables, such as form. Or by some combination of these. Those practical matters can be examined at another time. Here we are looking exclusively at whether markets are informationally inefficient as a matter of principle, in a world of positive information costs, or indeed transaction costs, and we can conclude that the answer is Yes.

There is another systematic reason why markets might be inefficient in a broader sense, and that is the existence of asymmetric information. A notable case of this is called ‘adverse selection’, which refers to a situation in which the buyer or seller of a product knows something about the product quality or condition that the other party does not know, allowing them to have a better estimate of what the true cost of the product should be. This can lead to the breakdown of a market in which it exists. George Akerlof’s seminal article (‘The Market for Lemons’), published in 1970 in the Quarterly Journal of Economics, which examined the problem of adverse selection on the market for used cars has important implications for any market characterised by adverse selection.

Here is the problem. If Mr. Smith wants to SELL me his horse, do I really WANT to buy it? It’s a question as old as markets and horses have existed, but it was for many, many years, one of the unspoken questions of economics. So how do we solve this paradox? For most of the history of economics, the answer was quite simple. Simply assume perfect markets and perfect information, so the horse buyer would know everything about the horse, and so would the seller, and in those cases where the horse is worth more to the buyer than the seller, both can strike a mutually beneficial deal. There’s a term for this: ‘gains from trade’.

In the real world, the person selling the horse is likely to know rather more about it than the potential purchaser. This is called ‘asymmetric information’, and the buyer is facing what is called an ‘adverse selection’ problem, as he has adverse information relative to the seller. Akerlof had become intrigued by the way in which economists were limited by their assumption of well-functioning markets characterised by perfect information. For example, the conventional wisdom was that unemployment was simply caused by money wages adjusting too slowly to changes in the supply and demand for labour. This was the so-called ‘neo-classical synthesis’ and it assumed classic markets, albeit they could be a bit slow to work.

At the same time, economists had come to doubt that changes in the availability of capital and labour could in themselves explain economic growth. The role of education was called upon as a sort of magic bullet to explain why an economy grew as fast as it did. But how can we distinguish the impact on productivity of the education itself from the extent to which education simply helped grade people? The idea here is that more able people will tend on average to seek out more education. So how far does education contribute to growth, and how far is it simply a signal and a screen for employers? In the real world, of course, these signals could be useful because employers are like the horse buyers – they know less about the potential employees than the employees know about themselves, the classic adverse selection problem.

Akerlof turned to the used car market for the answer, not least because at the time a major factor in the business cycle was the big fluctuation in sales of new cars. Just like in the market for horses, the first thing a potential used car buyer is likely to ask is “Why should I WANT to buy that used car if he wants so much to SELL it to me”. The suspicion is that the car is what Americans call a ‘lemon’, a sub-standard pick of the crop. Owners of better quality used cars, called ‘plums’, are much less likely to want to sell.

Now let’s say that you’re willing to spend £10,000 on a plum but only £5,000 on a lemon. In such a case, the best price you’d be willing to pay is about £7,500, and only then if you thought there was an equal chance of a lemon and a plum. At this price, though, sellers of the plums will tend to back out, but sellers of the troublesome lemons will be very happy to accept your offer.

But as a buyer you know this, so will not be willing to pay £7,500 for what is very likely to be a lemon. The prices that will be offered in this scenario may well spiral down to £5,000 and only the worst used cars will be bought and sold. The bad lemons have effectively driven out the good plums, and buyers will start buying new cars instead of plums. Just as with horses, asymmetric and imperfect information in the used car market has the potential, therefore, to severely compromise its effective operation.
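The downward spiral can be made concrete with a small sketch. The buyer valuations (£10,000 for a plum, £5,000 for a lemon) are those used above; the sellers' reservation prices (£8,000 and £3,000) are purely illustrative assumptions added to close the model.

```python
# Buyer valuations are from the example above; seller reservation prices
# are assumed for illustration only.
BUYER_VALUE  = {"plum": 10_000, "lemon": 5_000}
SELLER_VALUE = {"plum": 8_000,  "lemon": 3_000}

cars = ["plum"] * 50 + ["lemon"] * 50   # cars initially offered for sale

while True:
    # Buyers offer the average value of the cars they expect to find on sale.
    offer = sum(BUYER_VALUE[c] for c in cars) / len(cars)
    # Sellers whose car is worth more to them than the offer withdraw it.
    remaining = [c for c in cars if SELLER_VALUE[c] <= offer]
    print(f"offer £{offer:,.0f}: {remaining.count('plum')} plums, "
          f"{remaining.count('lemon')} lemons still for sale")
    if remaining == cars:
        break
    cars = remaining
# First pass: the £7,500 offer drives out the plums; the price then falls
# to £5,000 and only lemons trade.
```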

We can assume that the demand for used cars depends most strongly on two variables – the price of the car and the average quality of used cars traded. Both the supply of used cars and the average quality will depend upon the price. In equilibrium, the supply equals the demand for the given average quality. As the price declines, normally the quality will also fall. And it’s quite possible that no cars will be traded at any price.

This same idea shows the problem of medical insurance. In a free market for medical insurance, people above a certain age, for example, will have great difficulty in buying medical insurance. So why doesn’t the price rise to match the risk? The answer is that as the price level rises the people who insure themselves will be those who are increasingly certain that they will need the insurance. In consequence, the average medical condition of insurance applicants deteriorates as the price level rises – such that no insurance sales [for these age groups] may take place at any price. This is strictly analogous to the car case, where the average quality of used cars supplied fell with a corresponding fall in the price level. The principle of ‘adverse selection’ is potentially present in all lines of insurance. Adverse selection can arise whenever those seeking insurance have freedom to buy or not to buy, to choose the insurance plan, and to continue or discontinue as a policy holder.

There are ways to counteract the effects of quality uncertainty, such as guarantees on consumer durables. Brand names perform a complementary function. Brand names not only indicate quality but also give the consumer a means of retaliation if the quality does not meet expectations. Chains – such as hotel chains or restaurant chains – are similar to brand names. Licensing practices also reduce quality uncertainty. And education and labour markets themselves have their own ‘brand names.’

So one of the big problems that confront markets is the fact that some of the participants often don’t know certain things that others in the market do know.  This includes the market for most consumer durables, virtually all jobs markets, many financial markets, etc. In these cases, one of the roles of economics is to ask what system of incentives is most likely to address this problem of imperfect and asymmetric information. In economics, signalling is the idea that one party (termed the agent) credibly conveys some information about itself to another party (the principal). Signals should be distinguished from what have been called ‘indices’ ( a term coined by Robert Jervis in his 1968 PhD thesis). Indices are attributes over which one has no control. Think of these as generally unalterable attributes of something or someone. Signals are things that are visible and that are in part designed to communicate. In a sense, they are alterable attributes. So employees send a signal about their ability level to the employer by acquiring certain education credentials. The informational value of the credential comes from the fact that the employer assumes it is positively correlated with having greater ability.

Education credentials can be used as a signal to the firm, indicating a certain level of ability that the individual may possess, thereby narrowing the informational gap. In a seminal article on signalling published in 1973, Michael Spence proposed the key assumption that good-type employees pay less for one unit of education than bad-type employees. In Spence’s model it is optimal for the higher ability person to obtain the credential (the observable signal) but not for the lower ability individual. The premise for the model is that a person of high ability has a lower cost for obtaining a given level of education than does a person of lower ability. Cost can be in terms of tuition costs, or intangible costs, such as stress and time and effort in obtaining the qualification. Thus, if both individuals act rationally it is optimal for the higher ability person to obtain the qualification but not for the lower ability person so long as the employers respond to the signal correctly. This will result in the workers self-sorting into the two groups. For this to work, it must be excessively costly, or impossible, to project a false image. The basic argument follows from the intuition that a behaviour that costs nothing can be equally well taken by anyone and so provides no information. It follows that perceivers should focus on behaviour which is costly to undertake. Signalling is an action by a party with good information that is confined to situations of asymmetric information.
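A stripped-down version of Spence's separating condition can be written out directly. The salary and cost numbers below are illustrative assumptions, not from the text: the credential works as a signal when the wage premium it commands covers the high-ability worker's cost of acquiring it but not the low-ability worker's higher cost.

```python
# Illustrative numbers only (not from the text).
wage_with_credential, wage_without = 60_000, 40_000
cost_high_ability, cost_low_ability = 15_000, 25_000   # cost of acquiring the credential

premium = wage_with_credential - wage_without
high_acquires = premium >= cost_high_ability    # worth it for the high-ability worker
low_acquires  = premium >= cost_low_ability     # not worth it for the low-ability worker

print("Wage premium:", premium)                                       # 20000
print("High-ability worker acquires credential:", high_acquires)      # True
print("Low-ability worker acquires credential:", low_acquires)        # False
print("Separating (credential is informative):",
      high_acquires and not low_acquires)                             # True
```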

The concept of screening should be distinguished from signalling, the latter implying that the informed agent moves first. When there is asymmetric information in the market, screening can involve incentives that encourage the better informed to self-select or self-reveal.

Joseph Stiglitz pioneered the theory of screening, examining how a less informed party can induce the other party to reveal their information. They can provide a menu of choices in such a way that the optimal choice of the other party depends on their private information. For example, a theme park might offer a menu of gold and silver tickets, where the more expensive gold ticket allows the customer to avoid the queue at rides. This will induce the customers to self-sort and reveal genuine information as to the value they place on their time and their desire to avoid the queues.

So can markets be efficient? In the strictest informational sense, the answer is No. But there are ways in which they can be made more efficient in the broader sense of the term than they would be in their natural state.

Reading and Links

Missing Markets: Insurance and Lemons. CORE. https://core-econ.org/the-economy/book/text/12.html#126-missing-markets-insurance-and-lemons

The Efficient Market Hypothesis and Its Critics. Burton Malkiel. 2003.


Akerlof, G. (1970), The Market for Lemons: Quality, Uncertainty and the Market Mechanism. Quarterly Journal of Economics. 84:488-500.

Grossman, S.J. and Stiglitz, J. (1980), The Impossibility of Informationally Efficient Markets, American Economic Review, June, 393-408.

Jervis, R. Signaling and Perception, in Kristen Monroe, ed., Political Psychology (Erlbaum, 2002).

Spence, M. (1973), Job Market Signaling, Quarterly Journal of Economics, 87 (3), 355–374.

Joseph E. Stiglitz, 1975. “The Theory of ‘Screening’, Education, and the Distribution of Income,” American Economic Review, 65(3), pp. 283–300.

Joseph E. Stiglitz, 2001. “Information and the Change in the Paradigm of Economics”, Nobel Prize Lecture, December 8.

A. Michael Spence, 2001. “Signaling in Retrospect and the Informational Structure of Markets”, Nobel Prize Lecture, December 8.

George A. Akerlof, 2001. “Behavioral Macroeconomics and Macroeconomic Behavior”, Nobel Prize Lecture, December 8.

 

 

Managing and beating the line in betting markets: a primer

The ‘over-round’

In a two-horse race, if both horses have an equal chance of winning (objectively), and both are offered at evens, then the expected profit of the market-maker (and of the bettor) is zero, ignoring operating, information and transactions costs.

In a two-horse race, if both are offered at evens (regardless of the respective probabilities of victory of the two horses), then it would require a stake of £x (split equally between the two horses) to be sure of being returned that £x (a net profit of zero) whichever horse wins.  In this circumstance, the over-round of the bookmaker is said to be 100%, i.e. a notional profit margin of zero.

In practice, even if the notional profit margin is zero, the bookmaker is at a disadvantage if the horses are not equally matched, as a sophisticated bettor can take advantage by staking more than half on the horse with the greater chance of winning.

More generally, the over-round does not yield an accurate indicator of the bookmaker’s profit margin if bettors do not stake across all options in such a way as to ensure that their total stake of £x yields a certain return of £x, factored by the over-round.

For example, if the over-round is 120%, the notional margin to the bookmaker is 20%, and put simply bettors would have to stake £120 to ensure a return of £100.  Say, for instance, that both horses in a 2-horse race are being offered at 4 to 6.  Then the bettor would need to stake £60 on each (£120 in total) to be guaranteed a return of £100 (£40 plus the £60 stake returned) whichever horse won.  In such circumstances, the bookmaker is guaranteed a 20% profit, regardless of the outcome.

If one horse is offered at 4 to 6 and the other at 6 to 4, the bettor can guarantee a zero profit (and loss) by staking £60 at 4 to 6 and £40 at 6 to 4.  That way, a £100 return is guaranteed for a total stake of £100, regardless of the outcome.  Again, if the horse offered at 4 to 6 is actually a 4 to 7 chance, and bettors stake exclusively on this horse, their expected return is positive (although there is now a risk of losing the entire stake), and the expected return of the bookmaker is negative (though the actual return may be positive).

To summarize, the notional margin, as implied in the over-round, formally equates to the actual margin only if bettors stake proportionately more on the outcome offered at shorter odds.
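The worked examples above can be reproduced with a short helper. This is a minimal sketch: it converts fractional odds to decimal odds, sums the implied probabilities to give the over-round, and computes the stakes needed to lock in the same return whichever horse wins.

```python
from fractions import Fraction

def decimal_odds(fractional):
    """Convert fractional odds such as '4/6' to decimal odds (stake included)."""
    num, den = map(int, fractional.split("/"))
    return 1 + Fraction(num, den)

def over_round(odds):
    """Sum of implied probabilities across all outcomes, as a percentage."""
    return float(100 * sum(1 / decimal_odds(o) for o in odds))

def hedging_stakes(odds, target_return=100):
    """Stake on each outcome needed to return target_return whichever wins."""
    return [float(target_return / decimal_odds(o)) for o in odds]

print(over_round(["4/6", "4/6"]), hedging_stakes(["4/6", "4/6"]))
# 120.0 [60.0, 60.0] -> stake £120 in total to guarantee £100 back
print(over_round(["4/6", "6/4"]), hedging_stakes(["4/6", "6/4"]))
# 100.0 [60.0, 40.0] -> stake £100 in total to guarantee £100 back
```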

 Creating an over-round

Take as an example the following odds offered about a binary proposition to players, where the odds-maker believes that the objective probability of X winning is 1 in 5 (0.2) and of Y winning is 4 in 5 (0.8).

Assuming an over-round of 100% (i.e. margin of zero), the odds-setter (taken here to be a bookmaker) would set the following odds:

Odds about X = 5.0 (4 to 1): Odds about Y = 1.25 (1 to 4).

Assume now that the odds-maker wishes to create an over-round of 108%.

The odds offered should be cut by 8 per cent in each case. So 8% of 5.0 = 0.4, and deducting 0.4 from 5.0 gives 4.6. 8% of 1.25 = 0.1, and deducting 0.1 from 1.25 gives 1.15.

So in the particular example, the odds offered would be as follows:

Odds about X = 4.6; Odds about Y = 1.15.
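In code, the cut described above is a one-liner per price. A minimal sketch of that simple method (shaving each fair decimal price by 8%):

```python
def shave_odds(fair_decimal_odds, margin=0.08):
    """The simple method used above: cut each fair decimal price by the
    desired margin to build the bookmaker's edge into the odds."""
    return [round(price * (1 - margin), 2) for price in fair_decimal_odds]

fair_prices = [5.0, 1.25]           # fair prices for X (20%) and Y (80%)
print(shave_odds(fair_prices))       # [4.6, 1.15], as in the worked example
```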

Assuming an equal amount (say £1,000) is bet on both sides of the proposition (i.e. a total of £2,000, consisting of perhaps 200 people betting £10 each), the profit (loss) to the bookmaker would vary depending on the outcome.

If horse X wins, the bookmaker will pay out:

4.6x £1,000 = £4,600

Total amount staked (on X and Y) = £2,000.

Net profit to bookmaker if horse X wins = £2,000 – £4,600 = – £2,600

So if horse X wins, bookmaker loses £2,600.

If horse Y wins, the bookmaker will pay out:

1.15 x £1,000 = £1,150

Total amount staked (on X and Y) = £2,000

Net profit to bookmaker if horse Y wins = £2,000 – £1,150 = £850

Expected value of profit = expected value of profit from X + expected value of profit from Y = (-£2,600) x 0.2 + (£850) x 0.8 = -£520 + £680 = £160.

This is assuming that the implied probabilities in the odds are the correct probabilities, i.e. odds of 4/1 = probability of 1/5 (0.2); odds of 1/4 = probability of 4/5 (0.8).

Note also that £160 = 8% of total stake on X and Y (£2,000).

This all assumes, as observed, that the objective probabilities are correctly observed and that the amount staked on both sides of the proposition are equal.

Even if we assume that the objective probabilities are correctly observed, there is still substantial volatility of outcome (i.e. risk) for the bookmaker. If the objective probability is incorrectly observed, however, the outcome for the bookmaker may be worse, i.e. a systematic loss.

For example, assume the probability of horse X winning is actually 25%; assume probability of horse Y winning is 75%.

At the given odds levels, and assuming equal stakes across both propositions, we derive the following.

As above, if horse X wins, the bookmaker will pay out, as before:

4.6 x £1,000 = £4,600

Total amount staked (on X and Y) = £2,000.

Net profit to bookmaker if horse X wins = £2,000 – £4,600 = – £2,600

So if horse X wins, bookmaker loses £2,600.

If horse Y wins, the bookmaker will pay out, as before:

1.15 x £1,000 = £1,150

Total amount staked (on X and Y) = £2,000

Net profit to bookmaker if horse Y wins = £2,000 – £1,150 = £850

Expected value of profit = expected value of profit from X + expected value of profit from Y = (-£2,600) x 0.25 + (£850) x 0.75 = -£650 + £637.50 = -£12.50, i.e. a loss of £12.50.

Insofar as the objective probability of horse X winning is greater than 20%, the expected profit to the bookmaker will decline. At 24.65%, the profit (rounded to the nearest pound) can be shown to be equal to zero, and above that to turn negative.

Assume objective probability of horse X winning = 0.2465; objective probability of horse Y winning = 0.7535.

Then, expected value of profit = expected value of profit from X + expected value of profit from Y = (-£2,600) x 0.2465 + (£850) x 0.7535 = -£640 + £640 = 0

To the extent that the objective probabilities are inaccurately estimated, therefore, there is significant potential from the bookmaker’s point of view for a negative expected (as well as actual) profit.
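The arithmetic above generalises easily. This sketch recomputes the bookmaker's profit for each outcome at the quoted odds, the expected profit under the assumed 20%/80% and 25%/75% probability splits, and the break-even probability for horse X.

```python
def bookmaker_profit(decimal_odds, stakes, winner):
    """Net profit to the bookmaker when outcome `winner` (an index) wins:
    all stakes taken in, minus the payout on the winning selection."""
    return sum(stakes) - decimal_odds[winner] * stakes[winner]

def expected_profit(decimal_odds, stakes, true_probs):
    return sum(p * bookmaker_profit(decimal_odds, stakes, i)
               for i, p in enumerate(true_probs))

odds, stakes = [4.6, 1.15], [1000, 1000]     # £1,000 staked on each of X and Y

print(bookmaker_profit(odds, stakes, 0))             # about -2,600 if X wins
print(bookmaker_profit(odds, stakes, 1))             # about 850 if Y wins
print(expected_profit(odds, stakes, [0.20, 0.80]))   # about 160
print(expected_profit(odds, stakes, [0.25, 0.75]))   # about -12.5
print(850 / 3450)   # break-even probability for X, ~0.2464 (the 24.65% above)
```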

Using the probabilities from the original example, the staking pattern from the bettor’s point of view that will lead to a unique expected loss (8% in this case) across both betting propositions is to bet more on the favourite and less on the longshot, in this case £1,600 and £400 respectively.

This leads to the following outcomes:

Return to a £400 bet on horse X (if it wins) at 4.60 = £1,840

Return to a £1,600 bet on horse Y (if it wins) at 1.15 = £1,840

Guaranteed profit by staking these sums on each horse from the bettor’s point of view = – £160, i.e. a net loss of 8% of total stake.

Insofar as bettors can be induced to bet in these proportions, the operator is guaranteed a profit regardless of the outcome. If the average bet size is the same for bets made on either side, then we need four times as many bettors on the favourite as the longshot to achieve this. Otherwise, the same outcome can be achieved if those who are backing the favourite bet four times as much in total as those backing the longshot.
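The £400/£1,600 split can be computed directly: stake each outcome in inverse proportion to its decimal odds and the payout is the same whichever horse wins. A minimal sketch:

```python
def balanced_stakes(decimal_odds, total=2000):
    """Split `total` across the outcomes so the payout is identical
    whichever wins: stakes in inverse proportion to the decimal odds."""
    inverse = [1 / o for o in decimal_odds]
    return [total * w / sum(inverse) for w in inverse]

odds = [4.6, 1.15]
stakes = balanced_stakes(odds)
print([round(s) for s in stakes])                       # [400, 1600]
print([round(o * s) for o, s in zip(odds, stakes)])     # [1840, 1840] either way
print(round(sum(stakes) - odds[0] * stakes[0]))         # 160 kept by the bookmaker
```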

Another way to manage risk in the face of unbalanced staking patterns is to move the odds so as to limit the maximum loss.

In order to reduce the maximum downside (i.e. when X wins), the bookmaker may move the odds in such a way as to attract money onto one horse and away from the other. To do this, the odds about one horse may be lengthened and those about the other horse shortened, so as to limit the potential loss whichever outcome occurs. While such a strategy may reduce the exposure of the operator, the price may be paid in reduced profits.

Ultimately, line management from the operator’s point of view is about balancing risk and return, while maintaining an edge in favour of the ‘house’. From the bettor’s point of view, it is about exploiting opportunities which might arise where one (or more) of the odds making up that over-round are mispriced in the bettor’s favour, a possibility which can arise even when the over-round favours the ‘house.’