When Theresa May announced on April 18 that she would call a snap general election, most commentators viewed the precise outcome of the vote as little more than a formality. The Conservatives were sailing more than 20% ahead of the Labour party in a number of opinion polls, and most expected them to be swept back into power with a hefty majority.
Even after a campaign blighted by manifesto problems and two terrorist attacks, the Conservatives were by election day still comfortably ahead in most polls and in the betting markets. According to the spread betting markets, they were heading for an overall majority north of 70 seats, while a number of forecasting methodologies projected that Jeremy Corbyn’s Labour could end up with fewer than 210.
In particular, an analysis of the favourite in each of the seats traded on the Betfair market gave the Tories 366 seats and Labour 208. The Predictwise betting aggregation site gave the Conservatives an 81% chance of securing an overall majority of seats, in line with the large sums of money trading on the Betfair exchange.
The PredictIt prediction market, meanwhile, estimated just a 15% chance that the Tories would secure 329 or fewer seats in the House of Commons (with 326 technically required for a majority), while the Oddschecker odds comparison site rated a “hung parliament” result an 11/2 chance (an implied probability of 15.4%). Only the Almanis crowd forecasting platform expressed any real doubt, putting the chance of a Conservative overall majority at a relatively paltry 62%.
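The conversion from quoted odds to implied probabilities used above is simple enough to sketch. The snippet below converts the 11/2 fractional odds; the function name is ours, for illustration only.

```python
# Converting fractional bookmaker odds to an implied probability.
# Fractional odds a/b mean a stake of b returns a profit of a,
# so the implied probability is b / (a + b).

def implied_prob_fractional(numerator: int, denominator: int) -> float:
    """Implied probability of fractional odds numerator/denominator."""
    return denominator / (numerator + denominator)

p = implied_prob_fractional(11, 2)
print(f"11/2 implies {p:.1%}")  # 15.4%, matching the Oddschecker figure
```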
In reality, the Conservative party lost more than a dozen seats net, ending up with 318 – eight short of a majority. Labour secured 262 seats, the Scottish National party 35, and the Liberal Democrats 12. Their vote shares were 42.4%, 40%, 3% and 7.9% respectively.
So did the opinion polls do any better than the betting markets? With the odd exception, no.
In their final published polls, ICM put the Tories on 46%, 12 points ahead of Labour. ComRes predicted the Tories would score 44% with a 10-point lead. BMG Research was even further out, putting the Conservatives on 46% and a full 13 points clear of Labour. YouGov put the Tories seven points clear of Labour (though their constituency-level model did a lot better), as did Opinium; Ipsos MORI and Panelbase had them eight points clear on 44%.
Other polls were at least in the ballpark. Kantar Public put the Tories 5% ahead of Labour, and SurveyMonkey (for the Sun) called the gap at 4%. Survation, the firm closest to the final result in their unpublished 2015 poll, this time put the Conservatives on 42% and Labour on 40%, very close to the actual result. Qriously (for Wired) was the only pollster to put Labour ahead, by three points.
According to the 2017 UK Parliamentary Election Forecast polling model, the Conservatives were heading for 366 seats, Labour 207, and the Liberal Democrats seven. Allowing for statistical uncertainty, the projection was of an “almost certain” overall majority for the Conservatives. The probability of a hung parliament was put at just 3%. All misses – though that doesn’t necessarily reflect on the model, which after all can only be as good as the polls fed into it.
Many others were wrong, too. The 2017 General Election Combined Forecast, which aggregates betting markets and polling models, forecast a Conservative majority of 66 seats. Other “expert” forecasts came from Britain Elects (Tories 356 seats, Labour 219 seats), Ashcroft (363, 217), Electoral Calculus (358, 218), Matt Singh (374, 207), Nigel Marriott (375, 202), Election Data (387, 186), Michael Thrasher (349, 215), Iain Dale (392, 163) and Andreas Murr and his colleagues (361, 236).
So what went wrong?
In the wake of the 2015 election, the Brexit referendum and Donald Trump’s victory, forecasters are getting used to fielding that question. But the answer isn’t that difficult: the problem lies in quantifying, in advance, the key factor behind each of these forecasting meltdowns. That factor is turnout, and notably relative turnout across different demographics.
In the Brexit referendum and 2016 US presidential election, turnout by poorer and less educated voters, especially outside urban areas, hit unprecedentedly high levels, as people who had never voted before (and may never vote again) came out in droves. In both cases, forecasters’ pre-vote turnout models had predicted that these voters wouldn’t show up in nearly the numbers they did.
In the 2017 election, it was turnout among the young in particular that rocketed. This time the factor was widely expected to matter, and indeed get-out-the-vote campaigns aimed at the young were based on it. But most polling models failed to properly account for it, and that meant their predictions were wrong.
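To see how much an assumed turnout rate can move a headline poll figure, here is a toy reweighting exercise. All the numbers below (party shares by age group, group sizes, turnout rates) are invented purely to illustrate the mechanism; they are not the 2017 polling data.

```python
# A toy illustration of turnout weighting: the same raw party shares
# produce different headline figures under different turnout assumptions.
# Every number here is hypothetical.

def headline_share(group_shares, group_sizes, turnout):
    """Overall vote share: each group weighted by its size times its turnout."""
    weights = [s * t for s, t in zip(group_sizes, turnout)]
    total = sum(weights)
    return sum(sh * w for sh, w in zip(group_shares, weights)) / total

# Hypothetical Labour share by age group (young, middle, old)
labour = [0.60, 0.40, 0.25]
sizes = [0.2, 0.5, 0.3]  # hypothetical share of the electorate in each group

low_youth = headline_share(labour, sizes, [0.40, 0.65, 0.75])
high_youth = headline_share(labour, sizes, [0.65, 0.65, 0.75])
print(f"Labour headline with low youth turnout:  {low_youth:.1%}")
print(f"Labour headline with high youth turnout: {high_youth:.1%}")
```

Raising only the assumed youth turnout lifts the headline Labour figure by well over a point in this toy example, which is exactly the sensitivity that caught out the turnout models.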
Polling is a moving target, and the spoils go to those who are most adept at taking and changing aim. So will the lesson be learned for next time? Possibly. But next time, under-25s might not turn out in anything like the same numbers – or a different demographic altogether might surprise everyone. We might not have long to wait to find out.
If there is a set of ‘game’ strategies with the property that no ‘player’ can benefit by changing their strategy while the other players keep their strategies unchanged, then that set of strategies and the corresponding payoffs constitute what is known as the ‘Nash equilibrium’.
This leads us to the classic ‘Prisoner’s Dilemma’ problem. In this scenario, two prisoners, linked to the same crime, are offered a discount on their prison terms for confessing if the other prisoner continues to deny it, in which case the other prisoner will receive a much stiffer sentence. However, they will both be better off if both deny the crime than if both confess to it. The problem each faces is that they can’t communicate and strike an enforceable deal. The box diagram below shows an example of the Prisoner’s Dilemma in action.
|                      | Prisoner 2 Confesses           | Prisoner 2 Denies              |
|----------------------|--------------------------------|--------------------------------|
| Prisoner 1 Confesses | 2 years each                   | Freedom for P1; 8 years for P2 |
| Prisoner 1 Denies    | 8 years for P1; Freedom for P2 | 1 year each                    |
The Nash Equilibrium is for both to confess, in which case they will both receive 2 years. But this is not the outcome they would have chosen if they could have agreed in advance to a mutually enforceable deal. In that case they would have chosen a scenario where both denied the crime and received 1 year each.
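This can be verified mechanically. The sketch below encodes the payoff table as years in prison (negated, so that larger is better) and searches all four pure-strategy profiles for Nash equilibria.

```python
# Brute-force check that (Confess, Confess) is the unique pure-strategy
# Nash equilibrium of the Prisoner's Dilemma table above.
# Payoffs are negative years in prison; index 0 = Confess, 1 = Deny.

P1 = [[-2, 0],    # P1 confesses: 2 years if P2 confesses, free if P2 denies
      [-8, -1]]   # P1 denies: 8 years if P2 confesses, 1 year if both deny
P2 = [[-2, -8],   # P2's payoffs for the same four outcomes
      [0, -1]]

def nash_equilibria(P1, P2):
    """All profiles (i, j) where neither player gains by deviating alone."""
    eq = []
    for i in range(2):
        for j in range(2):
            best_i = all(P1[i][j] >= P1[k][j] for k in range(2))
            best_j = all(P2[i][j] >= P2[i][k] for k in range(2))
            if best_i and best_j:
                eq.append((i, j))
    return eq

print(nash_equilibria(P1, P2))  # [(0, 0)] -- both confess
```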
So a Nash equilibrium is a stable state of a game involving interacting participants, in which no participant can gain by a unilateral change of strategy as long as the other participants’ strategies remain unchanged. It is not necessarily the best outcome for the parties involved, but it is the outcome we would most likely predict.
The Prisoner’s Dilemma is a one-stage game, however. What happens in games with more than one round, where players can learn from the previous moves of the other players?
Take the case of a 2-round game. The payoff from the game will equal the sum of payoffs from both moves.
The game starts with two players, each of whom is given £100 to place into a pot. Each then secretly chooses to honour the deal or to cheat on it, by handing the host an envelope containing the card ‘Honour’ or ‘Cheat’. If both choose to ‘Honour’ the deal, an additional £100 is added to the pot, yielding each an additional £50, so they end up with £150 each. But if one honours the deal and the other cheats, the ‘Cheat’ wins the original pot (£200) and the ‘Honour’ player loses all the money in that round. A third outcome is that both players choose to ‘Cheat’, in which case each keeps the original £100. So in this round, the dominant strategy for each player (assuming no further rounds) is to ‘Cheat’, as this yields a higher payoff if the opponent ‘Honours’ the deal (£200 instead of £150) and a higher payoff if the opponent ‘Cheats’ (£100 instead of zero). The negotiated, mutually enforceable outcome, on the other hand, would be for both to ‘Honour’ the deal and go away with £150 each.
But how does this change in a 2-round game?
Actually, it makes no difference. In this scenario, the second round is the final round, in which you may as well ‘Cheat’, since there are no future rounds in which to reap the benefit of any goodwill earned by honouring the deal. Your opponent knows this, so you can assume that an opponent who wishes to maximise his total payoff will be Hostile on the second move. He will assume the same about you.
Since you will both ‘Cheat’ on the second and final move, why be friendly on the first move?
So the dominant strategy is to ‘Cheat’ on the first round.
What if there are three rounds? The same applies. You know that your opponent will ‘Cheat’ on the final round and therefore the penultimate round as well. So your dominant strategy is to ‘Cheat’ on the first round, the second round and the final round. The same goes for your opponent. And so on. In any finite, pre-determined number of rounds, the dominant strategy in any round is to ‘Cheat.’
But what if the game involves an indeterminate number of moves? Suppose that after each move, you roll two dice. If you get a double-six, the game ends. Any other combination of numbers, play another round. Keep playing until you get a double-six. Your score for the game is the sum of your payoffs.
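Under this stopping rule the number of rounds is geometrically distributed with success probability 1/36, so the expected game length is 36 rounds. A quick simulation (seeded for reproducibility) bears this out.

```python
# Simulating the double-six stopping rule: each round ends the game with
# probability 1/36, so the expected number of rounds is 36.
import random

def game_length(rng):
    """Roll two dice per round; the game ends on the first double-six."""
    rounds = 0
    while True:
        rounds += 1
        if rng.randint(1, 6) == 6 and rng.randint(1, 6) == 6:
            return rounds

rng = random.Random(42)
n = 100_000
avg = sum(game_length(rng) for _ in range(n)) / n
print(f"average length over {n} games: {avg:.1f}")  # close to 36
```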
This sort of game in fact mirrors many real-world situations. In real life, you often don’t know when the game will end.
What is the best strategy in repeated play? For the game outlined above, we shall denote ‘Honour the deal’ as a ‘Friendly’ move and ‘Cheat’ as a ‘Hostile’ move. But the notion of a Friendly or Hostile approach can adopt other guises in different games.
There are seven proposed strategies here.
- Always Friendly. Be Friendly every time.
- Always Hostile. Be Hostile every time.
- Retaliate. Be Friendly as long as your opponent is Friendly, but if your opponent is ever Hostile, be Hostile from that point on.
- Tit for Tat. Be Friendly on the first move. Thereafter, do whatever your opponent did on the previous move.
- Random. On each move, toss a coin. If Heads, be Friendly. If Tails, be Hostile.
- Alternate. Be Friendly on even-numbered moves and Hostile on odd-numbered moves, or vice versa.
- Fraction. Be Friendly on the first move. Thereafter, be Friendly if the fraction of times your opponent has been Friendly up to that point is greater than a half; be Hostile if it is less than or equal to a half.
Which of these is the dominant strategy in this game of iterated play? Actually, there is no dominant strategy in an iterated game; but we can still ask which strategy wins when every strategy plays every other strategy.
‘Always Hostile’ does best against ‘Always Friendly’: every time you play Friendly against an ‘Always Hostile’ opponent, you are punished with the ‘sucker’ payoff.
‘Always Friendly’ does best against ‘Retaliate’, because the extra payoff gained from a Hostile move is eventually negated by the retaliation.
Thus even the choice of whether to be Friendly or Hostile on the first move depends on the opponent’s strategy.
For every two distinct strategies, A and B, there is a strategy C against which A does better than B, and a strategy D against which B does better than A.
So which strategy wins when every strategy plays every other strategy in a tournament? This has been computer simulated many times. And the winner is Tit for Tat.
It’s true that Tit for Tat can never score higher than any particular opponent, but it wins tournaments in which each strategy plays every other strategy. In particular, it does well against Friendly strategies, while not being exploited by Hostile ones. So you can trust Tit for Tat: it won’t take advantage of another strategy, and Tit for Tat and its opponents both do best when both are Friendly. Look at it this way: there are two reasons for a player to be unilaterally Hostile, namely to take advantage of an opponent, or to avoid being taken advantage of by an opponent. Tit for Tat eliminates both reasons for being Hostile.
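A minimal round-robin tournament can be sketched in a few lines, using the Honour/Cheat payoffs from the game above (Friendly = Honour, Hostile = Cheat) and four of the seven strategies, each also playing a copy of itself. A fixed 50 rounds is used here for reproducibility rather than the dice-based stopping rule. Note that in a field this small, Tit for Tat ties with Retaliate at the top; Axelrod’s tournaments, with dozens of more varied entrants, produced Tit for Tat as the outright winner.

```python
# Round-robin tournament with the Honour/Cheat payoffs from the game above:
# both Friendly: 150 each; lone Hostile: 200 vs 0; both Hostile: 100 each.
PAYOFF = {('F', 'F'): (150, 150), ('F', 'H'): (0, 200),
          ('H', 'F'): (200, 0), ('H', 'H'): (100, 100)}

def always_friendly(mine, theirs): return 'F'
def always_hostile(mine, theirs): return 'H'
def retaliate(mine, theirs): return 'H' if 'H' in theirs else 'F'
def tit_for_tat(mine, theirs): return theirs[-1] if theirs else 'F'

def play(s1, s2, rounds=50):
    """Play two strategies against each other; return their total scores."""
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        total1 += p1; total2 += p2
    return total1, total2

strategies = {'Always Friendly': always_friendly,
              'Always Hostile': always_hostile,
              'Retaliate': retaliate,
              'Tit for Tat': tit_for_tat}

totals = dict.fromkeys(strategies, 0)
names = list(strategies)
for i, n1 in enumerate(names):
    for n2 in names[i:]:
        a, b = play(strategies[n1], strategies[n2])
        if n1 == n2:
            totals[n1] += a    # self-play counted once
        else:
            totals[n1] += a
            totals[n2] += b

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} {score}")
```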
What accounts for Tit for Tat’s success, therefore, is its combination of being nice, retaliatory, forgiving and clear.
In other words, success in an evolutionary ‘game’ is correlated with the following characteristics:
- Be willing to be nice: cooperate, never be the first to defect.
- Don’t be played for a sucker: return defection for defection, cooperation for cooperation.
- Don’t be envious: focus on how well you are doing, as opposed to ensuring you are doing better than everyone else.
- Be forgiving if someone is willing to change their ways and co-operate with you. Don’t bear grudges for old actions.
- Don’t be too clever or too tricky. Clarity is essential for others to cooperate with you.
As Robert Axelrod, who pioneered this area of game theory, put it in his book ‘The Evolution of Cooperation’: Tit for Tat’s “niceness prevents it from getting into unnecessary trouble. Its retaliation discourages the other side from persisting whenever defection is tried. Its forgiveness helps restore mutual cooperation. And its clarity makes it intelligible to the other player, thereby eliciting long-term cooperation.”
How about the bigger picture? Can Tit for Tat perhaps teach us a lesson in how to play the game of life? Yes, in my view it probably can.
Further Reading and Links
Axelrod, Robert (1984), The Evolution of Cooperation, Basic Books
Axelrod, Robert (2006), The Evolution of Cooperation (Revised ed.), Perseus Books Group
Axelrod, R. and Hamilton, W.D. (1981), The Evolution of Cooperation, Science, 211, 1390-96. http://www-personal.umich.edu/~axe/research/Axelrod%20and%20Hamilton%20EC%201981.pdf
The El Clasico game between Real Madrid and Barcelona is in the 23rd minute at the Santiago Bernabeu when Lionel Messi is brought down in the penalty box and rewarded with a spot kick against the custodian of the Los Blancos net, Keylor Navas.
Messi knows from the team statistician that if he aims straight and the goalkeeper stands still, his chance of scoring is just 30%. But if he aims straight and Navas dives to one corner, his chance of converting the penalty rises to 90%.
On the other hand, if Messi aims at a corner and the goalkeeper stands still, his chance of scoring is a solid 80%, while it falls to 50% if the goalkeeper dives to a corner.
We are here simplifying each player’s choices to two distinct options, for the sake of clarity.
Navas also knows from his team statistician that if he dives to one corner and Messi aims straight, his chance of saving is just 10%. But if he dives and Messi aims at one corner, his chance of saving the penalty rises to 50%.
On the other hand, if Navas stands still and Messi aims at a corner, his chance of making the save is just 20%, while it rises to 70% if Messi aims straight.
So this is the payoff matrix, so to speak, facing Messi as he weighs up his decision.
|                      | Goalkeeper stands still | Goalkeeper dives to one corner |
|----------------------|-------------------------|--------------------------------|
| Messi aims straight  | 30%                     | 90%                            |
| Messi aims at corner | 80%                     | 50%                            |
So what should he do? Aim straight or to a corner? And what should Navas do? Stand still or dive?
Here is the payoff matrix facing Navas.
|                           | Messi aims straight | Messi aims at a corner |
|---------------------------|---------------------|------------------------|
| Navas stands still        | 70%                 | 20%                    |
| Navas dives to one corner | 10%                 | 50%                    |
Game theory can help here.
Neither player has what is called a dominant strategy in game-theoretic terms, i.e. a strategy that is better than the other, no matter what the opponent does. The optimal strategy will depend on what the opponent’s strategy is.
In such a situation, game theory indicates that both players should mix their strategies, in Messi’s case aiming for the corner with a two-thirds chance, while the goalkeeper should dive with a 5/9 chance.
These figures are derived by finding the ratio where the chance of scoring (or saving) is the same, whichever of the two tactics the other player uses.
The Proof
Suppose the goalkeeper opts to stand still, then Messi’s chance (if he aims for the corner 2/3 of the time) = 1/3 x 30% + 2/3 x 80% = 10% + 53.3% = 63.3%
If the goalkeeper opts to dive, Messi’s chance = 1/3 x 90% + 2/3 x 50% = 30% + 33.3% = 63.3%
Adopting this mixed strategy (aim for the corner 2/3 of the time and shoot straight 1/3 of the time), Messi’s chance of scoring is therefore the same whichever tactic the goalkeeper uses. This is the ideal mixed strategy, according to standard game theory.
From the point of view of Navas, on the other hand, if Messi aims straight, his chance of saving the penalty kick (if he dives 5/9 of the time) = 5/9 x 10% + 4/9 x 70% = 5.6% + 31.1% = 36.7%
If Messi opts to aim for the corner, Navas’ chance = 5/9 x 50% + 4/9 x 20% = 27.8% + 8.9% = 36.7%
Adopting this mixed strategy (dive to a corner 5/9 of the time and stand still 4/9 of the time), Navas’ chance of saving is therefore the same whichever tactic Messi uses. This is the ideal mixed strategy, according to standard game theory.
The chances of Messi scoring and Navas making the save in each case add up to 100%, which cross-checks the calculations.
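The indifference conditions can also be solved mechanically, as a cross-check on the arithmetic above. The sketch below uses exact fractions, with the percentages taken from the two payoff matrices; the formulas simply rearrange the indifference equations derived in the Appendix.

```python
# Solving the two indifference conditions with exact arithmetic.
from fractions import Fraction as F

# Messi's scoring chances (from his payoff matrix above)
m_straight_stand, m_corner_stand = F(30), F(80)
m_straight_dive, m_corner_dive = F(90), F(50)

# Indifference for Messi: (1-p)*30 + p*80 = (1-p)*90 + p*50
p = (m_straight_dive - m_straight_stand) / (
    (m_corner_stand - m_straight_stand) + (m_straight_dive - m_corner_dive))
score = (1 - p) * m_straight_stand + p * m_corner_stand

# Navas' saving chances (from his payoff matrix above)
n_straight_stand, n_corner_stand = F(70), F(20)
n_straight_dive, n_corner_dive = F(10), F(50)

# Indifference for Navas: (1-q)*70 + q*10 = (1-q)*20 + q*50
q = (n_straight_stand - n_corner_stand) / (
    (n_straight_stand - n_straight_dive) + (n_corner_dive - n_corner_stand))
save = (1 - q) * n_straight_stand + q * n_straight_dive

print(f"Messi aims at the corner with probability {p}, scoring {float(score):.1f}%")
print(f"Navas dives with probability {q}, saving {float(save):.1f}%")
print(f"Cross-check: scoring + saving = {float(score + save):.0f}%")
```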
Of course, if the striker or the goalkeeper gives away real new information about what he will do, then each of them can adjust tactics and increase their chance of scoring or saving.
To properly operationalise a mixed strategy requires one extra element, and that is the ability to truly randomise the choices, so that Messi actually does have exactly a 2/3 chance of aiming for the corner, and Navas actually does have a 5/9 chance of diving for the corner. There are different ways of achieving this. One method of achieving a 2/3 ratio is to roll a die and go for the corner if it comes up 1, 2, 3 or 4, and aim straight if it comes up 5 or 6. Or perhaps not! But you get the idea.
For the record, Messi aimed at the left corner, Navas guessed correctly and got an outstretched hand to it, pushing it back into play. Leo stepped forward deftly to score the rebound. Cristiano Ronaldo equalised from the spot eight minutes later. And that’s how it ended at the Bernabeu. Real Madrid 1 Barcelona 1. Honours even in El Clasico.
Appendix
Messi’s strategy
x = chance that Messi should aim at corner
y = chance that Messi should aim straight
So,
80x + 30y (if Navas stands still) = 50x + 90y (if Navas dives)
x + y = 1
So,
30x = 60y
30x = 60 (1-x)
90x = 60
x = 2/3
y=1/3
Navas’ strategy
x = chance that Navas should dive to corner
y = chance that Navas should stand still
So,
10x + 70y (if Messi aims straight) = 50x + 20y (if Messi aims at corner)
x+y = 1
So,
10x + 70y = 50x + 20y
40x = 50y
40x = 50(1-x)
90x = 50
x = 5/9
y = 4/9
“Few prediction schemes have been more accurate, and at the same time more perplexing, than the Super Bowl Stock Market Predictor, which asserts that the league affiliation of the Super Bowl winner predicts stock market direction. In this study, the authors examine the record and statistical significance of this anomaly and demonstrate that an investor would have clearly outperformed the market by reacting to Super Bowl game outcomes.” Thus read the abstract to a paper published in 1990 by Thomas Krueger and William Kennedy in the very well regarded Journal of Finance.
“If the Super Bowl is won by a team from the old National Football League (now the NFC, or National Football Conference),” they wrote, “then the stock market is very likely to finish the year higher than it began. On the other hand, if the game is won by a team from the old American Football League (now the AFC, or American Football Conference), the market will finish lower than it began.”
It is important to note, though, that some AFC teams count as NFL wins because they originated in the old NFL: the Pittsburgh Steelers, the Baltimore Ravens (formerly the Cleveland Browns) and the Baltimore/Indianapolis Colts.
Over the 22-year history of the Super Bowl to the date of submission of their study in 1988, they documented a 91% accuracy rate for their predictor.
What happened in 1989? The NFC team, the San Francisco 49ers, beat the AFC’s Cincinnati Bengals, and the stock market rose 27%.
This was further confirmation of an idea first proposed by New York Times sportswriter Leonard Koppett, and published as ‘The Super Bowl Predictor’ by investment advisor Robert H. Stovall in the January 1988 issue of ‘Financial World.’
So what happened in 1990? Well, the NFC’s San Francisco 49ers secured a second consecutive victory, beating the AFC’s Denver Broncos by 55 points to 10. But the stock market fell in 1990, by 4.3%.
But then the Super Bowl Predictor returned to form, correctly predicting the direction of the stock market every year from 1991 to 1997. Since the launch of the Super Bowl, that made for 28 correct predictions out of 31 (a success rate of 90.3%).
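As a rough significance check, we can ask how likely a record of at least 28 correct calls out of 31 would be if the predictor were pure coin-flipping. (This naive baseline is actually generous to the predictor: the market rises in most years, so always predicting “up” would itself beat 50%.)

```python
# Probability of at least 28 correct calls in 31 years under a
# fair-coin model of the Super Bowl Predictor.
from math import comb

n, k = 31, 28
p_at_least = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(at least {k}/{n} correct by chance) = {p_at_least:.2e}")
```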
Since then, the Super Bowl Predictor has had a much more chequered record, predicting correctly only about half the time since 1997. In 2009, Robert Stovall, a strategist for Wood Asset Management in Sarasota, Florida, and an early champion of the indicator, wrote: “Nothing seems to be working anymore [in the stock market]. Used to be, I was only happy when it was over 90% (accurate), and when it was still above 80% I was pleased. But certainly 79% is still far above a failing grade.” (quoted on January 12, 2009, in MarketBeat, WSJ.com’s ‘inside look at the markets’).
Since then, the Predictor has called it right five times (2010, 2011, 2012, 2014 and 2015) and wrong twice (2013 and 2016). As of January 2017, the indicator had been right a total of 40 times out of 50, as measured by the S&P 500 index. This year the AFC’s New England Patriots stormed back from 25 points behind to beat the Atlanta Falcons by 34 points to 28 in overtime. That should presage a bad year for the stock markets in 2017. We shall see.
So is the Super Bowl Indicator a real forecasting tool, or is it simply descriptive of what has happened rather than containing any predictive value?
You decide!
Further Reading
Krueger, T.M. and Kennedy, W.F. (1990), An Examination of the Super Bowl Stock Market Predictor, Journal of Finance, 1990, 45 (2), 691-697.
The Efficient Market Hypothesis (EMH), in its strictest form, holds that market prices or odds reflect all known information.
Prices or odds may change when new information is released, but this new information is unpredictable.
So the best estimate of the price or odds likely to prevail at any point in the future is the price now. This is a dismal conclusion, if true, because it would mean that it is not possible to beat the market, except by chance. But it can’t be entirely true, or else we have a paradox. If the market were always efficient, traders would have no economic incentive to acquire information, since information acquisition and processing is not a costless activity and would add nothing to what can be obtained by simply looking at current market prices.
It has been mathematically proved (Grossman and Stiglitz, 1980, American Economic Review) that when information is not costless to obtain or process, asset prices can never fully reflect all the information available to traders. So in the real world, markets are not completely efficient. They cannot be. This result is a relief, at least in principle, to those seeking to beat the market.
The equilibrium proposed by Grossman and Stiglitz is one in which some profits are available to some investors.
Essentially, rational, ‘informed’ traders will seek to acquire and process new information whenever the benefits of doing so are greater than the costs: up to the point, economists say, where the marginal costs equal the marginal benefits of obtaining and processing information.
But to the extent that trading is a zero-sum game, or worse, as most betting markets are, winners need losers.
So who are the winners and who are the losers?
To take the example of a poker game, good players need weak players in the game. In the financial literature these ‘weak players’ are known as ‘noise (or ‘uninformed’) traders.’ Noise makes trading in financial markets possible, and thus allows us to observe prices for financial assets. But noise also causes markets to be somewhat inefficient.
Imagine a world with no noise traders, no information costs, no trading costs – an efficient market, a market in which it would be irrational to place any trades, a market without traders, a strange kind of world. So the Efficient Market hypothesis in its strictest form cannot be true. So it is possible in principle to beat the market. How might we do this? By a ‘technical’ strategy which uses information contained in past and present prices or odds. Or by a ‘fundamental’ strategy which uses information about real variables, such as form. Or by some combination of these. Those practical matters can be examined at another time. Here we are looking exclusively at whether markets are inefficient as a matter of principle, in a world of positive information costs, and we have concluded that the answer is Yes.
There is another systematic reason why markets might be inefficient: the existence of asymmetric information. A notable case of this is ‘adverse selection’, which refers to a situation in which the buyer or seller of a product knows something about the product’s quality or condition that the other party does not, giving them a better estimate of what the true cost of the product should be. This can lead to the breakdown of the market in which it exists. George Akerlof’s seminal article (‘The Market for Lemons’), published in 1970 in the Quarterly Journal of Economics, examined the problem of adverse selection in the market for used cars, and it has important implications for any market characterised by adverse selection.
Here is the problem. If Mr. Smith wants to SELL me his horse, do I really WANT to buy it? It’s a question as old as markets and horses themselves, but it was for many, many years one of the unspoken questions of economics. So how do we resolve this paradox? For most of the history of economics, the answer was quite simple: assume perfect markets and perfect information. The horse buyer would then know everything about the horse, and so would the seller, and in those cases where the horse is worth more to the buyer than to the seller, both can strike a mutually beneficial deal. There’s a term for this: ‘gains from trade’.
In the real world, the person selling the horse is likely to know rather more about it than the potential purchaser. This is called ‘asymmetric information’, and the buyer is facing what is called an ‘adverse selection’ problem, as he has adverse information relative to the seller. Akerlof had become intrigued by the way in which economists were limited by their assumption of well-functioning markets characterised by perfect information. For example, the conventional wisdom was that unemployment was simply caused by money wages adjusting too slowly to changes in the supply and demand for labour. This was the so-called ‘neo-classical synthesis’ and it assumed classic markets, albeit they could be a bit slow to work.
At the same time, economists had come to doubt that changes in the availability of capital and labour could in themselves explain economic growth. The role of education was called upon as a sort of magic bullet to explain why an economy grew as fast as it did. But how can we distinguish the impact on productivity of the education itself from the extent to which education simply helped grade people? The idea here is that more able people will tend on average to seek out more education. So how far does education contribute to growth, and how far is it simply a signal and a screen for employers? In the real world, of course, these signals could be useful because employers are like the horse buyers – they know less about the potential employees than the employees know about themselves, the classic adverse selection problem.
Akerlof turned to the used car market for the answer, not least because at the time a major factor in the business cycle was the big fluctuation in sales of new cars. Just as in the market for horses, the first thing a potential used car buyer is likely to ask is “Why should I WANT to buy that used car if he wants so much to SELL it to me?” The suspicion is that the car is what Americans call a ‘lemon’, a sub-standard pick of the crop. Owners of better quality used cars, called ‘plums’, are much less likely to want to sell.
Now let’s say that you’re willing to spend £10,000 on a plum but only £5,000 on a lemon. In that case, the best price you’d be willing to pay is about £7,500, and even then only if you thought there was an equal chance of getting a lemon or a plum. At this price, though, sellers of plums will tend to back out, while sellers of the troublesome lemons will be very happy to accept your offer.
But as a buyer you know this, so will not be willing to pay £7,500 for what is very likely to be a lemon. The prices that will be offered in this scenario may well spiral down to £5,000 and only the worst used cars will be bought and sold. The bad lemons have effectively driven out the good plums, and buyers will start buying new cars instead of plums. Just as with horses, asymmetric and imperfect information in the used car market has the potential, therefore, to severely compromise its effective operation.
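The downward spiral can be captured in a few lines. The buyer valuations (£10,000 for a plum, £5,000 for a lemon) are from the text; the sellers’ reservation prices (£8,000 for a plum, £3,000 for a lemon) are invented for illustration.

```python
# A toy simulation of the lemons-market unravelling. Buyer values are from
# the text; seller reservation prices are hypothetical.

def market_price(buyer_plum=10_000, buyer_lemon=5_000,
                 seller_plum=8_000, seller_lemon=3_000):
    """Iterate the buyer's offer until it matches the value of what's on sale."""
    price = (buyer_plum + buyer_lemon) / 2  # start: equal chance of each type
    for _ in range(10):
        plums_offered = price >= seller_plum
        lemons_offered = price >= seller_lemon
        if plums_offered and lemons_offered:
            value = (buyer_plum + buyer_lemon) / 2
        elif lemons_offered:
            value = buyer_lemon   # plum owners have withdrawn from the market
        else:
            return None           # no trade at any price
        if value == price:
            return price
        price = value             # buyer revises the offer downwards
    return price

print(market_price())  # settles at 5000: only lemons trade
```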
We can assume that the demand for used cars depends most strongly on two variables – the price of the car and the average quality of used cars traded. Both the supply of used cars and the average quality will depend upon the price. In equilibrium, the supply equals the demand for the given average quality. As the price declines, normally the quality will also fall. And it’s quite possible that no cars will be traded at any price.
This same idea shows the problem of medical insurance. In a free market for medical insurance, people above a certain age, for example, will have great difficulty in buying medical insurance. So why doesn’t the price rise to match the risk? The answer is that as the price level rises the people who insure themselves will be those who are increasingly certain that they will need the insurance. In consequence, the average medical condition of insurance applicants deteriorates as the price level rises – such that no insurance sales [for these age groups] may take place at any price. This is strictly analogous to the car case, where the average quality of used cars supplied fell with a corresponding fall in the price level. The principle of ‘adverse selection’ is potentially present in all lines of insurance. Adverse selection can arise whenever those seeking insurance have freedom to buy or not to buy, to choose the insurance plan, and to continue or discontinue as a policy holder.
There are ways to counteract the effects of quality uncertainty, such as guarantees on consumer durables. Brand names perform a complementary function. Brand names not only indicate quality but also give the consumer a means of retaliation if the quality does not meet expectations. Chains – such as hotel chains or restaurant chains – are similar to brand names. Licensing practices also reduce quality uncertainty. And education and labour markets themselves have their own ‘brand names.’
So one of the big problems that confront markets is the fact that some of the participants often don’t know certain things that others in the market do know. This includes the market for most consumer durables, virtually all jobs markets, many financial markets, and so on. In these cases, one of the roles of economics is to ask what system of incentives is most likely to address this problem of imperfect and asymmetric information.

In economics, signalling is the idea that one party (termed the agent) credibly conveys some information about itself to another party (the principal). Signals should be distinguished from what have been called ‘indices’ (a term coined by Robert Jervis in his 1968 PhD thesis). Indices are attributes over which one has no control; think of these as generally unalterable attributes of something or someone. Signals are things that are visible and that are in part designed to communicate; in a sense, they are alterable attributes. So employees send a signal about their ability level to the employer by acquiring certain education credentials. The informational value of the credential comes from the fact that the employer assumes it is positively correlated with greater ability.
Education credentials can thus be used as a signal to the firm, indicating a certain level of ability that the individual may possess, thereby narrowing the informational gap. In a seminal 1973 article on signalling, Michael Spence proposed a model whose key assumption is that a person of high ability has a lower cost of obtaining a given level of education than a person of lower ability. The cost can be a tuition cost, or an intangible cost such as the stress, time and effort involved in obtaining the qualification. If both types act rationally, it is optimal for the higher-ability person to obtain the qualification (the observable signal) but not for the lower-ability person, so long as employers respond to the signal correctly. The result is that workers self-sort into the two groups. For this to work, it must be excessively costly, or impossible, to project a false image. The basic argument follows from the intuition that a behaviour which costs nothing can be equally well undertaken by anyone, and so provides no information; perceivers should therefore focus on behaviour which is costly to undertake. Signalling is an action by a party with good information, and is confined to situations of asymmetric information.
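The self-sorting logic can be sketched numerically. The wage and cost figures below are illustrative assumptions, not values from Spence's paper; the point is simply that the credential is worth acquiring for the high-ability type but not for the low-ability type.

```python
# Minimal sketch of a separating equilibrium in a Spence-style signalling
# model. All numbers are illustrative assumptions.
WAGE_WITH_CREDENTIAL = 100  # wage the employer pays to credentialed workers
WAGE_WITHOUT = 60           # wage paid to uncredentialed workers
COST_HIGH_ABILITY = 25      # cost of the credential for a high-ability worker
COST_LOW_ABILITY = 50       # cost of the credential for a low-ability worker

def acquires_credential(education_cost):
    """A rational worker signals only if the wage premium exceeds
    their personal cost of acquiring the credential."""
    return WAGE_WITH_CREDENTIAL - education_cost > WAGE_WITHOUT

# The high-ability type signals (100 - 25 = 75 > 60); the low-ability
# type does not (100 - 50 = 50 < 60), so the credential is informative.
print(acquires_credential(COST_HIGH_ABILITY))  # True
print(acquires_credential(COST_LOW_ABILITY))   # False
```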
The concept of screening should be distinguished from signalling, the latter implying that the informed agent moves first. When there is asymmetric information in the market, screening can involve incentives that encourage the better informed to self-select or self-reveal.
Joseph Stiglitz pioneered the theory of screening, examining how a less informed party can induce the other party to reveal their information. They can provide a menu of choices in such a way that the optimal choice of the other party depends on their private information. For example, a theme park might offer a menu of gold and silver tickets, where the more expensive gold ticket allows the customer to avoid the queue at rides. This will induce the customers to self-sort and reveal genuine information as to the value they place on their time and their desire to avoid the queues.
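A hypothetical version of the theme-park menu makes the self-selection mechanism concrete. The prices and valuations below are invented for illustration: each customer simply buys whichever ticket maximises their own net benefit, and in doing so reveals something about how much they value their time.

```python
# Sketch of screening via a menu of tickets (hypothetical prices).
PRICE_GOLD = 80    # gold ticket: skip the queues
PRICE_SILVER = 50  # silver ticket: queue as normal

def ticket_chosen(value_of_queue_avoidance):
    """Each customer picks the ticket that maximises net benefit;
    the choice reveals their private valuation of time saved."""
    premium = PRICE_GOLD - PRICE_SILVER  # extra cost of going gold
    return "gold" if value_of_queue_avoidance > premium else "silver"

print(ticket_chosen(45))  # "gold": queue-avoidance worth more than the premium
print(ticket_chosen(10))  # "silver": premium not worth paying
```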
So can markets be efficient? In the strictest sense, the answer is No. But there are ways in which they can be made more efficient than they would be in their natural state.
Further Reading and Links
Akerlof, G.A. (1970), ‘The Market for “Lemons”: Quality Uncertainty and the Market Mechanism’, Quarterly Journal of Economics, 84(3), 488–500.
Grossman, S.J. and Stiglitz, J.E. (1980), ‘On the Impossibility of Informationally Efficient Markets’, American Economic Review, 70(3), 393–408.
Jervis, R. (2002), ‘Signaling and Perception’, in Kristen Monroe (ed.), Political Psychology, Mahwah, NJ: Erlbaum.
Spence, M. (1973), ‘Job Market Signaling’, Quarterly Journal of Economics, 87(3), 355–374.
Stiglitz, J.E. (1975), ‘The Theory of “Screening,” Education, and the Distribution of Income’, American Economic Review, 65(3), 283–300.
Stiglitz, J.E. (2001), ‘Information and the Change in the Paradigm in Economics’, Nobel Prize Lecture, December 8.
Spence, A.M. (2001), ‘Signaling in Retrospect and the Informational Structure of Markets’, Nobel Prize Lecture, December 8.
Akerlof, G.A. (2001), ‘Behavioral Macroeconomics and Macroeconomic Behavior’, Nobel Prize Lecture, December 8.
The ‘over-round’
In a two-horse race, if both horses have an equal chance of winning (objectively), and both are offered at evens, then the expected profit of the market-maker (and of the bettor) is zero, ignoring operating, information and transactions costs.
In a two-horse race, if both are offered at evens (regardless of the respective probabilities of victory of the two horses), then it would require a stake of £x (split equally between the two horses) to be sure of being returned that £x (a net profit of zero) whichever horse wins. In this circumstance, the over-round of the bookmaker is said to be 100%, i.e. a notional profit margin of zero.
In practice, even if the notional profit margin is zero, the bookmaker is at a disadvantage if the horses are not equally matched, as a sophisticated bettor can take advantage by staking more than half on the horse with the greater chance of winning.
More generally, the over-round does not yield an accurate indicator of the bookmaker’s profit margin if bettors do not stake across all options in such a way as to ensure that their total stake of £x yields a certain return of £x, factored by the over-round.
For example, if the over-round is 120%, the notional margin to the bookmaker is 20%; put simply, bettors would have to stake £120 to ensure a return of £100. Say, for instance, that both horses in a 2-horse race are being offered at 4 to 6. Then the bettor would need to stake £60 on each (£120 in total) to be guaranteed a return of £100 (£40 plus the £60 stake returned) whichever horse won. In such circumstances, the bookmaker is guaranteed a 20% profit, regardless of the outcome.
If one horse is offered at 4 to 6 and the other at 6 to 4, the bettor can guarantee a zero profit (and loss) by staking £60 at 4 to 6 and £40 at 6 to 4. That way, a £100 return is guaranteed for a total stake of £100, regardless of the outcome. Again, if the horse offered at 4 to 6 is actually a 4 to 7 chance, and bettors stake exclusively on this horse, their expected return is positive (although there is now a risk of losing the entire stake), and the expected return of the bookmaker is negative (though the actual return may be positive).
To summarize, the notional margin, as implied in the over-round, formally equates to the actual margin only if bettors stake in proportion to the implied probabilities, i.e. proportionately more on the outcome offered at shorter odds.
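The over-rounds in the examples above can be checked directly: the over-round of a book is simply the sum of the implied probabilities, i.e. the reciprocals of the decimal odds. A short sketch using the two-horse examples:

```python
# Over-round of a book: the sum of implied probabilities across outcomes.
def over_round(decimal_odds):
    return sum(1.0 / o for o in decimal_odds)

EVENS = 2.0        # evens in decimal odds
FOUR_TO_SIX = 5/3  # 4 to 6 in decimal odds
SIX_TO_FOUR = 2.5  # 6 to 4 in decimal odds

print(round(over_round([EVENS, EVENS]), 4))              # 1.0: no margin
print(round(over_round([FOUR_TO_SIX, FOUR_TO_SIX]), 4))  # 1.2: a 20% margin
print(round(over_round([FOUR_TO_SIX, SIX_TO_FOUR]), 4))  # 1.0: a fair book
```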
Creating an over-round
Take as an example the following odds offered about a binary proposition to players, where the odds-maker believes that the objective probability of X winning is 1 in 5 (0.2) and of Y winning is 4 in 5 (0.8).
Assuming an over-round of 100% (i.e. margin of zero), the odds-setter (taken here to be a bookmaker) would set the following odds:
Odds about X = 5.0 (4 to 1): Odds about Y = 1.25 (1 to 4).
Assume now that the odds-maker wishes to create an over-round of 108%.
To do so, the odds offered about each outcome should be cut by 8 per cent. 8% of 5.0 is 0.4, and deducting 0.4 from 5.0 gives 4.6; 8% of 1.25 is 0.1, and deducting 0.1 from 1.25 gives 1.15.
So in the particular example, the odds offered would be as follows:
Odds about X = 4.6; Odds about Y = 1.15.
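The same adjustment can be expressed as a one-line calculation. One caveat worth noting: cutting decimal odds by 8% makes the implied probabilities sum to slightly more than 108% (1/4.6 + 1/1.15 is roughly 1.087), so the 8% figure here is best read as the notional margin on stakes rather than the exact sum of implied probabilities.

```python
# Cutting fair decimal odds by a notional margin, as in the example:
# fair odds of 5.0 and 1.25 reduced by 8% to 4.6 and 1.15.
def cut_odds(fair_odds, margin=0.08):
    return [o * (1 - margin) for o in fair_odds]

offered = cut_odds([5.0, 1.25])
print([round(o, 2) for o in offered])  # [4.6, 1.15]
```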
Assuming an equal amount (say £1,000) is bet on both sides of the proposition (i.e. a total of £2,000, consisting of perhaps 200 people betting £10 each), the profit (or loss) to the bookmaker will vary depending on the outcome.
If horse X wins, the bookmaker will pay out:
4.6 x £1,000 = £4,600
Total amount staked (on X and Y) = £2,000.
Net profit to bookmaker if horse X wins = £2,000 – £4,600 = – £2,600
So if horse X wins, bookmaker loses £2,600.
If horse Y wins, the bookmaker will pay out:
1.15 x £1,000 = £1,150
Total amount staked (on X and Y) = £2,000
Net profit to bookmaker if horse Y wins = £2,000 – £1,150 = £850
Expected value of profit = expected value of profit from X + expected value of profit from Y = (-£2,600) x 0.2 + (£850) x 0.8 = -£520 + £680 = £160.
This is assuming that the implied probabilities in the odds are the correct probabilities, i.e. odds of 4/1 = probability of 1/5 (0.2); odds of 1/4 = probability of 4/5 (0.8).
Note also that £160 = 8% of total stake on X and Y (£2,000).
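The expected-profit arithmetic above can be reproduced in a few lines. The bookmaker's profit on each outcome is the total staked minus the payout on that outcome, weighted by the assumed true probabilities.

```python
# Expected bookmaker profit: equal £1,000 stakes at odds 4.6 and 1.15,
# assumed true probabilities 0.2 and 0.8, as in the worked example.
def expected_profit(odds, stakes, probs):
    total_staked = sum(stakes)
    # If outcome i wins, profit = total staked minus the payout on i.
    return sum(p * (total_staked - o * s)
               for o, s, p in zip(odds, stakes, probs))

ev = expected_profit([4.6, 1.15], [1000, 1000], [0.2, 0.8])
print(round(ev, 2))  # 160.0, i.e. 8% of the £2,000 staked
```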
This all assumes, as observed, that the objective probabilities are correctly observed and that the amounts staked on both sides of the proposition are equal.
Even if we assume that the objective probabilities are correctly observed then there is still substantial volatility of outcome (i.e. risk) for the bookmaker. If the objective probability is incorrectly observed, however, the outcome for the bookmaker may be worse, i.e. a systematic loss.
For example, assume the probability of horse X winning is actually 25%; assume probability of horse Y winning is 75%.
At the given odds levels, and assuming equal stakes across both propositions, we derive the following.
As before, if horse X wins the bookmaker pays out 4.6 x £1,000 = £4,600 against total stakes of £2,000, a loss of £2,600; if horse Y wins, the bookmaker pays out 1.15 x £1,000 = £1,150, a profit of £850.
Expected value of profit = expected value of profit from X + expected value of profit from Y = (-£2,600) x 0.25 + (£850) x 0.75 = -£650 + £637.50 = -£12.50, i.e. a loss of £12.50.
Insofar as the objective probability of horse X winning is greater than 20%, the expected profit to the bookmaker will decline. At 24.65%, the profit (rounded to the nearest pound) can be shown to be equal to zero, and above that to turn negative.
Assume objective probability of horse X winning = 0.2465; objective probability of horse Y winning = 0.7535.
Then, expected value of profit = expected value of profit from X + expected value of profit from Y = (-£2,600) x 0.2465 + (£850) x 0.7535 = -£640 + £640 = 0
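The break-even probability can be derived directly: the bookmaker's expected profit is zero at the point where the probability-weighted loss on X exactly offsets the probability-weighted gain on Y.

```python
# Break-even probability for horse X in the worked example:
# expected profit = -2600*p + 850*(1 - p) = 0  =>  p = 850 / 3450.
LOSS_IF_X_WINS = 2600  # bookmaker's loss if X wins
GAIN_IF_Y_WINS = 850   # bookmaker's gain if Y wins

break_even_p = GAIN_IF_Y_WINS / (GAIN_IF_Y_WINS + LOSS_IF_X_WINS)
print(round(break_even_p, 4))  # 0.2464, consistent with the ~24.65% in the text
```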
To the extent that the objective probabilities are inaccurately estimated, therefore, there is significant potential, from the bookmaker’s point of view, for a negative expected (as well as actual) profit.
Using the probabilities from the original example, the staking pattern from the bettors’ point of view that leads to the same loss (8% in this case) whichever horse wins is to bet more on the favourite and less on the longshot, in this case £1,600 and £400 respectively.
This leads to the following outcomes:
Return to a £400 bet on horse X (if it wins) at 4.6 = £1,840
Return to a £1,600 bet on horse Y (if it wins) at 1.15 = £1,840
Net profit from staking these sums on the two horses, from the bettors’ point of view = £1,840 – £2,000 = – £160, i.e. a net loss of 8% of the total stake.
Insofar as bettors can be induced to bet in these proportions, the operator is guaranteed a profit regardless of the outcome. If the average bet size is the same for bets made on either side, then we need four times as many bettors on the favourite as the longshot to achieve this. Otherwise, the same outcome can be achieved if those who are backing the favourite bet four times as much in total as those backing the longshot.
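The balanced staking pattern generalises: to produce the same payout whichever horse wins, total stakes are split in proportion to the implied probabilities, i.e. in inverse proportion to the decimal odds. A sketch using the worked example's odds of 4.6 and 1.15 and £2,000 in total stakes:

```python
# Stakes that produce the same payout whichever outcome wins, guaranteeing
# the bookmaker the notional margin (8% here) regardless of the result.
def balanced_stakes(decimal_odds, total_staked):
    implied = [1.0 / o for o in decimal_odds]  # implied probabilities
    scale = total_staked / sum(implied)
    return [p * scale for p in implied]

odds = [4.6, 1.15]
stakes = balanced_stakes(odds, 2000)
print([round(s) for s in stakes])                    # [400, 1600]
# The payout is identical for either outcome, so the book is balanced:
print([round(o * s) for o, s in zip(odds, stakes)])  # [1840, 1840]
```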
Another way to manage risk in the face of unbalanced staking patterns is to move the odds so as to limit the maximum loss.
In order to reduce the maximum downside (i.e. when X wins), the bookmaker may move the odds in such a way as to attract money on to one horse and away from the other. To do this, the odds about one horse may be lengthened and those about the other shortened, so that neither outcome leaves the book exposed to a loss. While such a strategy may reduce the exposure of the operator, the price may be paid in reduced profits.
Ultimately, line management from the operator’s point of view is about balancing risk and return, while maintaining an edge in favour of the ‘house’. From the bettor’s point of view, it is about exploiting opportunities which might arise where one (or more) of the odds making up that over-round are mispriced in the bettor’s favour, a possibility which can arise even when the over-round favours the ‘house.’