
Could the methodology of a medieval scholar help us find a missing plane?

When asked to list my all-time heroes, the name of William of Ockham (or Occam) is never far from my lips. Born in the late 13th century, in the Surrey village of Ockham, this Franciscan philosopher, theologian and political writer, is generally considered to be one of the major figures in medieval scholarship.

In this regard, he ranks alongside the likes of his fellow theologians Thomas Aquinas and John Duns Scotus in the pantheon of great pre-Renaissance thinkers. Despite the title of Venerabilis Inceptor (‘Worthy Beginner’) that he earned at Oxford University, it is by his alternative title of Doctor Invincibilis (‘Unconquerable Doctor’) that he comes down to us. Of all his writings, each worthy of separate study, it is for his principle of parsimony in explanation and theory-building that he is best known today. It is a principle that Fox Mulder refers to in an episode of The X-Files and that Jodie Foster defers to in ‘Contact’. Indeed, in William Peter Blatty’s novel Legion (on which ‘The Exorcist III’ is based), the lead character complains that he was not put on earth “to sell William of Occam door to door.” He needn’t have bothered. William of Ockham sells himself well enough without help, through the principle that is known as ‘Occam’s Razor.’

The Razor is perhaps most clearly defined in Encyclopedia Britannica’s Student edition, where it is taken as an admonishment to devise no more explanations than are necessary for any given situation. Put another way, it advises that one should opt for explanations framed in terms of the fewest possible causes, factors or variables. The adults’ version of Encyclopedia Britannica puts it more elegantly, but perhaps less clearly, in these terms – ‘Pluralitas non est ponenda sine necessitate’ (‘Plurality should not be posited without necessity’). As such, the principle can be interpreted as giving precedence to simplicity: of two competing theories that account for the evidence equally well, the simpler is to be preferred. There are some higher truths, which may be known to us by experience or revelation, and which Ockham regards as necessary rather than contingent entities, to which we are not advised to apply the razor. This part of Ockham’s trenchant analysis is often forgotten, but it need not concern us when considering the theme of today’s article.

So how do modern-day analysts stand on the shoulders of this medieval giant? The best explanation is perhaps by way of example, and for this we need to travel to the Hong Kong racetrack and to the professional gamblers who devise sophisticated forecasting models of the outcomes of the races run at the Sha Tin and Happy Valley tracks. The basic methodology is to identify each individual factor that could possibly predict the outcome. And what do you do then? How do you decide what to include and what not? For the answer I asked a man who has conservatively made tens of millions of dollars at the track from this very approach. As we enjoyed the view from his Sydney penthouse, he summed it up in a sentence. “I apply Occam’s Razor”, he said, “it really is as simple as that!”
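A minimal sketch of how the Razor operates in practice when building a forecasting model is a penalized model comparison, where extra variables must justify their presence by improving fit. The model names, log-likelihoods and variable counts below are entirely invented for illustration; the gambler's actual models are, of course, his own secret.

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: rewards fit, penalises extra parameters.
    Lower is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical candidate models for a race-outcome forecaster:
# (log-likelihood on historical races, number of predictor variables)
candidates = {
    "form + weight": (-1210.0, 2),
    "form + weight + jockey": (-1204.0, 3),
    "form + weight + jockey + 12 minor factors": (-1202.5, 15),
}

# Occam's Razor in action: prefer the lowest AIC, i.e. the model whose
# extra variables "earn their keep" in improved fit.
best = min(candidates, key=lambda name: aic(*candidates[name]))
print(best)  # the three-factor model wins; the 15-factor one is over-fitted
```

Here the twelve minor factors buy only a trivial improvement in fit, so the penalty for carrying them outweighs their contribution, and the simpler model is preferred.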

Now say that you have a missing aircraft, and the number of explanations seems to grow by the minute. What should we do? Apply Occam’s Razor, of course. And see what we get.

Can prediction markets help find a missing plane?

I was once told a story about the value of crowd wisdom in turning up buried treasure. The story was that by asking a host of people, each with a little knowledge of ships, sailing and the sea, where a vessel is likely to have sunk in years gone by, it is possible with astonishing accuracy to pinpoint the wreck and the bounty within. Individually, each of those contributing a guess as to the location is limited to their special knowledge, whether of winds or tides or surf or sailors, but the idea is that together their combined wisdom (arrived at by averaging their guesses) could pinpoint the treasure more accurately than a range of other predictive tools. At least that’s the way it was told to me by an economist, who was in turn told the story by a physicist friend.

To any advocate of the power of prediction markets, this certainly sounds plausible, so I decided to investigate further. Soon I was getting acquainted with the fascinating tale of the submarine USS Scorpion, as related by Mark Rubinstein, Professor of Applied Investment Analysis at the University of California at Berkeley. In a paper titled ‘Rational Markets? Yes or No? The Affirmative Case’, he tells of a story related in a book called ‘Blind Man’s Bluff: The Untold Story of American Submarine Espionage’ by Sherry Sontag and Christopher Drew.

The book tells how, on the afternoon of May 27, 1968, the submarine USS Scorpion was declared missing with all 99 men aboard. It was known that she must lie somewhere below the surface of the Atlantic Ocean within a circle 20 miles wide. This information was of some help, of course, but not enough to determine, even five months later, where she could actually be found. The Navy had all but given up hope of finding the submarine when John Craven, their top deep-water scientist, came up with a plan which pre-dated the explosion of interest in prediction markets by decades.

He simply turned to a group of submarine and salvage experts and asked them to bet on the probabilities of what could have happened. Taking an average of their responses, he was able to identify the location of the missing vessel to within a furlong (220 yards) of its actual location. The sub was found. Sontag and Drew also relate the story of how the Navy located a live hydrogen bomb lost by the Air Force.

Now this story has got me thinking. Let’s just say we are looking for a missing aircraft. Could prediction markets possibly help? It’s a question I’m currently asking myself. Maybe others should be as well!
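The core of the aggregation idea can be sketched in a few lines. Craven's actual procedure was considerably more elaborate (a Bayesian weighting of bet-upon scenarios), so this is only a simplified illustration of the "average the guesses" principle, with invented coordinates.

```python
# Each expert guesses a location (nautical miles east/north of a
# reference point); the crowd estimate is simply the average of the
# guesses. All numbers here are invented for illustration.
guesses = [(4.0, 2.5), (5.5, 1.0), (3.0, 3.5), (6.0, 2.0), (4.5, 3.0)]

crowd_east = sum(e for e, _ in guesses) / len(guesses)
crowd_north = sum(n for _, n in guesses) / len(guesses)

print((crowd_east, crowd_north))  # → (4.6, 2.4)
```

The striking property, as in the treasure story, is that the averaged estimate can land closer to the truth than any individual expert's guess, because the individual errors tend to cancel.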

How to Beat the Dealer at Blackjack!

It is said that on returning from a day at the races, a certain Lord Falmouth was asked by a friend how he had fared.  “I’m quits on the day”, came the triumphant reply.  “You mean by that,” asked the friend, “that you are glad when you are quits?”   When the said Lord replied that indeed he was, his companion suggested that there was a far easier way of breaking even, and without the trouble or annoyance. “By not betting at all!”  The noble lord said that he had never looked at it like that and, according to legend, gave up betting from that very moment.

While this may well serve as a very instructive tale for many, a certain Edward O. Thorp, writing in 1962, took a rather different view. He had devised a strategy, based on probability theory, for consistently beating the house at Blackjack (or ‘21’). In his book, ‘Beat the Dealer: A Winning Strategy for the Game of Twenty-One’, Thorp presents the system. On the inside cover of the dust jacket he claims that “the player can gain and keep a decided advantage over the house by relying on the strategy”.

The basic rules of blackjack are simple. To win a round, the player has to draw cards to a total that beats the dealer’s without exceeding 21 (the player also wins if the dealer busts by going over 21).

Because players have choices to make, most obviously as to whether to take another card or not, there is an optimal strategy for playing the game. The precise strategy depends on the house rules, but generally speaking it pays, for example, to hit (take another card) when the total of your cards is 14 and the dealer’s face-up card is 7 or higher. If the dealer’s face-up card is a 6 or lower, on the other hand, you should stand (decline another card). This is known as ‘basic strategy.’

While basic strategy will reduce the house edge, it is generally not enough to turn the edge in the player’s favour. That requires exploitation of the additional factor inherent in the tradition that the used cards are put to one side and not shuffled back into the deck.

This means that by counting which cards have been removed from the deck, we can re-evaluate the probabilities of particular cards or card sizes being dealt moving forward. For example, a disproportionate number of high cards in the deck is good for the player, not least because in those situations where the rules dictate that the house is obliged to take a card, a plethora of remaining high cards increases the dealer’s probability of going bust (exceeding a total of 21).

Thorp’s genius was in devising a method of reducing this strategy to a few simple rules which could be understood, memorized and made operational by the average player in real time. As the book blurb puts it, “The presentation of the system lends itself readily to the rapid play normally encountered in the casinos.”
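The flavour of such a counting rule can be shown in a short sketch. Note that this is the later Hi-Lo count, a simplification in the same spirit as Thorp's work rather than his original Ten-Count system, and the dealt cards below are invented.

```python
def hilo_value(rank):
    """Hi-Lo tag for a card rank: low cards leaving the deck help the
    player (+1), high cards leaving hurt (-1), middle cards are neutral."""
    if rank in ("2", "3", "4", "5", "6"):
        return +1
    if rank in ("10", "J", "Q", "K", "A"):
        return -1
    return 0  # 7, 8, 9

def running_count(seen_cards):
    """Sum the tags of all cards dealt so far."""
    return sum(hilo_value(rank) for rank in seen_cards)

# Invented sequence of dealt cards. A positive count means the remaining
# deck is disproportionately rich in high cards - good for the player.
seen = ["2", "5", "K", "7", "3", "A", "4"]
print(running_count(seen))  # → 2
```

In practice the running count is divided by the estimated number of decks remaining (the "true count") before it is used to size bets, but the mental arithmetic stays exactly this simple, which is what made the approach operational at the table.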

Since the publication of the book, the strategy has been amended and improved, but Ed Thorp’s original insights stand. The problem simply changed to one familiar to many successful horse players, i.e. how to get your money on before being closed down. That’s another story, for another day. As is the time I met up with the great man himself, and the time, years later, when I shared a pint with some of the legendary MIT blackjack team.

How an Ancient Greek Philosopher Bet on the Future – and Won!

It is for his idea that water is the essence of all matter that Thales of Miletus, the 6th century BC Greek philosopher, is best known. It is for his option trading, however, that he should be at least equally celebrated, as it is arguably the first use of a financial derivative in recorded history.

Aristotle (in part XI of Book 1 of his ‘Politics’) relates the tale.

According to Aristotle’s account, Thales put a deposit during the winter on all the olive-presses in Chios and Miletus, which would allow him exclusive use of the presses after the harvest. Because the harvest was in the future, and nobody could be sure whether the harvest would be plentiful or not, he was able to secure the contracts for a very low price. In fact, we are informed that there was not one bid against him. From the olive press owners’ point of view, they were protecting themselves against a poor harvest by earning at least some money up front regardless of how things turned out.

Thales’ bet came off, big time. The harvest was excellent and there was heavy demand for the presses. Thales held the monopoly and was able to rent them out at a huge profit. Either he was an expert forecaster or he had calculated that a bad harvest would not lose him much in terms of lost deposits, whereas the upside of a good harvest was enormous. “Thus he showed the world that philosophers can easily be rich if they like, but that their ambition is of another sort”, wrote Aristotle.

In effect, Thales had exercised the first known options contract, more than 2,500 years ago. Today we would term it as buying a ‘call option’, i.e. an option to buy something at some designated price at some future date for a fixed fee (or ‘premium’). Put another way, it is an agreement that gives the purchaser the right (but not the obligation) to buy a commodity, stock, bond or other instrument at a specified price (the ‘strike price’) at the end of or within a specified time period. When the price exceeds the strike price, the option is said to be ‘in the money’.
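The payoff profile of a long call can be captured in one line: the holder's loss is capped at the premium, while the upside is open-ended. The prices below are hypothetical, chosen only to echo the shape of Thales' deal.

```python
def call_payoff(spot, strike, premium):
    """Profit at expiry for the buyer of a call option: exercise only
    if the market price exceeds the strike, and the premium is sunk
    either way."""
    return max(spot - strike, 0.0) - premium

# Hypothetical numbers in the spirit of Thales' deal: a small premium
# (his deposit) buys the right to use the presses at the strike price.
print(call_payoff(spot=150.0, strike=100.0, premium=5.0))  # → 45.0 (good harvest)
print(call_payoff(spot=80.0, strike=100.0, premium=5.0))   # → -5.0 (only the deposit lost)
```

This asymmetry is precisely Thales' calculation: a bad harvest would cost him little in lost deposits, whereas the upside of a good harvest was enormous.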

Properly used, options can be an excellent vehicle for managing risk. In this example from 6th century BC Greece, the owners of the olive presses were ensuring that they didn’t lose their entire earnings in the event of a bad harvest. From Thales’ point of view, he was confident in his forecast of the harvest, but was still taking some risk that he’d lose all the deposits he had paid. Today we’d say that he was risking not being able to exercise his call options.

I wonder how Thales and the olive-press owners might have used modern betting markets if they’d been available 2,500 years ago. I guess the owners might ‘sell’ a market about the size of the harvest. In this way, they would earn a greater return the worse the harvest. And this is what risk management is all about. Thales, on the other hand, would presumably have used his supreme confidence in his forecasting powers to ‘buy’ the market as well as the options and make himself an even richer man than he became.

No need to worry. Thales’ ambition, as Aristotle so aptly put it, was of another sort. Still, the money came in handy!

Can ‘Quarbs’ Help Forecast the Outcome of an Election?

Arbitrage is the practice of making a risk-free profit by taking advantage of a price differential between two or more markets. For example, if it is possible to BUY the number of corners in a soccer match at 10 with one odds-maker (so that a profit is obtained for every corner in excess of ten) and to SELL the number of corners at 11 with another odds-maker, a profit will accrue whatever the actual number of corner kicks taken. Such examples of free money are in short supply, however, and a more realistic trading approach may be to employ what I first proposed and termed in 2000 as a ‘quasi-arbitrage’ or ‘Quarb’ strategy. The assumption underpinning this Quarb strategy is straightforward: the average market opinion (where there is a range of opinions) is a better indicator of the truth than the outlier (or ‘maverick’) opinion. Take that number of corner kicks as an example. If there are four market-makers, say, three of which offer clients the chance to SELL at 10 and BUY at 11, and a fourth which allows you to SELL at 11 and BUY at 12, the average market price is the sum of the mid-points (10.5+10.5+10.5+11.5) divided by four, i.e. 10.75. If we take this to be the best estimate of the actual expected value of the outcome, a SELL at the ‘maverick’ price of 11 is a value bet. In other words, the ‘Quarb’ strategy advocates aggregating and averaging the range of forecasts implied in the odds on offer from professional odds-makers, to identify the best possible forecast of the actual outcome.
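The corners calculation above can be sketched directly, using the same four quoted spreads:

```python
# Quarb sketch: four market-makers quote (sell, buy) spreads on the
# number of corners; the consensus estimate is the average mid-point.
quotes = [(10, 11), (10, 11), (10, 11), (11, 12)]

midpoints = [(sell + buy) / 2 for sell, buy in quotes]
consensus = sum(midpoints) / len(midpoints)
print(consensus)  # → 10.75

# The maverick maker lets you SELL above the consensus estimate,
# so selling there carries positive expected value.
maverick_sell = 11
print(maverick_sell - consensus)  # → 0.25 (expected edge, in corners)
```

Unlike a true arbitrage, the Quarb bet can still lose on any given match; the claim is only that, if the consensus really is the better estimate, such bets are profitable on average over many matches.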

PollyVote (www.pollyvote.com) is an election forecasting site which follows this principle of combining forecasts. By aggregating the vote-share snapshot contained in traditional opinion polls with the judgment of a panel of American politics experts, a prediction market and a range of quantitative forecasting models, Polly provides a daily updated forecast of the share of the two-party vote that the Democratic and Republican Presidential candidates will obtain on polling day. For their polling input they use a variation of the RealClearPolitics average and for their prediction market they use the Iowa Electronic Markets, a research and teaching-oriented marketplace which allows small bets on a range of contracts including the Presidential vote-share. The panel is made up of respected political scientists and pundits, and the quantitative models (based on an analysis of the likely impact of variables ranging from unemployment and inflation to incumbency and wars) draw on an array of different methodologies. Each of these four forecast components is assigned an equal 25% weight, and in 2004 this combination generated an almost spot-on forecast of the actual respective vote shares.
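The equal-weight combination itself is disarmingly simple. The component vote-share numbers below are invented for illustration; only the equal 25% weighting reflects PollyVote's stated method.

```python
# Equal-weight forecast combination in the PollyVote spirit.
# Component forecasts of a candidate's two-party vote share (invented).
forecasts = {
    "opinion_polls": 51.2,
    "prediction_market": 51.8,
    "expert_panel": 50.5,
    "quantitative_models": 52.1,
}

# With equal weights, the combined forecast is just the plain average.
combined = sum(forecasts.values()) / len(forecasts)
print(round(combined, 2))  # → 51.4
```

The appeal of equal weighting is robustness: unless you have strong evidence about which component is most accurate, differential weights tend to chase noise, while the simple average lets the components' errors offset one another.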

Is the RealClearPolitics average the best baseline to use for aggregating the polls? Is the Iowa market the best choice of prediction market? Is the chosen panel and the way its views are aggregated the best way to assimilate the combined knowledge of the political intelligentsia? Is the range of forecasting models appropriate? Does the averaging of the four methodologies perform better than any one alternative forecasting methodology?

This debate will continue, and has been informed by how well Polly has performed over the past three elections in forecasting the vote shares of the candidates. The answer is very well indeed. This is a parrot which looks very much alive!

Election 2012: Was the Crowd Cleverer than the Expert?

@Leightonvw

Now that the dust has slowly started to settle on the 2012 U.S. election campaign season, we are left with a picture of what happened, who won and who lost. Aside from the politicians who came in on the wrong side of the vote, the biggest losers of this election cycle are the prognosticators of the right, who were not only almost universally wrong about the outcome, but in many cases wrong by remarkable margins. An independent ranking of the best and worst forecasters, based on the number of key states predicted correctly, awards the ‘broken forecast’ prize to the man who called Pennsylvania, Minnesota, Wisconsin, Iowa, New Hampshire, Ohio, Colorado, Virginia and Florida for Mr. Romney, wrong in each and every case. Step forward Fox News pundit, Dick Morris. Close runners-up for most woeful political prescience include another Fox News pundit and writer for the Washington Examiner, Michael Barone, as well as Steve Forbes, of Forbes Magazine, and George Will of the Washington Post.

Among the pollsters, my ‘getting it hopelessly wrong’ award goes jointly to Gallup and Rasmussen Reports, for a faintly ludicrous set of polls, skewed (presumably by accident) in favour of the Republicans, in the days and weeks running up to election day. This mirrors the huge skew to the Republicans recorded by Rasmussen in 2010, and further back, dating to his calling of the election for Mr. Bush by nine percent in 2000 (Bush lost the popular vote).

The ‘hopelessly wrong’ award for state voting goes to Susquehanna Polling of Pennsylvania, who got their polls woefully skewed (again presumably by accident) to the benefit of Mr. Romney, while the broad consensus of the out-of-state polls simultaneously got the result almost spot on. Another loser was RealClearPolitics, who managed to eliminate from consideration some of the most accurate pollsters while sticking with many of the worst.

On the plus side, it was a very good night for five sages who called the state tally perfectly, these being Nate Silver of FiveThirtyEight at the New York Times, Markos Moulitsas and Daily Kos Elections, Simon Jackman of The Huffington Post, Josh Putnam of Davidson College and Drew Linzer of Emory University. More broadly speaking, it was a great night for serious statistical analysis, for the ‘quants’ as they are sometimes labelled (including Sam Wang, of the Princeton Election Consortium).

The unsung hero of this election, however, is not a particular pundit, or statistician, or polling organization. It is not an expert at all. The unsung hero is the anonymous crowd. The reason is that while an expert may know more than anyone else in the room, he is unlikely to know more than the room as a whole, to be wiser or cleverer than the crowd. It is an idea picked up and developed in a fascinating recent paper by David Rothschild and Justin Wolfers, of the Wharton School, University of Pennsylvania, who argue that better forecasts can be made by asking people who they think will win than by asking their personal voting intention.

This is the idea that is driving the growing belief in the power of markets like the betting exchange, Betfair, to unveil the future. Despite the swings and roundabouts in the polls and among the pundits, these markets never wavered in their belief that the president would be re-elected. It all comes down to the idea that those who know the most will tend to bet the most, and accurate forecasts can thus be obtained by following the money. This is the basis of the science of ‘prediction markets’, which are essentially speculative markets created or employed for the purpose of making predictions.

In the weeks to come, data will be crunched to see exactly how well these markets did perform in Election 2012. It will be a while until we know the details, but already I am confident enough to make at least one prediction — they will have performed very well indeed.

Follow Leighton Vaughan Williams on Twitter: www.twitter.com/leightonvw

The Triumph of Election Science!

As the first election returns started to filter through on Tuesday evening, it soon became apparent that President Barack Obama was very likely to be re-elected to a second term in office. A comparison of early precinct returns with those registered in 2008 soon confirmed the evidence of the exit polls. Mitt Romney simply had far too few votes to seriously challenge the President where it mattered.

Up to this point, the balance of opinion among so-called political experts was pretty evenly divided between those thinking, like Fox News pundits Dick Morris and Michael Barone, that Romney would win by a landslide, and statistical analysts like Nate Silver, of FiveThirtyEight, Sam Wang of the Princeton Election Consortium and Drew Linzer of Votamatic, who believed the opposite.

Meanwhile, a glance at the betting/prediction markets showed what they had always shown — that the president would be re-elected by a handsome margin, and that nothing that Mr. Romney or his team could do was likely to prevent that happening. As the first voting stations opened, the bookmakers, the spread betting operators and the betting exchange, Betfair, all converged at a probability of re-election of about 80 percent. Intrade had it a bit lower, but still made President Obama the very clear favorite.

So what about the opinion polls? There was little consensus, and some of them, like so-called scientific pollsters Rasmussen and Gallup, had Mitt Romney up in their final published poll, as they had in most of their polls in the days and weeks before. More strangely, some commentators were apparently still taking these two operators seriously.

The national polling average on the RealClearPolitics website, which included Rasmussen and Gallup, also put Romney up for much of the lead-up to the election, although it showed a late shift to put Obama up by 0.7 percent on election day. RealClearPolitics publish a selection of the latest polls and take an average of these to come up with this overall figure.

But is this what the opinion polls were actually showing? The answer depends on whether you look at all the published polls, as broadly available on Pollster, or whether you exclude the polls you don’t favor, a methodology favored by RealClearPolitics. Not only do they not include a number of well respected polls in their overall average, they do not recognize them at all! Post-election analysis has in fact shown that the pollsters they exclude performed significantly better than those (like Rasmussen and Gallup) they chose for inclusion.

Taking an average of all the national polls, good and bad, had it in the President’s favor pretty much all along.

More important than the national picture are the state findings. An analysis of all the state polls showed that President Obama was rather more comfortably ahead, including in the key swing states of Ohio, Virginia, Wisconsin, Colorado, Iowa, Nevada and New Hampshire.

Combining the national and state polls, and awarding them a similar weighting, revealed Mr. Obama to be polling about 2.5 percent ahead of Mr. Romney — a bit more in the state polls and a bit less in the national polls.

Depending on the statistical analysis used to convert polling advantage into a probability of winning, the polling evidence on the day gave the President somewhere above a 90 percent chance of winning.
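One minimal way to make that conversion is to treat the final vote margin as normally distributed around the polling lead, with a standard deviation representing forecast error. The 1.5-point error figure below is an illustrative assumption, not a figure from any of the models discussed.

```python
import math

def win_probability(lead, sigma):
    """P(actual margin > 0), assuming the final margin is normally
    distributed around the polling lead with standard deviation sigma.
    Uses the standard-normal CDF via math.erf."""
    return 0.5 * (1 + math.erf(lead / (sigma * math.sqrt(2))))

# A 2.5-point polling lead with an assumed 1.5-point forecast error
# (both figures illustrative) already implies roughly a 95% chance
# of victory, consistent with the "somewhere above 90%" reading.
print(round(win_probability(2.5, 1.5), 3))
```

The sensitivity is worth noting: the win probability depends heavily on the assumed forecast error, which is exactly why different statistical models, all looking at the same polls, could publish somewhat different probabilities.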

In this battle between the Fox News political ‘experts’ and the sophisticated polling analysts, the betting markets sided heavily with the analysts.  To be fair, the raw betting data indicated a slightly more conservative evaluation of Mr. Obama’s chances than did the models in the days leading up to the election. But this is for good reason. Why so?

First, the markets allow for the uncertainties a poll-based model can’t allow for, such as some late surprise or revelation. The point here is that potential volatility is the friend of the underdog, and while polls are snapshots of opinion, the betting markets are all about forecasting the eventual outcome.

Second, there is a well-established favorite-longshot bias which occurs in most betting markets, whereby the markets tend to slightly under-estimate the true likelihood of the favorite winning, and vice-versa. Prediction market analysts compensate for this to provide an even better forecast.
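A sketch of such a compensation might look as follows. The raw implied probabilities and the adjustment exponent are invented for illustration; real analysts fit the size of the correction to historical data rather than picking it by hand.

```python
# Favourite-longshot correction sketch: normalise raw market-implied
# probabilities to strip out the bookmaker's margin (overround), then
# shade probability toward the favourite.
raw = {"Obama": 0.78, "Romney": 0.26}  # implied probs, sum > 1 (margin)

total = sum(raw.values())
fair = {k: v / total for k, v in raw.items()}  # margin removed

gamma = 1.1  # exponent > 1 pushes probability toward the favourite
powered = {k: v ** gamma for k, v in fair.items()}
z = sum(powered.values())
adjusted = {k: v / z for k, v in powered.items()}

print(round(adjusted["Obama"], 3))  # a touch above the fair 0.75
```

The direction of the adjustment is the key point: because markets slightly under-price favourites, the corrected probability for the favourite ends up a little higher than the margin-free market price alone would suggest.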

The markets have been successfully predicting elections since at least 1868, and since the advent of low-margin betting exchanges like Betfair have never been so sophisticated or so accurate. These exchanges differ from traditional betting markets by eliminating the odds-setting bookmaker, instead providing the technology to match up the best offers to back and lay an outcome on offer from all the clients of the exchange. They offer what is essentially a conversation in which the loudest voices, mediated by the focus of money, are generally the best informed.

There were big losers on election night: Rasmussen and Gallup, along with RealClearPolitics and Fox News, among them.

Mr. Obama was, of course, a big winner on election night. But there were others. Statistical modelling was one such winner, and the other big winners were the prediction markets, which had it right all along.

Follow the money! It really does work every time.
