The ‘God Particle’

Why are objects the size that they are? The answer is quite simple. It’s the size of the molecules and atoms that make them up.

But why is an atom the size that it is? That’s determined by the size of the orbits of the electrons around the atom, which in turn is determined by the mass of the electron. Smaller mass means smaller orbits, means smaller everything. But why is the electron the mass that it is? Indeed, why does it have any mass at all? In fact, theory dictates that for all the elementary particles that compose the atom to interact as they do, their masses should actually be zero.

So what’s happening? The neat solution proposed by Professor Peter Higgs would seem to have it all figured out. He suggests that there is a field which permeates space, which moving particles interact with, thus acquiring the appearance of mass. Imagine a weightless pea (if you can!) moving through treacle.

So does this field actually exist? Quantum theory tells us that fields are associated with particles (a thing called ‘wave-particle duality’) so there must be a particle complementary to the Higgs field. For example, the particle associated with the electromagnetic field is the photon. The particle is known as the Higgs boson, and a lot of time, money, energy and intellect is being applied to finding out whether this particle, and therefore whether the Higgs field, actually exists.

This is where the Large Hadron Collider comes in, built by the European Organization for Nuclear Research (CERN). It works by accelerating protons around a kind of 17-mile underground racetrack, in order to smash them together at astronomically high speeds, with the purpose of creating smaller bits of matter, one of which could be a Higgs boson. If created, it would exist for only a tiny fraction of time, but hopefully long enough to be detected.

Today it seems that the elusive boson, that Pimpernel of particles, has indeed been landed!

Is President Obama’s Support for Gay Marriage a Political Ploy or a Moral Choice?

The declaration of support for gay marriage, recently articulated by President Obama, gratified some, astonished others, and drew both great praise and great condemnation. Was this a President demonstrating the moral courage to do what he believed right, or was it a political ploy driven by electoral expediency? Nobody can look into the soul of the President to answer that question for sure, but we can at least appeal to evidence to assess the real and potential impacts of the arguably unexpected announcement.

The immediate effect of this new stance, as reflected in the opinion polling, was mixed. Daily tracking polls by Gallup (2,200 registered voters) and Rasmussen (1,500 ‘likely’ voters) offered conflicting evidence, with Gallup noting a small shift to President Obama and Rasmussen a bigger immediate shift to Governor Romney. If we want to artificially split the tie, we have a CBS News/New York Times call-back poll of 562 registered voters first interviewed on April 13-17 which indicated a small swing to Romney over the preceding month. When asked whether the declaration of support would make respondents more or less likely to vote for the President, the response was marginally negative, though not enough to worry his campaign. More significant, perhaps, is the finding in this poll that a majority of the voting public (including 70% of self-declared ‘Independent’ voters) attributed the motivation of the announcement to political calculation.

If the public is right in this view, one might suppose that the President’s new position would lead to an uptick in expectations of an Obama victory this November. Yet I discern no evidence of this in either the polling or the betting. The polling, as explained, is at best a mixed bag, while the professional money, whether gauged by the action on the betting/trading exchanges (sometimes known as prediction markets) like Betfair and Intrade, or with bookmakers, moved noticeably, if nowhere near decisively, away from the Democrat. As of a week after the Obama announcement, the money flowing through Betfair still indicated about a 62 per cent probability of re-election for the President, similar to that given by the spread bookmaker Sporting Index. Intrade had his chances down to 58 per cent. In each case, this is a few points down on the situation pre-announcement. To put this in context, Governor Romney was still given not a whole lot more than a 1 in 3 chance of being elected President, but these movements in what were previously pretty stable markets are certainly noteworthy.
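The probabilities quoted here are just exchange prices read the other way round. A minimal sketch of the conversion, where the decimal prices (1.61 and 2.63) are assumptions for illustration rather than actual Betfair quotes:

```python
# Convert decimal (exchange-style) prices into implied probabilities.
# The prices below are illustrative, not actual quoted Betfair prices.

def implied_probs(prices):
    raw = [1 / p for p in prices]
    total = sum(raw)                 # the book's total, usually slightly over 1
    return [r / total for r in raw]  # normalise away that margin

obama_price, romney_price = 1.61, 2.63
probs = implied_probs([obama_price, romney_price])
print([round(p, 2) for p in probs])   # roughly [0.62, 0.38]
```

On a person-to-person exchange the book totals very close to 100 per cent, which is why prices there translate so directly into probabilities.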

So if the President’s declaration was born of political calculation, what was the calculation? For one explanation, we might look to that part of the equation which contains campaign funding. In particular, we might look at the relatively recent legalisation of SuperPAC money by the conservative majority on the US Supreme Court (in the ‘Citizens United’ decision), whereby individuals and corporations can channel unlimited contributions to groups working to promote particular political agendas, notably through negative advertising against opposition candidates.

The important point here is that the weight of SuperPAC money is expected to heavily favour Republican candidates in general and Governor Romney in particular. To this extent it is arguably no coincidence that the Supreme Court was divided 5-4 on ‘party’ lines on this issue. Either way, the decision does tend to force the hand of candidates to court money by appealing to the wealthiest members of a candidate’s political base, which sometimes entails drawing clearer and less nuanced distinctions than perhaps would otherwise be optimal political strategy. To this extent, there is an irony in that conservative legal opinion on campaign finance might have triggered the exact opposite of conservative lay opinion on the issue of gay marriage. Unless, that is, the consequence is the election of a candidate who backs the idea of a constitutional amendment to enshrine marriage as exclusively the preserve of a man and a woman. There is, of course, another explanation for the Obama declaration. Simply put, that there is still enough doubt about the electoral impact of the new stance on gay marriage to allow the President to engage in a bit of good old-fashioned leadership. Now that’s a radical thought!

With Santorum and Gingrich gone, who will Romney turn to?

The decision by Newton Leroy (‘Newt’) Gingrich to suspend his campaign, announced on 2 May, 2012, was a rational if belated response to the delegate and polling arithmetic. For suspension, read cancellation and we witnessed the end, in all but name, of the race for the Republican nomination. Step forward Mitt Romney, presumptive nominee of the Republican Party for President of the United States.

The departures of Santorum and Gingrich did not constitute the ideal scenario, of course, for an Obama campaign all too happy to see the Republican pack continue to spend huge sums tearing itself apart. So might we have expected Santorum’s and now Gingrich’s departure to cause something of a shift in the betting markets in favour of the former Governor of Massachusetts? Expect what you like, but in fact there was scarcely a ripple of interest. The inevitability of Romney as GOP nominee was already pretty much factored in, well before Santorum and Gingrich woke up to the inevitability of it all.

This is not to say that interest in the political markets was entirely unaffected. Instead attention turned to the former Governor’s next important decision. It is a decision which no Presidential nominee takes lightly, and which some have handled much better than others. It was the decision that probably won JFK the White House by delivering the South to the Democrats. It was also the decision which at one point turned a little-known Governor of Alaska into the favourite in some books to stand a heartbeat away from the Presidency of the United States. Mitt Romney will soon need to make this decision, to choose whom he wants on the ticket as his Vice-Presidential running mate.

The key question is whether he will follow the 2008 Republican precedent in selecting a potential game-changer, or instead plump for an established, mainstream safe pair of hands. Well, if you’re heading for almost sure defeat, a safe but unexciting choice might mean losing by less. In Presidential politics, however, losing by one electoral vote produces the same outcome as losing by a hundred. Put another way, in a state of the world where you are likely to go down by three points, anything which shakes up the range of reasonable outcomes around this expectation is a bonus. When you are down, volatility is very much your friend. This was the probable rationale for the selection by John McCain of Sarah Palin.
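The volatility argument can be put in numbers. Here is a minimal sketch, assuming the final margin is normally distributed around the expected three-point deficit; the two spreads are invented purely for illustration:

```python
import math

def win_prob(expected_margin, volatility):
    """P(final margin > 0) when the margin ~ Normal(expected_margin, volatility)."""
    z = expected_margin / volatility
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Trailing by an expected 3 points:
print(round(win_prob(-3, 2), 3))   # low-variance, "safe" campaign: about 0.067
print(round(win_prob(-3, 8), 3))   # high-variance, game-changing pick: about 0.354
```

Same expected deficit, but the higher-variance strategy multiplies the chance of actually winning several times over. That is exactly why volatility is the trailing candidate's friend.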

So will Romney plough the same furrow as the man who beat him to the nomination in 2008? I tend to doubt it. The highly controversial 5-4 decision of the US Supreme Court to allow so-called “SuperPACs” to spend unlimited sums of money helping or hindering individual candidates is almost certainly to the advantage of the Republican party in general, and Governor Romney in particular. Money talks in a big way in US politics and the presumptive Republican nominee knows this very well. He has the confidence which the cushion of a lot of money to spend tends to give a candidate. He is also a very cautious politician by nature and will not want to risk losing an election which he believes he might win by choosing a maverick running mate who could so easily run out of control. I think that rules out surprise picks. Instead, Willard Mitt Romney is likely to choose a candidate who could potentially turn an important toss-up state into the probable column. Of all VP picks on the radar, that comes down to Senator Marco Rubio of Florida, Senator Rob Portman of Ohio and Virginia Governor Bob McDonnell. To this list, you might add Governor of Indiana Mitch Daniels, but if the election turns on Indiana, Mr. Romney has probably lost already.

The former Governor will also want a running mate who is a known quantity, who is likely to complement the man at the top of the ticket and is unlikely to rock the buttoned-up Romney campaigning style. This probably rules out Governor Chris Christie of New Jersey, but just about pulls in Congressman Paul Ryan of Wisconsin (though his is not really a swing state); Governor Daniels and Senator Portman also fulfil this category.

Now solve the simultaneous equation and we have a candidate who might just fit the bill.

Is the Ohio Senator just the ticket for Mr. Romney? Or are we all in for a Palin-type surprise?

We shall see.

Can Derren Brown Help You Win the Lottery?

Derren Brown, the illusionist, is no stranger to the use of the idea of the wisdom of crowds as part of his entertainment package. A few years ago, for example, he selected a group of people and asked them to estimate how many sweets were in a jar.

All conventional ‘wisdom of crowds’ stuff, albeit wrapped as part of a magical mystery tour. His more recent venture into this world of apparent wisdom went down a rather singular avenue, however, as he explained how a group of 24 people could predict the winning Lottery numbers with uncanny accuracy.

The idea in essence was that each of the 24 would make a guess about the number on each ball and the average of each of these guesses would converge on the next set of winning numbers. It appeared to work – but that is the thing about illusionists; they are good at producing illusions.

I will not go into how he generated the effect of predicting the lottery draw, because there is no point if you already know, and because it would spoil the fun if you don’t. What is sure, however, is that the musings of the crowd had nothing to do with it.

But why not? After all, if the crowd can accurately guess the weight of an ox or the number of jelly beans in a jar, why not the numbers on the lottery balls? The simple answer, of course, is because the lottery balls are drawn randomly. And the thing about random events is that they are unpredictable. This is at the heart of what economists term ‘weak form market efficiency’, i.e. that future movements in market prices cannot be predicted from past movements. In this sense, the series has no memory.

So what is likely to happen if you do get a group of friends around and ask each to choose six numbers for the next Lottery draw? If you take the average of these numbers, my best estimate is that you are likely to end up with a prediction for each ball that is about 30 or probably less. Why so? Partly this is because averaging a large number of selections is likely to produce a number somewhere nearer the mid-point of the set of numbers than the extremes, but also because birthdays are particularly popular numbers.
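A quick simulation makes the point. The 70 per cent birthday bias below is an assumption for the sake of illustration; the exact figure does not matter, only the direction it pushes the average:

```python
import random

random.seed(0)

# Toy model of how people pick lottery numbers: numbers up to 31
# (birthdays) are assumed to get extra weight.
def pick_number():
    if random.random() < 0.7:        # assumed birthday bias
        return random.randint(1, 31)
    return random.randint(1, 49)

picks = [pick_number() for _ in range(100_000)]
print(sum(picks) / len(picks))       # well below the 1-49 midpoint of 25
```

With these assumed weights the average pick comes out around 18 or 19, illustrating how birthday-biased selections drag the crowd's "prediction" for every ball toward the low end.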

But if you do use popular numbers (birthdays and numbers which form a simple pattern on the ticket) and just happen to win, you’re likely to be sharing your winnings with a lot of other people who’ve chosen the same numbers as you. The better strategy is to populate your ticket with bigger numbers, and to avoid neat patterns.

This strategy won’t alter your chance of winning but it will increase how much you can expect to win if you do win. And that is no illusion!
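The arithmetic behind this is simple. A sketch with made-up figures, where the jackpot size and the expected co-winner counts are assumptions and only the 6-from-49 odds are real:

```python
from math import comb

p_win = 1 / comb(49, 6)        # 1 in 13,983,816 for a 6-from-49 draw
jackpot = 5_000_000            # assumed jackpot

def expected_jackpot_return(expected_co_winners):
    # Winning with popular numbers means splitting with more co-winners.
    return p_win * jackpot / (1 + expected_co_winners)

print(expected_jackpot_return(3.0))   # a birthday-heavy, patterned line
print(expected_jackpot_return(0.2))   # an unpopular line: same odds, bigger share
```

The probability of winning is identical in both cases; only the denominator changes. That denominator is the whole strategy.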

Polls and Prediction Markets: A Flashback to January, 2008

These are the markets which accurately predicted the winner of every single state in the 2004 US Presidential election and the winner of every single contested state in the 2006 US Senate elections. These are the markets which said that Barack Obama would win the Iowa caucuses comfortably, as would Mike Huckabee. These are the self-same markets which forecast a handsome victory for John McCain in New Hampshire at the same time as the polls and pundits were declaring the race too close to call.

And yet the markets showed not a clue that Hillary Clinton would overcome the momentum of the charismatic Senator from Illinois, but instead declared the race for Obama with the same confidence as the media and the exit polls. Indeed, it took something of an avalanche of real results showing the former First Lady handily ahead before the baton of market favouritism changed hands.

So what happened?

The conventional wisdom is that the models used by the pollsters under-estimated the turnout of female voters by a significant factor. In the event, women, who make up 57 per cent of the New Hampshire electorate, went for Hillary by a margin of 12 clear points, in contrast to Iowa where she lost the female vote by five.

It was the tears that did it, came the familiar cry. Not so, in my opinion. The defining moment for me came in the final debate when the New York Senator was asked a question about a likeability problem. Her response – “Well, that hurts my feelings!” – was funny, warm and engaging, only to be interrupted by a curt “You’re likeable enough” from Mr. Obama. In his defence, the apparently dismissive tone in which the words were delivered was probably unintentional. But the damage was done.

It was what those familiar with Hillary history call the “Rick Lazio moment”, when her Republican Senate opponent in the 2000 New York campaign marched across the stage at her during a debate and demanded she sign a pledge card he brandished in her face. Instantly Lazio turned off a good proportion of New York’s women voters, and a not insignificant number of the men.

It all goes to show that the markets are usually a much more accurate predictor of election outcomes than are the polls, but there are times when those trading the markets are a little too dependent on charting and interpreting the numbers. Sometimes voters are motivated by factors which cannot be reduced to raw numbers. Those who are wise to this when it occurs stand to make a lot of money. As time would tell!

How Today’s Sports Forecasters Stand on the Shoulders of a Medieval Giant!

When asked to list my all-time heroes, the name of William of Ockham (or Occam) is never far from my lips. Born in the late 13th century, in the Surrey village of Ockham, this Franciscan philosopher, theologian and political writer is generally considered to be one of the major figures in medieval scholarship.

In this regard, he ranks alongside the likes of his fellow theologians Thomas Aquinas and John Duns Scotus in the pantheon of great pre-Renaissance thinkers. Despite the title he earned at Oxford University of Venerabilis Inceptor (‘Worthy Beginner’), it is by his alternative title of Doctor Invincibilis (‘Unconquerable Doctor’) that he comes down to us. Of all his writings, and they are each worthy of separate study, it is for his principle of parsimony in explanation and theory-building that he is best known today. It is a principle that Fox Mulder refers to in an episode of The X-Files and that Jodie Foster defers to in ‘Contact’. Indeed, in William Peter Blatty’s novel, Legion (on which ‘The Exorcist III’ is based), the lead character complains that he was not put on earth “to sell William of Occam door to door.” He needn’t have bothered. William of Ockham sells himself well enough without help, through the principle that is known as ‘Occam’s Razor.’

The Razor is perhaps most clearly defined in Encyclopedia Britannica’s Student edition, where it is taken as an admonishment to devise no more explanations than necessary for any given situation. Put another way, it advises that one should opt for explanations in terms of the fewest possible number of causes, factors or variables. The adults’ version of Encyclopedia Britannica puts it more elegantly, but perhaps less clearly, in these terms – ‘Pluralitas non est ponenda sine necessitate’ (‘Plurality should not be posited without necessity’). As such, the principle can be interpreted as giving precedence to simplicity; of two competing explanations of an entity, the simpler is to be preferred. There are some higher truths, which may be known to us by experience or revelation, and which Ockham regards as necessary rather than contingent entities, to which we are not advised to apply the razor. This is a part of Ockham’s trenchant analysis which is often forgotten, but at least need not concern us when considering the theme of today’s article.

So how do modern-day sports forecasters stand on the shoulders of this medieval giant? The best explanation is perhaps by way of example, and for this we need to travel to the Hong Kong racetrack and to the professional gamblers who devise sophisticated forecasting models of the outcomes of the races run at the Sha Tin and Happy Valley tracks. The basic methodology is to identify each individual factor that could possibly predict the outcome. And what do you do then? How do you decide what to include and what not? For the answer I asked a man who has conservatively made tens of millions of dollars at the track from this very approach. As we enjoyed the view from his Sydney penthouse, he summed it up in a sentence. “I apply Occam’s Razor”, he said, “it really is as simple as that!”

Alice’s Adventures in the Wonderful Looking Glass World of Prediction Markets

When Alice journeyed through the looking glass, Lewis Carroll tells us, she came across a Queen who claimed to be “one hundred and one, five months and a day.” “I can’t believe that!” said Alice. “Can’t you?” the Queen said in a pitying tone. “Try again: draw a long breath and shut your eyes.” Alice laughed. “There’s no use trying”, she said: “one can’t believe impossible things.” “I daresay you haven’t had much practice,” said the Queen. “When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”

Well, there’s not space enough to consider six impossible things here, but let’s think of one, and it’s a big one. It’s the idea that market prices, be it the stock market or the Betfair market or any other person-to-person betting market, already incorporate and fully reflect all available information. And so you can’t beat the market, unless you get lucky. Economists call this idea the ‘efficient market hypothesis’ and such a world as ‘informationally efficient.’

Now there’s a problem with this ‘looking glass’ world because in it nobody has an incentive to gather information. Why? Because information acquisition is costly and would add nothing to what can be obtained by simply looking at market prices. But if nobody acquires costly information, no trading will take place. And if nobody trades, what drives the market prices to incorporate and reflect all available information, i.e. what keeps the markets efficient? It’s called the ‘information paradox’, first formalized in a paper published by Sanford Grossman and Joseph Stiglitz in 1980, called ‘On the Impossibility of Informationally Efficient Markets.’ The same applies if information is costless to obtain but there are trading costs. In the real world, of course, there are both information and trading costs, and we have a paradox in spades. But still we are told that the market is informationally efficient. Welcome to the looking glass world of modern financial economics!

So is there a solution to the paradox? Consider the case of a betting market about the age of the looking glass queen. Let’s say nobody knows but lots of people are making guesses, some better informed than others, and betting looking glass money on the basis of these guesses. The queen just hasn’t been telling. Now she has told Alice, and the market will be settled later in the day when she tells the same to the whole world, one in which impossible things really are believed. Well, Alice has a little bit of time to place her bets before the rest of the world get to know the truth and if she’s clever she’ll bet enough to drive the market to the conclusion that the queen really is one hundred and one, five months and a day. And when the queen announces this is so, everyone will believe her. Alice will be rich, in a looking glass kind of way, and the queen will be not a day older. And the market will be efficient once again! And I think to myself, What a Wonderful World!

Could Prediction Markets Have Helped Prevent 9/11?

The Japanese attack on Pearl Harbor in December 1941 came as a surprise to the US intelligence community. The attack by al Qaeda on the World Trade Center and on the Pentagon in September 2001 came as a similar surprise. But the information was out there. So why the surprise? Simply put, it was because the systems were not in place with which to put the jigsaw together in time. The 9/11 Commission Report stated the problem like this: “The biggest impediment to all-source analysis – to a greater likelihood of connecting the dots – is the human or systemic resistance to sharing information.” James Surowiecki, in his book, ‘The Wisdom of Crowds’, offers the following perspective on this failure: “What was missing in the intelligence community … was any real means of aggregating not just information but also judgements. In other words, there was no mechanism to tap into the collective wisdom of National Security nerds, CIA spooks, and FBI agents. There was decentralization but not aggregation …”

The question is whether the market can help achieve this. Some people within the US Department of Defence already thought so, and had been working on just such an idea for several months when al Qaeda struck. Indeed, in May 2001 the Defense Advanced Research Projects Agency (DARPA) had issued a call for proposals under the heading of ‘Electronics Market-Based Decision Support’ (later ‘Future Markets Applied to Prediction’, or FutureMAP). The remit prescribed for FutureMAP was to create market-based techniques for avoiding surprise and predicting future events. It was not long, however, before the media and key members of the political class began to train their guns on the idea of such a market. After all, it isn’t difficult to portray what was devised as a policy analysis market as no more than a forum for eager traders to profit from death and destruction. The populist arguments won the day and DARPA was forced to cancel the project.

While most of the arguments against the market were specious, there was some genuine intellectual concern as to how effective it would be likely to be. In particular, Joseph Stiglitz, winner of the 2001 Nobel Prize for economics, argued in an article published in the Los Angeles Times on 31 July 2003 (‘Terrorism: There’s No Futures in It’), that the market would be too “thin” (i.e. there would be too little money traded in the market) for it to be a useful tool for predicting events meaningfully. His argument was based on work he had previously published showing that markets can never be perfectly efficient when information is costly to obtain. The cost of obtaining and processing this information is, by implication, likely to act as a significant disincentive particularly in the context of a thin market (and hence low rewards).

I am not so sure. Is it obviously the case that a properly constructed market, populated by suitably motivated (and perhaps screened) players, can be dismissed so easily? Well, the jury’s still out on this, and in the meantime so are the so-called ‘terrorism futures’. For how long, I wonder?

Links:

http://www.commondreams.org/views03/0731-08.htm

http://www.amazon.co.uk/Wisdom-Crowds-Many-Smarter-Than/dp/0349116059/ref=sr_1_1?ie=UTF8&qid=1321962753&sr=8-1

Making a Model that Can Make Millions

Is it possible to construct a mathematical model of the performance of a golfer, a boxer, a football team, a cricket team, a snooker player, a horse, a dog, or whatever, that would predict well enough to allow us to earn a systematic profit over time?

The first problem is that any sporting event is influenced by random factors, which statisticians call ‘noise.’ In any given situation, this noise can overwhelm the best model to generate an unexpected outcome. That’s the bad news. The good news is that these factors tend to balance out over time, so any properly devised forecasting model which takes into account those factors which are predictable has the potential to perform very well indeed. Such models are known in the trade as ‘fundamental’ handicapping strategies, because they are based on fundamental information about performance.

The question is whether such a system exists which can actually turn a profit, or indeed whether it has ever existed. Ask Bill Benter how he made his millions at the Hong Kong racetrack and you have your answer. Basically, he constructed a computer model designed to estimate current performance potential. This involved the investigation of variables and factors with potential predictive significance and the refining of these individually so as to maximize their predictive accuracy.

In doing so he employed state-of-the-art econometric forecasting techniques, which he was confident enough to summarize in a classic paper entitled ‘Computer based horse race handicapping and wagering systems: a report.’ The basic question he seeks to answer in that paper is whether it is possible to construct a forecasting model which can generate a systematic profit at the races. Ever the practitioner, he provides the answer by constructing just such a model.

His method is to identify each individual factor that could possibly predict the outcome of a race and then to whittle these down to the most reliable and effective. Once he had a model that worked on past data, he tested it ‘out-of-sample’, i.e. on a large sample of further races.
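The specifics of Benter's model are not public beyond that paper, but the discipline it describes (try candidate factors, keep only what survives on races the model has never seen) can be sketched on synthetic data. Everything below is invented for illustration:

```python
import random

random.seed(1)

# Synthetic races: each runner has two observable factors, A and B.
# Only A truly affects the result, and any single race is noise-dominated.
def make_race(field=8):
    runners = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(field)]
    winner = max(range(field),
                 key=lambda i: 1.5 * runners[i][0] + random.gauss(0, 1))
    return runners, winner

races = [make_race() for _ in range(2000)]
fit_set, holdout = races[:1000], races[1000:]

# Score each runner as wa*A + wb*B and predict the top score to win.
def hit_rate(weights, data):
    wa, wb = weights
    hits = 0
    for runners, winner in data:
        pred = max(range(len(runners)),
                   key=lambda i: wa * runners[i][0] + wb * runners[i][1])
        hits += (pred == winner)
    return hits / len(data)

# Pick the factor set that does best on the fitting data...
candidates = {"A only": (1, 0), "A and B": (1, 1), "B only": (0, 1)}
best = max(candidates, key=lambda name: hit_rate(candidates[name], fit_set))

# ...then judge it, Benter-style, on races it has never seen.
print(best, round(hit_rate(candidates[best], holdout), 3))
```

The irrelevant factor B can look useful by accident in a small sample; it is the out-of-sample test that exposes it, which is precisely why the step matters.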

Sometimes he found that a variable was useful in predicting race outcomes but he really couldn’t understand why that should be the case. In such circumstances, he decided the best policy was not to care. Faced with a choice between a profitable model that he couldn’t fully explain, and an unprofitable model that he understood perfectly, he chose the former. Ideally, of course, you would work with a model that both works and you can fully explain, but Benter’s bottom line is that if it works, it doesn’t need fixing.

So what is that bottom line? Well, Bill Benter is now a very rich man even by the standards of most very rich men, and he made it on the basis of a sophisticated forecasting model which overcame track takes of about 19 per cent. In a masterly piece of understatement he concluded in his paper that “… at least at some times at some tracks, a statistically derived fundamental handicapping model can achieve a significant positive expectation.” Significant indeed! Why not try it some time?
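The scale of that achievement is worth a moment's arithmetic. At tote (pari-mutuel) prices the take is deducted from the pool before dividends are declared, so a model has to out-judge the crowd by a wide margin just to break even. A sketch, with an assumed dividend of 6.0:

```python
take = 0.19        # track take of about 19 per cent, as quoted above
price = 6.0        # hypothetical tote dividend per unit staked

breakeven_prob = 1 / price           # true win chance needed to break even
crowd_estimate = (1 - take) / price  # win chance implied by the pool's money

print(round(breakeven_prob, 3))      # 0.167
print(round(crowd_estimate, 3))      # 0.135
print(round(breakeven_prob / crowd_estimate, 3))   # must beat the crowd by ~23%
```

In other words, a bettor paying a 19 per cent take needs a model whose probability judgements are more than a fifth better than the crowd's before the first penny of profit appears. That is the mountain Benter's model climbed.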

Should You Sack the Manager?

Prime Minister Harold Wilson used to say that there were few jobs more stressful and more precarious than that of a politician but that the post of football manager was certainly one of them. Well, opinions are divided about the benefits of sacking those who run the country, and it is difficult to devise a measure which all would agree on, but there is a great deal of published analysis available which allows us to judge the effect of a change of management in other fields.

A seminal study in this regard, published by Professors Lieberson and O’Connor, found little evidence of any link between changes of Chief Executive Officer (CEO) and subsequent movements in company performance indicators such as sales and profits.

Other studies have found, in contrast, that changes of top managers do tend to be followed by sharp improvements in performance, particularly in certain sectors, such as computer equipment manufacture. Some areas of activity do seem impervious to changes at the top, however, as witnessed by a study of the effect of changes of Methodist ministers on church attendance, membership and donations. There was no discernible effect.

There is also a well-established literature which considers the impact of management change on team performance in professional sport, and this can be sub-divided into three distinct theories. According to the “common sense” theory, when a team is under-performing the manager is replaced, and if a better manager takes over, performance should improve. In the “vicious circle” theory, poor performance tends to trigger managerial change, but the disruption caused tends to make things worse. Then there is the “ritual scapegoating” theory, in which the appointment of a new manager makes no difference, on average, to team performance.

Rick Audas, Stephen Dobson and John Goddard disentangled the competing theories, for the case of English football, in an article published in the Journal of Economics and Business. A detailed examination of the results of their study reveals that on average it takes up to 16 matches for a team subject to a within-season change of manager to adapt to the usual changes of tactics and playing style which ensue, and even then the team’s win rate tends to revert only to where it was prior to the change. The transition period is, on average, simply a sink into which some of the points that would have been earned are emptied.

What these results appear to tell us, then, is that the rate at which managers are replaced in English football is not optimal. The turnover is simply too fast.

So in light of these findings, is it possible to offer a rational explanation for existing attitudes to management change? Well, there may just be such an explanation, and it lies in a thing called “variance”, which is the dispersion of results about the average.

The idea here is that a change of manager may not improve performance but it does shake things up a bit. This can only be good news, of course, for a team which is likely to go down anyway. The reason is that while a change of manager may on average mean even fewer points, the change does at least improve the small chance of pulling clear of the relegation zone. Seen like this, it can be likened to an all-or-nothing throw of the dice. That’s the rational side of the argument. The problem comes when those in charge of the team’s future grow just a little too fond of the dice.
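The dice-throw logic can be made concrete. A minimal sketch, assuming end-of-season points totals are roughly normally distributed; all the figures (points means, spreads, safety line) are invented for illustration:

```python
import math

def survival_prob(expected_points, volatility, safety_line=38):
    """P(points exceed the relegation line), points ~ Normal(mean, sd)."""
    z = (expected_points - safety_line) / volatility
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Keep the manager: slightly more points on average, steady and predictable.
print(round(survival_prob(34, 3), 3))   # about 0.091

# Sack him: fewer points on average, but a much wider spread of outcomes.
print(round(survival_prob(32, 7), 3))   # about 0.196
```

Even though the change lowers the expected points total, the extra variance more than doubles the chance of clearing the line. For a team that is likely doomed anyway, that is a rational throw of the dice.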

Links:

http://www.jstor.org/pss/2392715

http://onlinelibrary.wiley.com/doi/10.1111/1468-0270.00039/abstract

http://thevideoanalyst.com/does-sacking-the-manager-improve-results