
Why were the UK election polls so spectacularly wrong?

May 9, 2015

If the opinion polls had proved accurate, we would have been woken up on the morning of May 8, 2015, to a House of Commons in which the Labour Party had quite a few more seats than the Conservatives, and by the end of the day the country would have had a new Prime Minister called Ed Miliband. This didn’t happen. Instead the Conservative Party was returned with almost 100 more seats than Labour and a narrow majority in the Commons. So what went wrong? Why did the polls get it so wrong?

This is not a new question. The polls were woefully inaccurate in the 1992 General Election, predicting a Labour victory, only for John Major’s Conservatives to win by a clear seven percentage points. While they had performed a bit better since, history repeated itself this year.

So what is the problem and can it be fixed? A big issue, I believe, is the methodology used. Pollsters simply do not make any effort to duplicate the real polling experience. Even as Election Day approaches, they very rarely identify to those whom they survey who the candidates are, instead simply prompting with party labels. This tends to miss a lot of late tactical vote switching. Moreover, the filter they use to determine who will actually vote, as opposed to those who merely say they will, is clearly faulty, which can be seen if we compare the actual voter turnout figures with those projected in the polling numbers. Almost invariably, they over-estimate how many of those who say they will vote do actually vote. Finally, the raw polls do not make allowance for what we can learn from past experience about what happens when people actually make the cross on the ballot paper compared to their stated intention. We know that there tends to be a late swing to the incumbents in the privacy of the polling booth. For this reason, it is wise to adjust the raw polls for this late swing.
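The two correctable adjustments described above — discounting declared supporters who will not actually turn out, and adding a late swing to the incumbents — can be sketched in a few lines. All the numbers below are hypothetical, chosen only to illustrate the mechanics; they are not estimates from this article or from any real poll.

```python
def adjust_poll(raw_share, turnout_factor, late_swing=0.0):
    """Adjust a party's raw polled vote share.

    raw_share      -- percentage of respondents declaring for the party
    turnout_factor -- fraction of those declared supporters expected to
                      actually vote (illustrative, not a real estimate)
    late_swing     -- percentage points added for the well-documented
                      late swing to incumbents in the polling booth
    """
    return raw_share * turnout_factor + late_swing

# Hypothetical final poll: the two main parties tied on 34%.
# Assume (purely for illustration) that declared Labour voters turn
# out at a lower rate, and that the incumbent party gains a late swing.
labour = adjust_poll(34.0, turnout_factor=0.90)
conservative = adjust_poll(34.0, turnout_factor=0.97, late_swing=1.5)

print(f"Adjusted: Con {conservative:.1f}%, Lab {labour:.1f}%")
```

Under these made-up inputs a dead-heat poll becomes a clear Conservative lead, which is the shape of the correction the paragraph above argues the raw 2015 polls were missing.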

Of all these factors, which was the main cause of the polling meltdown? For the answer, I think we need only look to the exit poll, which was conducted at polling stations with people who had actually voted. This exit poll, as in 2010, was quite accurate, while similar exit-style polls conducted during polling day over the telephone or online, with those who declared they had voted or were going to vote, failed pretty much as spectacularly as the other final polls. The explanation for this difference can, I believe, be traced to the significant gap between the number of those who declare they have voted or will vote and those who actually do vote. If this gap works particularly to the detriment of one party compared to another, then that party will under-perform in the actual vote tally relative to the voting intentions declared on the telephone or online. In this case, it seems a very reasonable hypothesis that rather more of those who declared they were voting Labour failed to actually turn up at the polling station than was the case with declared Conservatives. Add to that late tactical switching and the well-established late swing in the polling booth to incumbents and we have, I believe, a large part of the answer.

Interestingly, those who invested their own money in forecasting the outcome performed a lot better in predicting what would happen than did the pollsters. The betting markets had the Conservatives well ahead in the number of seats they would win right through the campaign and were unmoved in this belief throughout. Polls went up, polls went down, but the betting markets had made their mind up. The Tories, they were convinced, were going to win significantly more seats than Labour.

I have interrogated huge data sets of polls and betting markets over many elections stretching back years, and this is part of a well-established pattern. Basically, when the polls tell you one thing and the betting markets tell you another, follow the money. Even if the markets do not get it spot on every time, they will usually get it a lot closer than the polls.

So what can we learn going forward? If we want to predict the outcome of the next election, the first thing we need to do is to accept the weaknesses in the current methodologies of the pollsters, and seek to correct them, even if it proves a bit more costly. With a limited budget, it is better to produce fewer polls of higher quality than a plethora of polls of lower quality. Then adjust for known biases. Or else, just look at what the betting is saying. It’s been getting it right since 1868, before polls were even invented, and continues to do a pretty good job.

