It shouldn’t be possible for us to exist. But we do. That’s the sort of puzzle I like exploring. So I will.

Let’s start with the so-called ‘Cosmological Constant’. This is an extra term that Einstein added to his equations of general relativity in order to describe a non-expanding universe. It represents the energy density of the vacuum of space, and it explains why a static universe would not collapse in upon itself through the action of gravity. It’s true that the force of gravity is infinitesimally small compared to the electromagnetic force, but gravity has far more influence on the universe at large because all the positive and negative electrical charges in the universe seem, somehow, to balance out. Indeed, if there were just a 0.00001 per cent difference between the net positive and negative electrical charges within a body, that body would be torn apart and cease to exist. The cosmological constant, then, is added to the equations simply to balance the gravity contributed by all of the matter in the universe. What it represents is a sort of unobserved ‘energy’ in the vacuum of space, possessing density and pressure, which prevents a static universe from collapsing in upon itself.

But we now know from observation that galaxies are actually moving away from us and that the universe is expanding. In fact, observations in 1998 of very distant supernovae, including with the Hubble Space Telescope, showed that the Universe is expanding more quickly now than in the past. So the expansion of the Universe has not been slowing due to gravity; it has been accelerating. We also know how much of this unobserved energy there is, because we know how it affects the Universe’s expansion. But how much should there be? We can calculate this using quantum mechanics. The easiest way to picture the calculation is to visualise ‘empty space’ as containing ‘virtual’ particles that continually form and then disappear.
This ‘empty space’, it turns out, should according to that calculation ‘weigh’ 10 to the power of 93 grams per cubic centimetre. Yet the actual figure differs from the predicted one by a factor of 10 to the power of 120: the predicted ‘vacuum energy density’ is simply 10 to the power of 120 times too big. That’s a 1 with 120 zeros after it. So something is cancelling out almost all of this energy, making it 10 to the power of 120 times smaller in practice than it should be in theory. In other words, the various components of the vacuum energy are arranged so that they essentially cancel out.
Now this is very fortuitous. If the cancellation figure were different by just one power of ten, 10 to the power of 119, then galaxies could not form, as matter would not be able to condense: no stars, no planets, no life. So we are faced with the mind-boggling fact that the positive and negative contributions to the cosmological constant cancel to 120-digit accuracy, yet fail to cancel at the 121st digit. In fact, the cosmological constant must be zero to within about one part in 10 to the power of 120 (and yet be non-zero), or else the universe would either have dispersed too fast for stars and galaxies to have formed, or else have collapsed upon itself long ago. How likely is this by chance? It is essentially the equivalent of tossing a coin and needing, and managing, to get heads roughly 400 times in a row. Go on. Do you feel lucky? Now, that’s just one constant that needs to be just right for galaxies and stars and planets and life to exist. There are quite a few others, independent of this, which have to be equally just right, but this I think sets the stage. I’ve heard this called the fine-tuning argument.
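The coin-toss equivalence is easy to check. A specific sequence of n fair tosses has probability 2 to the power of minus n, so odds of one part in 10 to the power of 120 correspond to n = 120 × log2(10) tosses. A quick calculation (in Python, purely for illustration):

```python
import math

# A run of n heads on a fair coin has probability 2**-n.
# Find the n for which 2**-n matches odds of one part in 10**120:
# n = log2(10**120) = 120 * log2(10)
tosses = 120 * math.log2(10)
print(f"Equivalent to about {tosses:.0f} consecutive heads")  # ~399
```

That is where the figure of roughly 400 consecutive heads comes from.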
There is also the symmetry/asymmetry paradox, which has recently drawn increasing attention. Where perfect symmetry is required of the Universe, as in the exact balance of positive and negative charge, it obtains, and conservation of electric charge is ensured. Where perfect symmetry would have been fatal, it failed by just the right amount. If the Big Bang had produced exactly equal numbers of protons and antiprotons, of matter and antimatter, they would have annihilated each other, leaving a Universe empty of its atomic building blocks. Fortuitously for the existence of a living Universe, protons actually outnumbered antiprotons, by just one part in a billion. If these had been the other way round, a slight asymmetry of charge, or a perfect symmetry of matter and antimatter; if protons and antiprotons had not differed in number by that one part in a billion, there would be no galaxies, no stars, no planets, no life, no consciousness, no question for us to consider.
In summary, then, if the conditions in the Big Bang which started our Universe had been even the tiniest of a tiniest of a tiny bit different, with regard to a number of independent physical constants, our Universe would not have been able to exist, let alone to allow living beings to exist within it. So why are they so right?
Let us first tackle those (I’ve met a few) who say that if the conditions hadn’t been right we would not have been able even to ask the question. This sounds a clever point, but in fact it is not. It would, for example, be absolutely bewildering how I could have survived a fall from an aeroplane at 39,000 feet onto tarmac without a parachute, but it would still be a question very much in need of an answer. To say that I couldn’t have posed the question if I hadn’t survived the fall is no answer at all.
Others propose the argument that since there must be some initial conditions, the conditions which made the Universe and life within it possible were just as likely to prevail as any others, so there is no puzzle to be explained.
But consider an analogy. Two people, Jack and Jill, are arguing over whether Jill can control whether a fair coin lands heads or tails. Jack challenges Jill to toss the coin 400 times. He says he will be convinced of Jill’s amazing skill if she can toss heads followed by tails 200 times in a row, and she proceeds to do so. Jack could now argue that a head was just as likely as a tail on every single toss of the coin, so this sequence of heads and tails was, in retrospect, just as likely as any other outcome. But that would clearly be a very poor explanation of the pattern that just occurred. That particular pattern was clearly not produced by coincidence. Yet it is the same argument to say that the initial conditions which were just right to produce the Universe and life were just as likely as any one of the billions of other patterns of initial conditions that would not have done so. There may be a reason for the pattern that was produced, but it needs a more profound explanation than proposing that it was just coincidence.
A second example. There is to be one lottery draw, devised by an alien civilisation. The lottery balls, numbered from 1 to 59, are to be drawn, and the only way that we will escape destruction, we are told, is if the 59 balls emerge from the drum as 1 to 59 in sequence. The numbers duly come out in that exact sequence. Now, that outcome is no less likely than any other particular sequence, so a sceptic could claim that we were just lucky. That would clearly be nonsensical. A much more reasonable and sensible conclusion, of course, is that the aliens had rigged the draw to allow us to survive!
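The arithmetic behind the lottery example: 59 balls can emerge in 59 factorial different orders, so the chance of any one exact sequence, including 1 to 59 in order, is one in 59 factorial. A sketch, again in Python:

```python
import math

# Number of possible orderings of 59 distinct balls: 59!
orderings = math.factorial(59)
print(f"59! is roughly {orderings:.3e}")  # about 1.4e80
print(f"P(one exact sequence) is roughly {1 / orderings:.1e}")
```

A number of this size is comparable to estimates of the number of atoms in the observable universe, which is what makes the ‘just lucky’ explanation so hard to take seriously.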
So the fact that the initial conditions are so fine-tuned deserves an explanation, and a very good one at that. It cannot be simply dismissed as a coincidence or a non-question.
An explanation that has been proposed, and that does deserve serious scrutiny, is that there have been many Big Bangs, with many different initial conditions. Assuming that there are billions upon billions of these, eventually one will produce initial conditions that are right for a Universe to at least have a shot at existing.
In this theory, we are essentially proposing a process statistically along the lines of the aliens drawing lottery balls over and over again, countless times, until the numbers come out in the sequence 1 to 59.
On this basis, a viable Universe could arise out of re-generating the initial conditions at the Big Bang until the right lottery numbers eventually come up. Is this a simpler explanation of why our Universe and life exist than an explanation based on a primal cause? And does simplicity in any case matter as a criterion of truth? On the second question, simplicity is usually accepted as a criterion in the realm of scientific enquiry: a simpler explanation of the known facts is usually accepted as superior to a more complex one.
Of course, the simplest state of affairs would be a situation in which nothing had ever existed. This would also be the least arbitrary, and certainly the easiest to understand. Indeed, if nothing had ever existed, there would have been nothing to be explained. Most critically, it would solve the mystery of how things could exist without their existence having some cause. In particular, while it is not possible to propose a causal explanation of why the whole Universe or Universes exists, if nothing had ever existed, that state of affairs would not have needed to be caused. This is not helpful to us, though, as we know that in fact at least one Universe does exist.
Take the opposite extreme, where every possible Universe exists, underpinned by every possible set of initial conditions. In such a state of affairs, most of these Universes might be subject to different fundamental laws, governed by different equations, composed of different elemental matter. There is no reason in principle, on this version of reality, to believe that each different type of Universe should not exist over and over again, up to an infinite number of times; even our own type of Universe could exist billions of billions of times, or more, so that in the limit everything that could happen has happened and will happen, over and over again. This may be a true depiction of reality, but it, or anything remotely near it, seems a very unconvincing one. In any case, our sole source of understanding about the make-up of a Universe is the study of our own. On what basis, therefore, can we scientifically propose that these other, speculative Universes are governed by totally different equations and fundamental physical laws? They may be, but that is a heroic assumption.
Perhaps the laws are the same, but the constants that determine the relative masses of the elementary particles, the relative strengths of the physical forces, and many other fundamentals, differ, while the laws themselves do not. If so, what is the law governing how these constants vary from Universe to Universe, and where do these fundamental laws come from? From nothing? It has been argued that absolutely no evidence exists that any Universe other than our own exists, and that the reason these unseen Universes are proposed is simply to explain the otherwise baffling problem of how our Universe and life within it can exist. That may well be so, but we can park that for now, as it is still at least possible that they do exist.
So let’s step away from requiring any evidence, and move on to at least admitting the possibility that there are a lot of universes, but not every conceivable universe. One version of this is that the other Universes obey the same fundamental laws, are subject to the same fundamental equations, and are composed of the same elemental matter as ours, but differ in their initial conditions and constants. But this leaves us with the question of why there should be only just so many universes, and no more. A hundred, a thousand, a hundred thousand: whatever number we choose requires an explanation of why just that number. This is again very puzzling. If we didn’t know better, our best ‘a priori’ guess would be that there are no universes, no life. We happen to know that’s wrong. So that leaves our Universe alone; or else a limitless number of universes, in which anything that could happen has happened or will happen, over and over again; or else a limited number of universes, which begs the question: why just that number?
Is it because certain special features have to obtain in the initial conditions before a Universe can be born, and these features are limited in number? Let us assume this is so. This only begs the question of why these limited features cannot occur more than a limited number of times. If they could, there is no reason to believe the number of universes containing these special features would be anything less than limitless. So, on this view, our Universe exists because it contains the special features which allow a Universe to exist. But if so, we are back with the problem arising in the conception of all possible worlds, except that in this case it is only our own type of Universe (i.e. one obeying the equations and laws that underpin this Universe) that could exist limitless times. Again, this may be a true depiction of reality, but it seems a very unconvincing one.
The alternative is to adopt the assumption that there is some limiting parameter to the whole process of creating Universes, along the lines of those versions of string theory which claim that there is a limit of 10 to the power of 500 solutions (admittedly a dizzyingly big number) to the equations that make up the so-called ‘landscape’ of reality. That sort of limiting assumption, however realistic or unrealistic it might be, would seem to offer at least a lifeline allowing us to cling on to some semblance of common sense.
Before summarising where we have got to, a quick aside on the ‘Great Filter’ idea, which relates to the question of how life of any form could arise out of ‘dead stuff’ (inanimate matter), let alone in a form which could, and then did, lead to the building blocks of an evolutionary process that created consciousness. From what we know now, observable civilisations don’t seem to arise much, and possibly only once. Indeed, even in a universe that manages to exist, the mind-numbingly small probability of getting from ‘dead stuff’ to us seems to require a series of steps of apparently astonishing improbability. The Filter refers to the causal path from simple dead matter to a visible civilisation. The underpinning logic is that almost everything that starts along this path is blocked along the way, whether by one extremely hard step or by many very, very hard steps. Indeed, it’s commonly supposed that the journey has only ever been completed once here on earth, traceable so far to LUCA (our Last Universal Common Ancestor). If so, this may be why the universe out there seems for the most part to be quite dead. The biggest filter, so the argument goes, is that the origin of life from inanimate matter is itself very, very hard. It’s a sort of Resurrection, but an order of magnitude harder, because the ‘dead stuff’ had never been alive, and nor had anything else! And that’s just the first giant leap along the way. This is a big problem of its own, but that’s for another day, so let’s leave it aside and go back a step, to the origin of the universe. Before we do so, let us, as I suggested before our short detour, summarise very quickly.
Here goes. If we didn’t know better, our best guess, the simplest description of all possible realities, is that nothing exists. But we do know better, because we are alive and conscious, and considering the question. Our Universe, however, is far, far, far too fine-tuned, by a factor of billions of billions, to exist by chance if it is the only Universe. So there must be more; if our Universe is down to the roll of the die, a lot more. But how many more? If there is some mechanism for generating experimental universe upon universe, why should there be a limit to this process? And if there is not, there will be limitless universes, including limitless identical universes, in which in principle everything possible has happened, and will happen, over and over again.
Even if we accept there is some limiter, we have to ask what causes this limiter to exist, and even if we don’t accept there is a limiter, we still need to ask what governs the equations representing the initial conditions to be as they are, to create one Universe or many. What puts life into the equations and makes a universe or universes at all? And why should the mechanism generating life into these equations have infused them with the physical laws that allow the production of any universe at all?
Some have speculated that a universe or universes could be created out of nothing: that a particle and an anti-particle, for example, could in theory spontaneously be generated out of what is described as a ‘quantum vacuum’. According to this theoretical conjecture, the Universe ‘tunnelled’ into existence out of nothing.
This would be a helpful handle for proposing some rational explanation of the origin of the Universe and of space-time if a ‘quantum vacuum’ were in fact nothingness. But that’s the problem with this theoretical foray into the quantum world. A quantum vacuum is not empty, or nothing, in any real sense at all. It has a complex mathematical structure, and it is saturated with energy fields and virtual-particle activity. In other words, it is a thing, with structure and with things happening in it. As such, the equations that would form the quantum basis for generating particles, anti-particles, fluctuations, a Universe, actually exist and possess structure. They are not nothingness, not a void.
To be more specific, according to relativistic quantum field theories, particles can be understood as specific arrangements of quantum fields. So one particular arrangement could correspond to there being 28 particles, another to 240, another to no particles at all, and another to an infinite number. The arrangement which corresponds to no particles is known as a ‘vacuum’ state. But these relativistic quantum field theoretic vacuum states are themselves particular arrangements of elementary physical stuff, no less so than our planet or solar system. The only case in which there would be no physical stuff would be if the quantum fields ceased to exist. But that’s the thing. They do exist. There is no something from nothing. And this something, and the equations which infuse it, has somehow had the shape and form to give rise to protons, neutrons, planets, galaxies and us.
So the question is what gives life to this structure, because without that structure no amount of ‘quantum fiddling’ can create anything. No amount of something can be produced out of nothing, and even empty space is something, with structure and potential. More basically, how and why should such a thing as a ‘quantum vacuum’ even have existed, or begun to exist, let alone be infused with the potential to create a Universe and conscious life out of non-conscious somethingness? Nothing comes from nothing. Uncaused, unconscious, Universe-contingent complex quantum laws and fields cannot act as the Prime Mover for themselves. Such a proposition is logically incoherent. Nor is there an infinite regress in which the number of past events in the history of the universe is infinite. Infinity is a concept, an idea, an abstraction. To propose that it exists in the universe in reality leads to mathematically provable self-contradiction. For example, what is infinity minus infinity? So let’s be clear. Whatever the reason, whatever existed ‘before’ (in a causative sense) the ‘Big Bang’ was something real, albeit immaterial in the conventional sense of the word, as well as timeless and spaceless. Put another way, we need to ask, as a start, where the laws of quantum mechanics come from, why the particular kinds of fields that exist do so, and why there should be any fields at all. These fields, these laws, did exist and do exist; but from whence were they breathed? One set of candidates for this something which is immaterial, timeless and spaceless comprises abstractions: numbers, properties (such as blueness). But uncaused abstract objects, even assuming such an uncaused category actually exists, could not stand in causal relation to an effect such as the origin of the universe or anything else. Numbers, for example, cannot cause anything.
There is no possible world in which the number four, or the number six, or any other number or property (such as blueness) could cause or bring into effect anything, including the origin of the universe.
The other candidate for something immaterial, timeless and spaceless is disembodied mind, which could indeed stand in causal relation to an effect such as the origin of the universe, just as embodied mind (with which we are familiar) can. This sounds quite bizarre, of course, and faintly ridiculous. How can mind exist independently of body? Without getting into the philosophy of the mind-body problem, it is a question which would have sounded a lot stranger a few years ago than it does now. Mind existing independently of body does not these days sound such a strange concept at all. In any case, as a matter of logic the cause would seem to be one of these two candidates, and, as a matter of reason, it is not the former. Hence it would seem, as a matter of logical coherence, if not immediately intuitively obvious, to be the latter, i.e. mind.
So there was something ‘before’ (causatively) the Big Bang, something which possessed structure, something which gave life to the equations which allowed a Universe to exist, and all life within it. And this something has produced a very fine tune, which has produced human mind and consciousness, and the ability to ask the ultimate question. To ask who or what produced a Universe in which we can pose this ultimate question, and to what end? In other words, who or what composed this very fine tune? And to ask the question, Why?
It’s a big question, I realise that, but it’s a very important question. And it’s a question to which we seem to have provided, by the application of logic and reasoning, an answer. For those still seeking some other answer, I hope the analysis and reasoning provided here has at the very least helped point the way.
Further Reading and Links
Derek Parfit, ‘Why anything? Why this? Part 1’, London Review of Books, 20 (2), 22 January 1998, pp. 24-27.
https://www.lrb.co.uk/v20/n02/derek-parfit/why-anything-why-this
Derek Parfit, ‘Why anything? Why this? Part 2’, London Review of Books, 20 (3), 5 February 1998, pp. 22-25.
https://www.lrb.co.uk/v20/n03/derek-parfit/why-anything-why-this
John Piippo, ‘Giving Up on Derek Parfit’, July 22, 2012.
http://www.johnpiippo.com/2012/07/giving-up-on-derek-parfit.html
A universe made for me? Physics, fine-tuning and life.
https://cosmosmagazine.com/physics/a-universe-made-for-me-physics-fine-tuning-and-life
John Horgan, ‘Science will never explain why there’s something rather than nothing’, Scientific American, April 23, 2012.
David Bailey, ‘What is the cosmological constant paradox, and what is its significance?’, 1 January 2017.
http://www.sciencemeetsreligion.org/physics/cosmo-constant.php
Fine Tuning of the Universe
http://reasonandscience.heavenforum.org/t1277-fine-tuning-of-the-universe
The Great Filter – are we almost past it?
http://mason.gmu.edu/~rhanson/greatfilter.html
http://www.overcomingbias.com/2017/12/dragon-debris.html
Last Common Universal Ancestor (LUCA)
https://en.wikipedia.org/wiki/Last_universal_common_ancestor
David Albert, ‘On the Origin of Everything’, Sunday Book Review, The New York Times, March 23, 2012.
Now that Jeb Bush has officially announced he is seeking the nomination of the Republican party as its candidate for president of the United States, it seems a good time to ask how likely it is that he will actually become the 45th US President. How can we best answer this? A big clue can be found in a famous study of the history of political betting markets in the US, which shows that of the US presidential elections between 1868 and 1940, in only one year, 1916, did the candidate favoured in the betting end up losing, when Woodrow Wilson came from behind to upset Republican Charles E. Hughes in a very close contest. Even then, the two were tied in the betting by the close of polling.

The power of the betting markets to assimilate the collective knowledge and wisdom of those willing to back their judgement with money has only increased in recent years as the volume of money wagered has risen dramatically, the betting exchanges alone seeing tens of millions of pounds trading on a single election. In 2004, a leading betting exchange actually hit the jackpot when its market favourite won every single state in that year’s election. The power of the markets has been repeated in every presidential election since. For example, in 2008, the polls had shown both John McCain and Barack Obama leading at different times during the campaign, while the betting markets always had Obama as firm favourite. Indeed, on polling day he was as short as 20 to 1 on to win on the betting exchanges, while some polling still had the race well within the margin of error. In the event, Obama won by a clear 7.2%, and by 365 Electoral College Votes to 173. In 2012, Barack Obama led Mitt Romney by just 0.7% in the national polling average on election day, with major pollsters Gallup and Rasmussen showing Romney ahead. British bookmakers were nevertheless quoting the president at 5 to 1 on (£5 to win £1).
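Those bookmaker quotes convert directly into implied probabilities: a quote of ‘x to 1 on’ means risking x to win 1, an implied chance of x/(x+1). A small illustrative helper (the function name is my own, not any betting API):

```python
def implied_probability(stake: float, winnings: float = 1.0) -> float:
    """Implied win probability of an odds-on quote of stake-to-winnings."""
    return stake / (stake + winnings)

# '5 to 1 on' (Obama with British bookmakers, 2012): risk 5 to win 1
print(f"{implied_probability(5):.1%}")   # 83.3%
# '20 to 1 on' (Obama on the exchanges, polling day 2008): risk 20 to win 1
print(f"{implied_probability(20):.1%}")  # 95.2%
```

So while the polls read ‘squeaker’, the markets were pricing Obama as a better than four-in-five chance.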
Indeed, Forbes reflected the view of most informed observers, declaring: ‘With one day to go before the election, we’re becoming super-saturated with poll data predicting a squeaker in the race for president. Meanwhile, bookmakers and gamblers are increasingly certain Obama will hang on to the White House.’ Obama went on to win by 3.9% and by 332 Electoral College Votes to 206.

What is happening here is that the market is tapping into the collective wisdom of myriad minds, who feed in the best information and analysis they can because their own financial rewards depend directly upon it. As such, it is a case of “follow the money”, because those who know the most, and are best able to process the available information, tend to bet the most. Moreover, the lower the transaction costs (in the UK the betting public do not pay tax on their bets) and information costs (information has never been in more plentiful supply, thanks to the Internet), the more efficient we might expect betting markets to become in translating information today into forecasts of tomorrow. For these reasons, modern betting markets are likely to provide better forecasts than ever before. In this sense, the market is like one mind that combines the collective wisdom of everybody.

So what does this brave new world of prediction markets tell us about the likely Republican nominee in 2016? Last time, the markets were telling us all along that it would be Mitt Romney. This time the high-tech crystal ball they offer is seeing not one face but three, and yes, Jeb Bush is one of them. But two other faces loom large alongside the latest incarnation of the Bush dynasty. One is the Florida senator, Marco Rubio, and the other is the governor of Wisconsin, Scott Walker. According to the current odds, at least, it is very likely that one of these men will be the Republican nominee. According to the betting, Bush will struggle to win the influential Iowa caucus, which marks the start of the presidential election season.
The arch-conservative voters there are expected to go for the man from Wisconsin. New Hampshire, the first primary proper, is likely to be closer. Essentially, though, this will be a contest between the deep pockets and connections of the Bush machine, the deep appeal of Scott Walker to the “severely conservative” (a phrase famously coined by Mitt Romney), and the appeal of Marco Rubio to those looking for solid conservative credentials matched with relative youth and charisma. By the time the race is run, the betting markets currently indicate, Bush is the name most likely (though by no means sure) to emerge. Rubio is likely to push him hardest, and it could be close. At current odds, though, Bush does have the best chance of all the candidates in the field of denying the Democrats, and presumably Hillary Clinton, the White House. But whoever is nominated by the Republican Party, it is the Democrats who are still firm favourites to retain the keys to Washington DC’s most prestigious address.

Note: A version of this blog, with links to sources, first appeared in The Conversation UK on June 15, 2015. https://theconversation.com/jeb-bush-dives-into-the-presidential-race-ask-the-betting-markets-how-hell-do-43300
Twitter: @leightonvw
Why did the Conservatives win an overall (albeit narrow) majority in the 2015 UK Election, and almost a hundred more seats than Labour? Numerous hypotheses have been put forward, often centred on the ideas of leadership, economic competence and attracting the ‘aspirational’ voters of ‘middle England’. If that analysis is correct, it tells Labour something very important about the ground their next leader will need to fight on, and indeed who that leader should be. But is this the whole picture?
This is where the opinion polls can in fact tell us something important. There has certainly been much discussion since the election results were declared about weaknesses in their survey design, but in itself that does not seem sufficient to explain the huge disparity between what actually happened at the polling stations (Tories ahead of Labour by 6.5%) and what the polls showed (essentially a tie). I argue here that a big part of the reason for this disparity is what I term the ‘lethargic Labour’ effect, i.e. the differential tendency of Labour supporters to stay at home compared to Tory supporters. ‘Lethargic’ is a term I choose carefully for its association with apathy and general passivity, and it describes a factor which I believe has huge implications for political strategy in the years ahead.
To understand this, it is instructive to look to the exit poll, which was conducted at polling stations with people who had actually voted. This was much more accurate than the other polls, including those conducted during Election Day over the telephone or online, and showed a much lower Labour share of the vote. A dominant explanation for this disparity is that there was a significant difference between the number of those who declared that they had voted Labour, or said that they would vote Labour, and the number who actually did.
This ‘lethargic Labour’ effect is quite different from the so-called ‘shy Tory’ effect which was advanced as part of the explanation for the polling meltdown of 1992, when the Conservatives in that year’s General Election were similarly under-estimated in the opinion polls. The ‘shy Tory’ effect is the idea that Tories in particular were shy of revealing their voting intention to pollsters. Yet in 2015 we would expect, if this were a real effect, to have seen it displayed in under-performance by the Tories in telephone polls compared to the relatively more anonymous setting of online polls. There is no such evidence, with, if anything, the reverse being the case for much of the polling cycle.
I am not proposing that the idea of ‘lethargic Labour’ supporters offers the whole explanation for the Tory victory. There is also a historically well-established late swing to incumbents, which the raw polls cannot capture but which is sometimes built into poll-based forecasting models, and which can account for some of the differential; and there is additionally late tactical switching to consider, where an elector, face to face with an actual ballot paper, casts a vote to hinder a least preferred candidate.
Interestingly, the betting markets significantly out-performed the polls and also sophisticated modelling based on those polls which allowed for late swing, but they beat the latter somewhat less comprehensively, at least at constituency level. At national aggregated level, the betting markets beat both very convincingly, though the swing-adjusted polls performed rather better than the published polls.
So what does this tell us? It suggests that there was indeed a late swing to the Tories, as well as probably a late tactical swing, both of which were picked up in the betting markets in advance of the actual poll. But the scale of the victory (at least compared to general expectations) was not fully anticipated by any established forecasting methodology. This suggests that there was an extra variable, which was not properly factored in by any forecasting methodology. This extra variable, I suggest, is the ‘lethargic Labour’ supporters, who existed in far greater numbers than was generally supposed.
To the extent that this explanation of the Tory majority prevails, it has profound implications for the strategy of the Labour Party over the next few years in seeking to win office.
It tells us that if Labour are to win the next election, a strategy will have to be devised which motivates their own supporters to actually turn out and vote. In other words, a strategy must be devised which converts these ‘lapsed Labour’ voters, as I term them, into active Labour voters, which inspires the faithful to get out of their armchairs and into the polling pews. If they can’t construct an effective strategy to do that, it doesn’t really matter how effective their leader is, how economically competent they are seen to be, or how well they appeal to the ‘aspirational’ voter. It is very unlikely that Labour will be able to win.
In summary, the Labour Party will need to motivate their more ‘lethargic’ supporters to actually show that support in the ballot box, and will need to convert their supporters from being ‘lapsed’ voters into actual voters. If they can do that, the result of the next election is wide open.
If the opinion polls had proved accurate, we would have been woken up on the morning of May 8, 2015, to a House of Commons in which the Labour Party had quite a few more seats than the Conservatives, and by the end of the day the country would have had a new Prime Minister called Ed Miliband. This didn’t happen. Instead the Conservative Party was returned with almost 100 more seats than Labour and a narrow majority in the Commons. So what went wrong? Why did the polls get it so wrong?
This is not a new question. The polls were woefully inaccurate in the 1992 General Election, predicting a Labour victory, only for John Major’s Conservatives to win by a clear seven percentage points. While they had performed a bit better since, history repeated itself this year.
So what is the problem and can it be fixed? A big issue, I believe, is the methodology used. Pollsters simply do not make any effort to duplicate the real polling experience. Even as Election Day approaches, they very rarely identify to those whom they survey who the candidates are, instead simply prompting with party labels. This tends to miss a lot of late tactical vote switching. Moreover, the filter they use to determine who will actually vote, as opposed to merely say they will vote, is clearly faulty, as can be seen if we compare actual voter turnout figures with those projected in the polling numbers. Almost invariably, they over-estimate how many of those who say they will vote do actually vote. Finally, the raw polls make no allowance for what past experience teaches us about what happens when people actually make the cross on the ballot paper compared to their stated intention. We know that there tends to be a late swing to the incumbents in the privacy of the polling booth. For this reason, it is wise to adjust the raw polls for this late swing.
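Adjusting raw polls for a known late swing, as suggested above, can be sketched in a few lines. The following is purely illustrative: the vote shares and the 1.5-point swing figure are made-up numbers, not real 2015 polling data.

```python
# Illustrative sketch only: shift a historical average late swing to the
# incumbent party, taken evenly from the other parties' shares.

def adjust_for_late_swing(poll, incumbent, swing=1.5):
    """Move `swing` points to the incumbent, deducted evenly from the rest."""
    adjusted = dict(poll)
    others = [p for p in poll if p != incumbent]
    adjusted[incumbent] += swing
    for p in others:
        adjusted[p] -= swing / len(others)
    return adjusted

raw = {"Conservative": 34.0, "Labour": 34.0, "Other": 32.0}
adjusted = adjust_for_late_swing(raw, "Conservative")
print(adjusted)  # Conservative 35.5, Labour 33.25, Other 31.25
```

The shares still total 100 after the adjustment; the point is simply that a tie in the raw numbers is no longer a tie once the historical booth swing is applied.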
Of all these factors, which was the main cause of the polling meltdown? For the answer, I think we need only look to the exit poll, which was conducted at polling stations with people who had actually voted. This exit poll, as in 2010, was quite accurate, while similar exit-style polls conducted during polling day over the telephone or online, with those who declared they had voted or were going to vote, failed pretty much as spectacularly as the other final polls. The explanation for this difference can, I believe, be traced to the significant difference between the number of those who declare they have voted or that they will vote and those who actually do vote. If this difference works particularly to the detriment of one party compared to another, then that party will under-perform in the actual vote tally relative to the voting intentions declared on the telephone or online. In this case, it seems a very reasonable hypothesis that rather more of those who declared they were voting Labour failed to actually turn up at the polling station than was the case with declared Conservatives. Add to that late tactical switching and the well-established late swing in the polling booth to incumbents, and we have, I believe, a large part of the answer.
Interestingly, those who invested their own money in forecasting the outcome performed a lot better in predicting what would happen than did the pollsters. The betting markets had the Conservatives well ahead in the number of seats they would win right through the campaign and were unmoved in this belief throughout. Polls went up, polls went down, but the betting markets had made their mind up. The Tories, they were convinced, were going to win significantly more seats than Labour.
I have interrogated huge data sets of polls and betting markets over many, many elections stretching back years and this is part of a well-established pattern. Basically, when the polls tell you one thing, and the betting markets tell you another, follow the money. Even if the markets do not get it spot on every time, they will usually get it a lot closer than the polls.
So what can we learn going forward? If we want to predict the outcome of the next election, the first thing we need to do is to accept the weaknesses in the current methodologies of the pollsters, and seek to correct them, even if it proves a bit more costly. With a limited budget, it is better to produce fewer polls of higher quality than a plethora of polls of lower quality. Then adjust for known biases. Or else, just look at what the betting is saying. It’s been getting it right since 1868, before polls were even invented, and continues to do a pretty good job.
Updated June 1, 2015. This was the most polled election in British history, and most projections based on the polls suggested that Labour would finish with the most seats in the House of Commons. But there is another way to predict elections: by looking at the bets made by people gambling on them. The betting markets were saying for months that the Conservatives would win a lot more seats than Labour at the election of May 2015. So where should we be looking for our best estimate of what is actually going to happen in an election, to the polls or to the markets?

It’s a question that we have been considering actively in the UK for nearly 30 years. We can trace it to July 4, 1985, for that is the day that the political betting markets finally came of age in this country. A by-election was taking place in the constituency of Brecon and Radnor, a semi-rural corner of Wales, and at the time the key players, according to both the betting markets and the opinion polls, were the Labour and Liberal candidates. Ladbrokes were making the Liberal the odds-on favourite. But on the very morning of the election a poll by MORI gave the Labour candidate a commanding 18% lead. Meanwhile, down at your local office of Ladbrokes, the Liberal stubbornly persisted as the solid odds-on favourite. So we had the bookmaker saying black and the pollster white, or more strictly yellow and red. And who won? The Liberal, of course, along with anyone who ignored the pollster and followed the money.

Since then, the betting markets have called it correctly in every single UK general election. While this may be a surprise to many, it will be much less so to those who had followed the history of political betting markets in the US, which correctly predicted (according to a famous study) almost every single US Presidential election between 1868 and 1940.
In only one year, 1916, did the candidate favoured in the betting a month before the election end up losing, and that in a very tight race; even then, the market got it right on the day. The power of the betting markets to assimilate the collective knowledge and wisdom of those willing to back their judgement with money has only increased in recent years as the volume of money wagered has risen dramatically, with the betting exchanges alone seeing tens of millions of pounds traded on a single election. Indeed, in 2004 one betting exchange hit the jackpot when its market favourite won every single state in that year’s election. This is like a tipster calling the winner of 50 football matches in a row simply by naming the favourite. The power of the markets has been repeated in every Presidential election since.

Buoyed by the track record of the markets in forecasting UK elections, and by that 2004 prediction miracle, I was sufficiently confident, when asked by The Economist over two weeks before the 2005 UK General Election, to call the winner and the seat majority. My prediction of a 60-seat majority for the Labour Party, repeated in an interview on the BBC Today programme, was challenged in a BBC World Service debate by a leading pollster, who wanted to bet me that his figure of a Labour majority of over 100 was a better estimate. I declined the bet and saved him some money. The Labour majority was 66 seats.

The assumption here is that the collective wisdom of many people is greater than the conclusions of a few. Those myriad people feed in the best information and analysis they can because their own financial rewards depend directly upon this. And it really is a case of ‘follow the money’, because those who know the most, and are best able to process the available information, tend to bet the most.
Moreover, the lower the transaction costs (the betting public do not pay tax on their bets in the UK) and the information costs (information has never been in more plentiful supply, thanks to the Internet), the more efficient we might expect betting markets to become in translating information today into forecasts of tomorrow. For these reasons, modern betting markets are likely to provide better forecasts than ever before. This is not to say that betting markets are always exactly right and pollsters always hopelessly wrong, but when they diverge, the overwhelming weight of evidence suggests that it is to the betting markets that we should turn. The same happened in 2010, when the betting markets for weeks were strongly predicting a hung parliament, while the polls swung wildly, at one point putting the Liberal Democrats in the lead and at another putting the Tories a whole twelve points up on election day.
Indeed, in an event as big and recent as the 2012 US presidential election, no less a name than Gallup called it for Mitt Romney, and the national polling average had the candidates essentially tied. Meanwhile, Barack Obama was very short odds-on to win.
Last year’s Scottish referendum is another example. While the polls had it very tight, with more than one poll calling it for independence, the betting markets were always pointing solidly to a No. The mismatch between the polls and the result echoed the 1995 Quebec separation referendum in Canada. There the final polling showed ‘Yes to separation’ with a 6% lead. In the event, ‘No to separation’ won by 1%. We happen to know that one very large trader in the Scottish referendum markets had this, among other things, very much in mind when he placed his £900,000 to win a net £193,000. The point is that people who bet in significant sums on an election outcome will usually have access to all the polling evidence, and their actions take into account past experience of how good different pollsters are, what tends to happen to those who are undecided when they actually vote, and even sophisticated consideration of what might drive the agenda between the dates of the latest polling surveys and polling day itself. All of this is captured in the markets but not in the polls.

To some this is magic. For example, I was at a conference in the US when an American delegate, totally dumbfounded that we are allowed to bet on this type of thing in the UK, and that we would anyway be mad enough to do so, asked me who would win the Greek election. I only needed to spend 30 seconds on my iPhone to tell her that Syriza were a sure thing. She couldn’t believe a market could tell me that. The same thing happened when I announced two weeks before the recent election in Israel that Netanyahu was already past the post. The polls pointed the other way, but the global prediction markets had Benjamin Netanyahu as a clear and strong favourite to win. On claiming victory, he declared that he had won against the odds. Not so. He had in fact won against the polls. To be fair to the opinion polls, they were onside in the Greek election, as they were in the French and Australian elections.
That is good. The real question, though, is whom to believe when they diverge. In those cases, there is very solid evidence, derived from the interrogation of huge data sets of polls and betting trades going back many years, much of which I have undertaken myself, that the markets prevail. To be still fairer to the pollsters, they are not claiming to be producing a forecast. They are measuring a snapshot of opinion, though we do have to be careful about this ‘snapshot defence’, as I term it, as sometimes it can be used as cover for a poor methodology. In any case, those inhabiting the betting markets are certainly trying to produce a forecast, so we would to that extent hope that they would be better at it. Moreover, the polls are used by those trading the markets to improve their forecasts, so they are potentially a valuable input. But they are only one input. Those betting in the markets have access to so much other information as well, including informed political analysis, statistical modelling, canvass returns, and so on.

I say that the polls are potentially a valuable input. The most recent election in the UK, on May 7, 2015, demonstrated that this is certainly not always the case. In that election, the polls, even on the day, showed it neck and neck in vote share, consistent with the Labour Party winning significantly more seats. The betting markets, meanwhile, always had the Conservatives well ahead in the seats tally, and were unmoved even by last-minute polls indicating a late swing to Labour. Take a comparison of the polls and betting markets three days before polling. The polls were indicating most seats for Labour, while the betting markets had the Conservatives as short as 6 to 1 on favourites to win most seats, i.e. a stake of £6 to win £1. That’s confidence. In other words, those who put their money on the line were pretty sure what was going to happen all along. Since then we have had the ‘gay marriage’ referendum in Ireland.
The polls were pointing to a decisive 70%-30% win for YES. The betting markets had the line at 60-40. In the event, YES won by 62-38. In conclusion, there is a growing belief that betting markets will become more than just a major part of our future. Properly used, they will, more importantly, be able to tell us what that future is likely to be. We seem therefore to have created, almost by accident, a ‘high-tech’ crystal ball that taps into the accumulated expertise of mankind and makes it available to all. In this sense, the market is like one mind that combines the collective wisdom of everybody. In this brave new world of prediction markets, it seems only sensible to make the most of it.
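For readers unused to betting odds, converting a quoted price such as the 6 to 1 on mentioned earlier into an implied probability takes one line of arithmetic. A minimal sketch, which ignores the bookmaker’s margin (the ‘overround’), so real quoted prices imply slightly inflated probabilities:

```python
def implied_probability(stake, win):
    """Implied chance behind odds of 'stake to win': stake / (stake + win)."""
    return stake / (stake + win)

# 6 to 1 on, i.e. stake 6 pounds to win 1: roughly an 86% chance
print(round(implied_probability(6, 1), 3))  # 0.857

# A 60-40 'line' corresponds to odds-on of 3 to 2: a 60% chance
print(round(implied_probability(3, 2), 2))  # 0.6
```

This is the sense in which a 6-to-1-on favourite represents near-certainty in the market, whatever the polls are saying.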
Follow on Twitter: @leightonvw
If you add up 1 and 2, what do you get? The answer is 3. Ok. Let’s go one step further. What if you add up 1 and 2 and 3? What do you get now? Now the answer is 6. Now 1 plus 2 plus 3 plus 4. That sums to 10. Now what if I do this for ever, in other words add up all the natural numbers right to infinity? What do I get? Most people say it is infinity. Mathematicians often say that there is no sum because technically you can’t sum a ‘divergent series’, as opposed to a series which converges to a number (such as 1+1/2+1/4+1/8+…, which converges to 2).
But let’s be ambitious and see where we get.
Let’s start simple and add up the following series:
1-1+1-1+1-1+1-1+1-1+… to infinity.
What is this?
If you stop at an odd step in the series, such as the first or third or fifth step, the series sums to 1. But if you stop at an even step, say the second or fourth or sixth, the series sums to 0. If we treat both stopping points as equally likely, it seems natural to take the average of 1 and 0, which is 0.5, as the value of this series.
For those who aren’t convinced by the obvious, however, we can show it a little more rigorously like this:
Let S = 1-1+1-1+1-1 …….
So, 1-S = 1 – (1-1+1-1+1-1 …) = 1-1+1-1+1-1…
So, 1-S = S
So, 2S=1
Therefore, S = ½.
We can also show it by the method of averaging partial sums, which I’ve added in an appendix to this post, as the third method.
So there are three different ways to demonstrate that the series 1-1+1-1+1-1+… equals 0.5.
Now that we have established this, the task of calculating the solution to:
1+2+3+4+5… becomes quite straightforward.
So we have established that 1-1+1-1+1-1+… = ½
We’ll call this series S1.
But what if we want to calculate S2, which is the series 1-2+3-4 + ….?
The way to do this is to add it to itself, to get 2.S2
1-2+3-4 +5- … + (1-2+3-4+5…)
The easiest way to do this is to move the second series one step along, which is fine as it is an infinite series. Then start with the 1 and add up each pair of the remaining terms.
So we get:
1 + (-2+1) + (3-2) + (-4+3) + (5-4) + … = 1-1+1-1+1 ………….
But we have seen this series before. It is S1, and is equal to ½.
So, 2.S2 = ½
Therefore, S2 = ¼
Now what we are trying to sum is 1+2+3+4+5+6+…….
Let us call this S.
So, S – S2 = 1+2+3+4+5+6+… – (1-2+3-4+5-6…)
So, S-S2 = 0+4+0+8+0+12+…
This series is identical to: 4+8+12+16+20+24+…
This is 4 x (1+2+3+4+5+6+…)
In other words, S-S2 = 4S
We know already that S2 = ¼
Therefore, S - 1/4 = 4S
So, 3S = -1/4
S = -1/12
And that is the proof that the sum of all the natural numbers up to infinity equals -1/12.
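The whole chain of manipulations above can be written compactly. To be clear, these steps assign regularised values to divergent series rather than sums in the ordinary convergent sense:

```latex
\begin{aligned}
S_1 &= 1 - 1 + 1 - 1 + \cdots = \tfrac{1}{2},\\[4pt]
2S_2 &= (1 - 2 + 3 - 4 + \cdots) + (1 - 2 + 3 - 4 + \cdots)\\
     &= 1 + (-2+1) + (3-2) + (-4+3) + \cdots = S_1 = \tfrac{1}{2}
     \quad\Rightarrow\quad S_2 = \tfrac{1}{4},\\[4pt]
S - S_2 &= (1+2+3+\cdots) - (1-2+3-4+\cdots) = 4 + 8 + 12 + \cdots = 4S\\
     &\Rightarrow\quad 3S = -S_2 = -\tfrac{1}{4}
     \quad\Rightarrow\quad S = -\tfrac{1}{12}.
\end{aligned}
```

Each line restates one of the steps worked through above, with the second series shifted one place along before the terms are paired, exactly as in the text.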
Who said it’s infinity? Who said you can’t sum divergent series? It’s got a solution and it’s the only meaningful one. Add up all the positive integers up to infinity and you get a negative number, -1/12.
There is less mathematical sleight of hand here than it might seem. The manipulations go beyond ordinary convergent arithmetic, but the value is no accident: -1/12 is the same answer produced by rigorous techniques such as zeta-function regularisation, and we know from modern physics (for example, the Casimir effect and string theory) that it works in explaining the real world.
My next question is to ask you whether infinity is odd or even. What happens if I press the number 1 after 1 minute, then zero after a further 30 seconds, then 1 again after a further 15 seconds, then zero after a further 7.5 seconds, and so on? What number am I pressing at the precise end of two minutes? Am I pressing 1 or zero, or both simultaneously, or neither? Imagine this was a magic light bulb that never blew (a version of the ‘Thomson’s lamp’ puzzle). At the end of precisely 2 minutes, would it be on or off, or both on and off? Or would it be -1/12 on and +1/12 off, or vice-versa?
Makes you think!
Thanks, by the way, to the excellent people on the Numberphile channel on YouTube, who inspired my interest in this.
References:
http://www.nottingham.ac.uk/~ppzap4/response.html
Appendix:
In the series, 1-1+1-1+1-1 …
The first term = 1.
The sum of the first and second terms = 1-1 = 0
The sum of the first, second and third terms = 1-1+1=1
The sum of the first, second, third and four terms = 1-1+1-1 =0
And so on.
So the series of these sums (known as partial sums) = 1,0,1,0 …
Now, averaging these partial sums gives the following series:
1 divided by 1, 1+0 divided by 2, 1+0+1 divided by 3, 1+0+1+0 divided by 4, etc.
This works out as:
1, ½, 2/3, ½, 3/5, ½, 4/7 …
If we continue this series, we end up with ½: the even-numbered terms are always exactly ½, while the odd-numbered terms (1, 2/3, 3/5, 4/7, …) fall ever closer to ½, leaving us, in the limit, just with ½.
This method of averaging partial sums to derive the total sum is well-established and can be used, for example, to calculate the sum of 1+1/2+1/4+1/8+ …
This can be shown to converge on 2 using the same method.
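The averaging-of-partial-sums method in this appendix (Cesàro summation, to give it its textbook name) is easy to check numerically. A short sketch using exact fractions; note that this method alone tames Grandi’s series and agrees with ordinary sums for convergent series, though it is not strong enough on its own to sum 1+2+3+…:

```python
from fractions import Fraction

def cesaro_averages(terms):
    """Return the running averages of the partial sums of `terms`."""
    partial = Fraction(0)   # current partial sum
    total = Fraction(0)     # running total of all partial sums so far
    averages = []
    for n, t in enumerate(terms, start=1):
        partial += t
        total += partial
        averages.append(total / n)
    return averages

# Grandi's series 1 - 1 + 1 - 1 + ...: the averages settle on 1/2
grandi = cesaro_averages([(-1) ** k for k in range(1000)])
print(grandi[-1])            # 1/2

# The convergent series 1 + 1/2 + 1/4 + ...: the averages head towards 2
geometric = cesaro_averages([Fraction(1, 2 ** k) for k in range(60)])
print(float(geometric[-1]))  # about 1.97, creeping up on 2
```

After 1,000 terms of Grandi’s series the average is exactly ½, matching the appendix; for the geometric series the averages approach 2, confirming that the method reproduces ordinary sums where they exist.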
When two parties to a discussion differ, it is useful, in seeking to resolve the ‘argument’, to determine from where the differences arise, and whether these differences are resolvable in principle. The reason for the difference might be that the parties have access to different evidence, or else interpret that evidence differently. Another possibility is that one of the parties is applying a different or better process of non-evidence-based reasoning than the other. Finally, the differences might arise from each party adopting a different axiomatic starting point.

So, for example, if two parties differ in a discussion on euthanasia or abortion, or even the minimum wage, with one party strongly in favour of one side of the issue and the other strongly opposed, it is critical to establish whether this difference is evidence-based, reason-based, or derived from axiomatic differences. We are assuming here that the stance adopted by each party on an issue is genuinely held, and is not part of a strategy designed to advance some other objective or interest.

The first thing is to establish whether any amount of evidence could in principle change the mind of an advocate of a position. If not, that leads us to ask where the viewpoint comes from. Is it purely reason-based, in which case (in the sense I use the term) it should in principle be either provable or demonstrably obvious to any rational person who holds a different view, without the need to appeal to evidence? Or is the viewpoint held axiomatically, so that it is not refutable by appeal to reason or evidence? If the different viewpoints are held axiomatically by the parties, then the discussion should fall silent. If the differences are not held axiomatically, both parties should in principle be able to converge on agreement.
So the question reduces to establishing whether the differences arise from divergences in reasoning, which should be resolvable in principle, or else from differences in access to evidence, or in the proper evaluation of that evidence. Again, the latter should be resolvable in principle.

In some cases, a viewpoint is held almost, but not completely, axiomatically. It is therefore in principle open to an appeal to evidence and/or reason, but the bar may be set so high that the viewpoint is in practice axiomatically held. If only one side to the difference holds a view axiomatically, or as close as makes it indistinguishable in practical terms, then the views could in principle converge by appeal to reason and evidence, but only converge to one side, i.e. the side which is holding the view axiomatically. This creates a situation in which it is in the interest of a party seeking to change the view of the other party to conceal that their own viewpoint is held axiomatically, and to represent it as reason-based or evidence-based, but only where the other party is not known to also hold their divergent view axiomatically.

This leads to a game-theoretic framework in which the optimal strategy, where both parties know that the other holds a view axiomatically, is to depart the game. In all other cases, the optimal strategy depends on how much each party knows about the drivers of the viewpoint of the other party, and on the estimated marginal costs and benefits of continuing the game in an uncertain environment. It is critical in attempting to resolve such differences of viewpoint, therefore, to determine whence they arise, in order to determine the next step. If they are irresolvable in principle, it is important to establish that at the outset. If they are resolvable in principle, setting this framework out at the beginning will help identify the cause of the differences, and thus help to resolve them.
What applies to two parties is generalizable to any number, though the game-theoretic framework in any particular state of the game may be more complex. In all cases, transparency in establishing whether each party’s viewpoint is axiomatically held, reason-based or evidence-based, is the welfare-superior environment, and should be aimed for by an independent facilitator at the start of the game. Addressing differences in this way helps also to distinguish whether views are being proposed out of conviction, or whether they are being advanced out of self-interest or as part of a strategy designed to achieve some other objective or interest.
Update at: https://leightonvw.com/2015/08/19/who-do-views-differ-is-there-any-reason-for-it/
December 21st, 2018 is the shortest day of the year, at least in the UK and the rest of the Northern Hemisphere of our planet.
So does that mean that the mornings should start to get lighter after today (earlier sunrise), as well as the evenings (later sunset)? Not so, and there’s a simple reason for that. The length of a solar day, i.e. the period of time between the solar noon (the time when the sun is at its highest elevation in the sky) on one day and the next, is not 24 hours in December, but about 30 seconds longer than that.
For this reason, solar time slips about 30 seconds further behind the clock each day through December, so that by the end of the month real solar time is lagging roughly 15 minutes behind a standard 24-hour clock.
Let’s say just for a moment that the hours of sunlight (the time difference between sunrise and sunset) stayed constant through December, and that the sun set at 3.50pm by the clock on one day. Since the next solar day is about 30 seconds longer than the 24 hours the clock measures, the sun would not set the next day until 3.50pm and 30 seconds by the clock. After ten days the sun would not set till 3.55pm. So the sunset would actually get later through the whole of December. For the same reason, the sunrise would also get later through the whole of December.
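The arithmetic of that drift is easy to tabulate. A quick sketch, taking the rough 30-seconds-a-day figure at face value:

```python
# Cumulative clock drift if each December solar day runs ~30 s over 24 hours
DRIFT_PER_DAY = 30  # seconds (rough December figure)

for days in (1, 10, 31):
    total = days * DRIFT_PER_DAY
    print(f"after {days} days: {total // 60} min {total % 60} s")
# after 1 days: 0 min 30 s
# after 10 days: 5 min 0 s
# after 31 days: 15 min 30 s
```

Ten days of drift gives the five minutes behind the 3.55pm sunset in the example, and a full month gives the roughly 15-minute gap between clock and sun.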
In fact, the sunset doesn’t get progressively later through all of December, because the hours of sunlight shorten for roughly the first three weeks of the month. Taken on its own, this shortening would make the sun set earlier and rise later.
These two things (the shortening hours of sunlight and the extended solar day) pull in opposite directions. The overall effect is that the sun starts to set later from a week or so before the shortest day, but doesn’t start to rise earlier until a week or so after it.
So the old adage that the evenings will only start to draw out, and the mornings to get lighter, after the end of the third week of December or so is false. The evenings have already been drawing out for several days before the shortest day, and the mornings will continue to grow darker for several days after it.
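The interplay of the two effects can be captured in a toy model. The numbers here are assumptions chosen purely for illustration (solar noon drifting 30 seconds later each day, daylight with a quadratic minimum at the solstice), not real almanac data:

```python
# Day 0 is the solstice; times are in seconds relative to clock noon.
NOON_DRIFT = 30.0       # solar noon drifts this much later each day (assumed)
DAYLIGHT_CURVE = 5.0    # curvature of daylight length near its minimum (assumed)

def solar_noon(d):
    return NOON_DRIFT * d

def extra_daylight(d):
    return DAYLIGHT_CURVE * d * d   # minimum (zero) at the solstice

days = range(-21, 22)
earliest_sunset = min(days, key=lambda d: solar_noon(d) + extra_daylight(d) / 2)
latest_sunrise = max(days, key=lambda d: solar_noon(d) - extra_daylight(d) / 2)
print(earliest_sunset, latest_sunrise)  # -6 6
```

With these assumed numbers, the earliest sunset falls about six days before the solstice and the latest sunrise about six days after it, reproducing the asymmetry described above: the drifting solar noon shifts both the sunset minimum and the sunrise maximum off the shortest day, in opposite directions.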
There’s one other curious thing. Solar noon coincides with noon on our 24-hour clocks (on the Greenwich meridian) just four times a year, and one of those days is Christmas Day! So set your clock to noon on December 25th, look up to the sky, and you will see the sun at its highest point. Just perfect!
Links
http://www.timeanddate.com/astronomy/uk/nottingham
http://www.bbc.co.uk/news/magazine-30549149
http://www.rmg.co.uk/explore/astronomy-and-time/time-facts/the-equation-of-time
http://en.wikipedia.org/wiki/Solar_time
http://earthsky.org/earth/everything-you-need-to-know-december-solstice
Bayes’ theorem concerns how we formulate beliefs about the world when we encounter new data or information. The original presentation of Rev. Thomas Bayes’ work, ‘An Essay toward Solving a Problem in the Doctrine of Chances’, was given in 1763, after Bayes’ death, to the Royal Society, by Mr. Richard Price. In framing Bayes’ work, Price gave the example of a person who emerges into the world and sees the sun rise for the first time. At first, he does not know whether this is typical or unusual, or even a one-off event. However, each day that he sees the sun rise again, his confidence increases that it is a permanent feature of nature. Gradually, through a process of statistical inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches 100 per cent. The Bayesian viewpoint is that we learn about the universe and everything in it through approximation, getting closer and closer to the truth as we gather more evidence. The Bayesian view of the world thus sees rationality probabilistically.
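Price’s sunrise example was later given a closed form by Laplace, in what is now called the rule of succession: after seeing the sun rise on n consecutive days with no failures, the probability assigned to a sunrise tomorrow is (n+1)/(n+2). A quick sketch of that updating process:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: P(success next time) = (successes + 1) / (trials + 2)."""
    return Fraction(successes + 1, trials + 2)

# Confidence in tomorrow's sunrise after n unbroken sunrises:
# the probability climbs steadily towards 1 as the evidence accumulates
for n in (1, 10, 100, 10_000):
    print(n, float(rule_of_succession(n, n)))
```

This is exactly the gradual approach to certainty described above: never quite reaching 100 per cent, but getting arbitrarily close as the mornings mount up.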
As such, Bayes’ perspective on cause and effect can be contrasted with that of David Hume, the logic of whose argument on this issue is contained in ‘An Enquiry Concerning Human Understanding’. According to Hume, we cannot justify our assumptions about the future based on past experience unless there is a law that the future will always resemble the past. No such law exists. Therefore, we have no fundamentally rational support for believing in causation. Bayes instead applies and formalizes the laws of probability to the science of reason, to the issue of cause and effect.
I propose that we apply the same Bayesian perspective to Immanuel Kant’s duty-based ‘Categorical Imperative.’ This can be summarised in the form: ‘Act only according to that maxim which you could simultaneously will to be a universal law.’ On this basis, to lie or to break a promise doesn’t work as a practical imperative, because if everyone lied or broke their promises, then the very concept of telling the truth or keeping one’s promises would be turned on its head. A society that worked according to the universal principle of lying or promise-breaking would be unworkable. Kant thus argues that we have a perfect duty not to lie or break our promises, or indeed do anything else that we could not justify being turned into a universal law.
The problem with this approach, in many eyes, is that it is too restrictive. If a crazed gunman demands that you reveal which way his potential victim has fled, you must not lie to save the victim, because lying could not be universalised as a rule of behaviour.
I propose that the application of a justification argument can solve the problem. This argument from justification is that you have no duty to respond to any request which is posed without reasonable appeal to duty. So, in this example, the gunman has no reasonable appeal to duty from you, so you can make an exception to the general rule.
Why is this consistent with the practical implications of Kant’s ‘universal law’ maxim? It’s an issue of probability. In the great majority of situations, you have no defence based on the argument from justification for lying or breaking a promise. So the universal expectation is that truth-telling and promise-keeping are overwhelmingly probable. The more often this turns out to be true in practice, the closer this approach converges on Kant’s absolute imperative by a process of simple Bayesian updating.
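That convergence can be made concrete with a conjugate Beta-Binomial update. The counts below are made up purely for illustration:

```python
# Posterior expectation that a randomly encountered promise will be kept,
# starting from a uniform Beta(1, 1) prior and updating on observed conduct.
alpha, beta = 1.0, 1.0      # uniform prior: no opinion either way
kept, broken = 990, 10      # hypothetical observations of promises

alpha += kept               # each kept promise adds to alpha
beta += broken              # each broken promise adds to beta
expectation = alpha / (alpha + beta)
print(round(expectation, 3))  # 0.989
```

As the observed ratio of kept to broken promises grows, the posterior expectation presses ever closer to 1, which is the probabilistic shadow of Kant’s absolute imperative.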
In a world in which ethics is indeed based on duty, it is this broader conception of duty which, I propose, should inform our actions.

