
Can we predict the Nobel Prize winners? You can bet on it.

When the Belarusian writer Svetlana Alexievich won this year’s Nobel Prize for Literature, it was not unexpected. She was not only the clear favourite with the bookmakers but had traded as one of the leaders in the betting in the previous two years.

While firms lay odds on the literature and peace prizes, there are no betting lines available for the Nobel Prizes in physics, chemistry and medicine. Instead, there is an organised platform which seeks to predict winners based on research citations.

Betting: from Hollywood to the Vatican

This has been a very good year for favourites in awards contests. The favourite in the betting won almost every single one of the 24 Oscar categories at this year’s Academy Awards. This domination of the favourites has been documented in politics for nearly 150 years, ever since hot favourite Ulysses S Grant strolled to the US presidency in 1868. The favourite in the betting has won almost every single presidential election held since.

But the Nobel Prize deliberations are quite different from a political election or even a Hollywood awards ceremony. Instead, they are a little more like a papal conclave, where the deliberations are secretive and there is no defined shortlist of nominees. Betting on papal conclaves has been formally recorded from as early as 1503. In that year, the brokers in the Roman banking houses who offered odds on who would be elected Pope made Cardinal Francesco Piccolomini the clear favourite. It was no surprise, therefore, when he went on to become Pope Pius III.

Since then the betting markets have had a mixed record of success in predicting the winner. For example, Cardinal Ratzinger was a warm favourite to be elected pope in 2005, and duly became Pope Benedict. The election of Cardinal Bergoglio as Pope Francis, on the other hand, came as more of a surprise to the markets.

Betting on processes that take place behind closed doors also happens outside the church. In 2009, a crowdsourced fantasy league (or “prediction market”) launched an attempt to peer behind the doors of the US Supreme Court by predicting its deliberations – a market still going strong today. The Supreme Court might be particularly suitable for a prediction market, in that not only is there a relatively small number of decision makers, but the universe of possible outcomes is also very limited. Predicting the Nobel Prize announcements might be expected to be somewhat more difficult.

So how do the betting companies compile their odds when it comes to the Nobels? Ladbrokes has said that, in the absence of hard information, the best approach is to consult literary contacts and follow relevant online discussions. All this despite the fact that the Nobel in literature only takes about £50,000 in bets, compared with a couple of million for a big football match.

Patchy record

How well have the markets performed to date? For the Sveriges Riksbank Prize in Economic Sciences, established in 1968 by Sweden’s central bank and considered an unofficial “Nobel”, the most ironic failure came in 2009, when the betting market offered by Ladbrokes had Eugene Fama, a pioneering exponent of the theory of efficient markets, as the solid 2 to 1 favourite. If the market was truly efficient with respect to all relevant information, we might have expected him to be well up there among the top contenders. But the prize was shared by Elinor Ostrom and Oliver Williamson, both of whom were trading as 50 to 1 longshots before the announcement. Fama did go on to share the Nobel Prize four years later.

On the other hand, Harvard University had already set up its own dedicated economics prize prediction market, which did much better than Ladbrokes by making Oliver Williamson one of the favourites. In 2010, Peter Diamond shared the prize after having been listed as one of the favourites by Harvard.

Of the others in the top eight in 2010, Jean Tirole went on to win in 2014, Robert Shiller and Lars Peter Hansen in 2013. Thomas Sargent and Christopher Sims, who shared the 2011 prize, were among the favourites in the 2008 Harvard prediction market, which has since closed down.

Most of the market based predictions, however, focus on the Nobel Prizes for Literature and Peace. In 2014, French writer Patrick Modiano won the Literature Prize. Before the announcement, Modiano was trading as a reasonably well fancied joint fourth favourite. The previous year, Canadian Alice Munro was heavily backed into second favourite before claiming the prize. In 2011, Tomas Tranströmer won the Literature Prize, having been the clear favourite the previous year.

The peace prize, which is awarded by a committee of five people who are chosen by the parliament of Norway, is slightly more complicated as awards are sometimes given to organisations rather than individuals. This also makes it less satisfying for potential market players. Still, the 2014 Nobel Peace prize was shared by Malala Yousafzai and Kailash Satyarthi. Malala had actually been backed to win in the previous year.

The physics, chemistry and medicine prizes, on the other hand, have not really attracted market attention to date, probably because they are too niche for the regular player. Instead this role has been taken up by Thomson Reuters, which claims to have identified 37 Nobel Prize winners since 2002 on the basis of an analysis of scientific research citations within the Web of Science. As an interesting development, Thomson Reuters has also now established a People’s Choice Poll, more akin to the “wisdom of crowds” methodology of a prediction market. The scientific society Sigma Xi also has a prediction contest that enables people to vote for their favourite.

2015 Nobels: the verdict

This outline of the past few years is pretty much par for the course in the history of Nobel predictions: far from perfect, but not at all unimpressive. Interestingly, the market is often a better predictor of future Nobel laureates than of that year’s winner.

This year, although the market got the Literature Prize spot on, it had not predicted that the Tunisian National Dialogue Quartet would win the peace prize. So well done to those who placed a bet on “none of the above”: that option was trading as a close second favourite to Angela Merkel on the PredictIt prediction market before the announcement.

Thomson Reuters got the 2015 physics, chemistry and medicine prizes wrong. This year it also highlighted Richard Blundell, John List and Charles Manski as the leading candidates for the economics prize, making special note of the first, who also won its People’s Choice poll. There was no organised betting on economics this year. The 2015 economics Nobel actually went to Angus Deaton, currently Dwight D. Eisenhower Professor of Economics and International Affairs at Princeton (and formerly of Cambridge and Bristol universities), for his analysis of consumption, poverty and welfare.

So what will the prediction industry look like in ten years? On current trends, it will have grown up a lot. The science of forecasting and the power of prediction markets are currently growing apace. Will there ever come a time, I wonder, when we don’t need to wait for the announcement, but instead just look to the odds? Maybe we should set up a prediction market to answer that question.

Why Do Views Differ? Bad reasoning, bad evidence or bad faith?


When two parties to a discussion differ, it is useful, in seeking to resolve this, to determine from where the difference arises, and whether it is resolvable in principle.

The reason might be that the parties to the difference have access to different evidence, or else interpret that evidence differently. Another possibility is that one of the parties is applying a different or better process of reasoning than the other. Finally, differences might arise from each party adopting a different axiomatic starting point. So, for example, if two parties differ in a discussion on euthanasia or abortion, or even the minimum wage, with one party strongly in favour of one side of the issue and the other strongly opposed, it is critical to establish whether this difference is evidence-based, reason-based, or derived from axiomatic differences. We are assuming here that the stance adopted by each party on an issue is genuinely held, and is not part of a strategy designed to advance some other objective or interest.

The first thing is to establish whether any amount of evidence could in principle change the mind of an advocate of a position. If not, that leads us to ask where the viewpoint comes from. Is it purely reason-based, in which case (in the sense I use the term) it should in principle be either provable or demonstrably obvious to any rational person who holds a different view? Or is the viewpoint held axiomatically, so that it is not refutable by appeal to reason or evidence? If the different viewpoints are held axiomatically by the parties to the difference, then the discussion should fall silent.

Before falling silent, however, we should determine whether the conflict in beliefs between one person and another actually is axiomatic. The first question to ask is whether the beliefs are held by one or more parties as self-evident truths, or whether they are open to debate based on reason or evidence. In seeking to clarify this, it is instructive to determine whether the confidence of the parties to the difference could in principle or in practice be shaken if others who might be considered equally qualified to hold a view on the matter disagree. In other words, when other people’s beliefs conflict with ours, does that in any way challenge the confidence we hold in these beliefs?

Let us pose this in a slightly different way. If two parties hold conflicting views or beliefs, and each of these parties has no good reason to believe that they are the person more likely to be right, does that at least make them doubt their view, even marginally? Does it perhaps give a reason to doubt that either party is right? The more closely the answer to these questions converges on the negative, the more likely it is that the divergent views or beliefs are held axiomatically.

If the differences are not held axiomatically, both parties should in principle be able to converge on agreement. So the question reduces to establishing whether the differences arise from divergences in reasoning, which should be resolvable in principle, or else by differences in access to evidence or proper evaluation of the evidence. Again, the latter should be resolvable in principle. In some cases, a viewpoint is held almost but not completely axiomatically. It is therefore in principle open to an appeal to evidence and/or reason. The bar may be set so high, though, that the viewpoint is in practice axiomatically held.

If only one side to the difference holds a view axiomatically, or as close as to make it indistinguishable in practical terms, then the views could in principle converge by appeal to reason and evidence, but only converge to one side, i.e. the side which is holding the view axiomatically. This leads to a situation in which it is in the interest of a party seeking to change the view of the other party to conceal that their viewpoint is held axiomatically, and to represent it as reason-based or evidence-based, but only where the other party is not known to also hold their divergent view axiomatically. This leads to a game-theoretic framework where the optimal strategy, in a case where both parties know that the other party holds a view axiomatically, is to depart the game.

In all other cases, the optimal strategy depends on how much each party knows about the drivers of the other party’s viewpoint, and on the estimated marginal costs and benefits of continuing the game in an uncertain environment. It is critical, therefore, in attempting to resolve such differences of viewpoint, to establish whence they arise, in order to determine the next step. If they are irresolvable in principle, it is important to establish that at the outset. If they are resolvable in principle, setting this framework out at the beginning will help identify the cause of the differences, and thus help to resolve them. What applies to two parties is generalizable to any number, though the game-theoretic framework in any particular state of the game may be more complex.

In all cases, transparency in establishing whether each party’s viewpoint is axiomatically held, reason-based or evidence-based, is the welfare-superior environment, and should be aimed for by an independent facilitator at the start of the game. Addressing differences in this way helps also to distinguish whether views are being proposed out of conviction, or whether they are being advanced out of self-interest or as part of a strategy designed to achieve some other objective or interest.

So in seeking to derive a solution to the divergence of view or belief, we need to ascertain whether the differences are actually held axiomatically. To do this, we need to examine the source of the belief, and whether this can be dissected by appeal to reason or evidence.

To do so, we need to ask whether there are absolute ethical imperatives, which can be agreed upon by all reasonable people, or not. For example, is it reasonable to agree that people should not be treated merely as means but as ends in themselves?

Of course, people might out of self-interest choose to treat others as means, disregarding their status as human beings of equal worth, but this is different to holding that to be ethically true. The appeal to a Rawlsian ‘veil of ignorance’ argument helps here, where each person must choose whether to hold their ethical framework without knowing in advance who or where they are, poor or rich, able-bodied or disabled, male or female, free or slave.

In this context, we can ask whether there are ethical principles, views, or beliefs, that all reasonable people could agree with, or that all people could not reasonably reject.

Are there such?

Take, for example, the idea that people should be treated as ends, not means. What if we harm someone in order to prevent a greater harm to someone else? If we do all we can do to minimize harm to them in so doing, are we treating them merely as means? Suppose, for example, you need to jam someone’s foot in a mechanism, so crushing it, in order to save another person from being killed by the mechanism. You are certainly using the first person as a means to an end, but if you deliberately did the least harm to that person commensurate with saving the other, is that really treating them merely as a means, given that you paid full attention to the harmed person’s well-being in saving the other’s life?

Looked at another way, does treating people as ends and not means imply that their interests should not be sacrificed even if that creates greater overall good? In other words, is it sufficient to take each person’s interests into account or must they be fundamentally and absolutely protected? Does everyone, in other words, have an innate right to freedom, which includes independence from being constrained by another’s choice? Do we have absolute duties we owe each other arising from our equal status as persons? To what extent, in other words, are the rights and freedoms of people absolute as opposed to instrumental, and is it possible to formalize an ethical code which all reasonable people could assent to, or not reasonably reject, around this? If not, axiomatic differences remain possible. To the extent that we can, less so.

More generally, are there absolute ethical principles which hold regardless of consequences, regardless of how much benefit or harm accrues from acting upon them? This reduces to whether we conceive of morality as grounded in relations between people and the duties we owe each other, or whether it is about the relations of people to states of affairs, such as the maximizing of overall well-being, however defined. Such a definition could include happiness, knowledge, creativity, love, friendship, or else simply realisation of personal preferences or desires.

So, to summarize, I suggest that views and beliefs can be usefully classified into fundamental (or axiomatic) ethical imperatives and ethical imperatives based on reason and evidence. While reason-based and evidence-based ethical imperatives can, of course, be influenced by evidence and reason, fundamental ethical imperatives cannot.

In considering ethical imperatives which are duty-based, grounded in moral duties owed by all people to each other, as opposed to the effect upon general well-being, we can benefit from the use of a Bayesian framework.

Bayes’ theorem concerns how we formulate beliefs about the world when we encounter new data or information. The original presentation of Rev. Thomas Bayes’ work, ‘An Essay towards Solving a Problem in the Doctrine of Chances’, gave the example of a person who emerges into the world and sees the sun rise for the first time. At first, he does not know whether this is typical or unusual, or even a one-off event. However, each day that he sees the sun rise again, his confidence increases that it is a permanent feature of nature. Gradually, through a process of statistical inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches 100 per cent. The Bayesian viewpoint is that we learn about the universe and everything in it through approximation, getting closer and closer to the truth as we gather more evidence. The Bayesian view of the world thus sees rationality probabilistically.
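The sunrise example can be made concrete with Laplace’s rule of succession, which formalises exactly this kind of updating (the rule is Laplace’s later result, not part of Bayes’ original essay; the sketch below is my own illustration).

```python
# Laplace's rule of succession: starting from a uniform prior on the
# unknown chance of a sunrise, after observing n sunrises in n days the
# posterior probability that the sun rises again tomorrow is (n+1)/(n+2).
def prob_sunrise_tomorrow(days_observed: int) -> float:
    return (days_observed + 1) / (days_observed + 2)

# Confidence climbs towards 100 per cent as the evidence accumulates.
for n in (1, 10, 100, 10_000):
    print(n, round(prob_sunrise_tomorrow(n), 4))
# 1 0.6667
# 10 0.9167
# 100 0.9902
# 10000 0.9999
```

The probability never quite reaches 1, which is the Bayesian point: certainty is approached, not attained.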

I propose that we apply the same Bayesian perspective to Immanuel Kant’s duty-based ‘Categorical Imperative.’ This can be summarised in the form: ‘Act only according to that maxim which you could simultaneously will to be a universal law.’ On this basis, to lie or to break a promise doesn’t work as a general code of conduct, because if everyone lied or broke their promises, then the very concept of telling the truth or keeping one’s promises would be turned on its head. A society that operated according to the universal principle of lying or promise-breaking would be unworkable. Kant thus argues that we have a perfect duty not to lie or break our promises, or indeed do anything else that we could not justify being turned into a universal law.

The problem with this approach, in many eyes, is that it is too restrictive. If a crazed gunman demands that you reveal which way his potential victim has fled, you must not lie to save the victim, because lying could not be universalised as a rule of behaviour.

I propose that the application of a justification argument can solve the problem. This argument from justification, which I propose, is that we have no duty to respond to anything which is posed without reasonable appeal to duty. So, in this example, the gunman has no reasonable appeal to a duty of truth-telling from us, so we can make an exception to the general rule.

In any case, we need to assess the practical implications of Kant’s ‘universal law’ maxim from a probabilistic perspective. In the great majority of situations, we have no defence based on the argument from justification for lying or breaking a promise. So the universal expectation is that truth-telling and promise-keeping are overwhelmingly probable. The more often this turns out to be true in practice, the closer this approach converges on Kant’s absolute imperative by a process of simple Bayesian updating. As such, this is the ethical default position, a default position based on appeal to rules of conduct which should be universally willable as a general rule of behaviour, or which it would be unreasonable to reject as such. Because it is rare that we need to deviate from this, the loss to the general good arising from the reduction in the credibility of truth-telling is commensurately small.
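This updating process can be sketched with a toy Beta-Binomial model (my own illustration, not Kant’s): each occasion on which truth-telling works strengthens confidence in the default rule, and a rare exception weakens it only marginally.

```python
# Toy Bayesian updating on the rate at which the default rule
# "truth-telling works for the best" holds in practice.
# A Beta(a, b) prior updated with s successes and f failures
# gives the posterior Beta(a + s, b + f).
def update(a: int, b: int, successes: int, failures: int):
    return a + successes, b + failures

def posterior_mean(a: int, b: int) -> float:
    return a / (a + b)

a, b = 1, 1                                    # uniform prior
a, b = update(a, b, successes=99, failures=1)  # 99 of 100 occasions work
print(round(posterior_mean(a, b), 3))          # 0.98
```

A single justified exception barely moves the posterior, which is the sense in which the default converges on a near-absolute imperative without being one.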

In a world in which ethics is indeed based on duty, I suggest in any case that the broader conception of duty, including the appeal to the argument from justification, should inform our actions. As long as this is clearly formulated within the universal law, i.e. tell the truth except where the person asking you to do so has no right to ask it of you, the core ethical rule of action is not weakened by lying in the crazed gunman example.

But can we use this sort of approach to arrive, in principle, at an ethical framework to which all reasonable people might subscribe?

I suggest the following.

First, that there are certain principles, most fundamentally that we owe each person a duty of respect, to be treated as an end in themselves and not as a means to an end, based on our equal status as persons. This is most clearly seen if we are asked to decide on the merit of this from behind a ‘veil of ignorance’ as to our position in the world. Secondly, we should adopt ethical codes of behaviour on the basis that their principles would be agreeable to all reasonable people, or could not reasonably be rejected by them.

As such, we have Kant’s framework of ethical imperatives, as well as T.M. Scanlon’s idea of a contract between human beings based on what we owe each other, which he summarizes as the requirement that everyone ought to follow principles that no one could reasonably reject.

As such, prohibitions on cheating, lying and breaking promises are uncontroversial as core ethical principles, both because they honour our duty to treat others as ends, not means to an end, and because a world in which cheating, lying and breaking promises are the norm is not a world in which things go for the best. So we should aim to obey these rules. Indeed, by the principle of simple Bayesian updating, we can demonstrate that on the great majority of occasions this works, and so we converge on these principles of behaviour as the default position. If occasions arise when it is clear that adherence to the principle does not work for the best, however, we may have a duty to deviate from the default position, but only on the basis of committed and sure deliberation that the exception is warranted. It is not a position to be taken lightly.

The example of the crazed gunman demanding to know where the potential victim has fled is one such example. The value of the duty to the potential victim as a unique human being, as well as the damage to overall well-being, will likely outweigh the value of the consequent reduction in the value of truth-telling, though no decision to deviate from the default position should be taken lightly. There is the additional issue of the argument from justification to consider here, i.e. that the duty to tell the truth is conflicted when the person demanding you tell the truth has no moral justification to do so.

Take another example. By crushing a person’s foot in a sophisticated explosive device, you will save the lives of fifty people. The default position is not to use another person as means, but this conflicts with the duty to protect others as well.

Derek Parfit seeks to reconcile the ethical theories derivable from Kantian deontology, Scanlon-type contractualism, and consequentialism, into a so-called ‘Triple Theory’, i.e.

An act is wrong if and only if, or just when, such acts are disallowed by some principle that is:

a. One of the principles whose being universal laws would make things go best.

b. One of the only principles whose being universal laws everyone could rationally will.

c. A principle that no one could reasonably reject.

To the duty-based approach, I would add the argument from justification, i.e. there is no duty to respond to any request which is posed without reasonable appeal to duty.

The next step is to identify actual examples of differences of view or belief or action, and determine whether we can resolve these differences through a synthesis of re-configured (by appeal to justification) Kantian deontology, contractualism and consequentialism. We could call this Adapted Kantian rule consequentialism, mediated through a Bayesian filter.

To do so, I will consider the well-known stylised examples of the Trolley Problem and the Bridge Problem. In one version of the Trolley Problem, a trolley on a rail track is heading straight for five unsuspecting people, but a switch can be thrown to divert this to hit just one person. Is it right, with no other knowledge, to divert the train? In the Bridge Problem a very heavy man can be pushed off a bridge without his knowledge to prevent a runaway locomotive from striking and killing, say, five people on the track.

Are either of these scenarios consistent with adapted Kantian rule consequentialism? In the case of the Bridge, the default position is clearly that it is wrong deliberately to take someone’s life, an extreme case of interfering with the human right to be considered an end and not a means to an end. Appeal to a consequentialist viewpoint, based on the saving of five lives for one, would seem to conflict with the idea that this sort of action would make things go best if adopted as a general rule of behaviour. A world so structured ethically would very arguably not be one in which things go best. But what if pushing the man off the bridge crushed an explosive device that would otherwise have killed a million people? The question that arises here is whether it is right to take each case on its merits.

To take each case on its merits, with a view to the actual consequences in each case, is an act-utilitarian ethical prescription. This is, however, a very damaging prescription, as it leaves the default rule very seriously, if not fatally, weakened, which in the bigger picture would conflict with the objective of making things go best.

This leaves us with a very difficult ethical dilemma. In the first case, saving the five people, it is reasonable to reject pushing the man off the bridge by appeal to the universal rule that people be treated as ends, not means. But killing one man to save a million would also mean weakening the default position, just as would judging the right to take an innocent life on a case by case basis.

The Trolley Problem is easier, because it is not a matter of deliberately killing someone, but of saving the five by diverting the train from their path. It is unfortunate that the one person is on the other track, but there is no deliberate intention to kill that person in order to save the others. In effect, though, the outcome is the same, so the question is whether the intention matters. It does if we address this in terms of willingness to make the action a universalisable code of behaviour.

In both cases, though, we need to consider the value of the default position, and how much damage is incurred to making things go for the best if we act in such a way as to weaken or even destroy this default position. It is this consideration which, it seems to me, lies at the heart of synthesising deontological (duty-based) and contractualist criteria of behaviour with those based on pure consequentialism, such as maximising the greatest happiness of the greatest number. In particular, damage to the default position works to weaken the consequentialist outcome: the more damage is done to it, the more damage is done, on its own terms, to the consequentialist measurement of outcome.

It is this default-based ethical calculus which to me synthesises the deontological, contractualist and consequentialist ethical frameworks; differences of opinion in the proper ethical judgement of good and bad behaviour derive, it seems to me, from differences in the value attached to the maintenance of this default.

So belief in the absolute value of the default position would mean that it is never right to push the man off the bridge, however many millions are saved.

The value of the default position might be set by a calculus weighing the overall loss to the sum of well-being from weakening confidence and trust in the default position against the directly caused gain to the well-being of those who benefit from the action. This would be a fundamentally consequentialist view of the world, albeit grounded in the strategic benefits to well-being of a universally trusted default position. Alternatively, the value of the default position might be considered absolute, axiomatically held: that it is always wrong, say, to sacrifice an innocent and unwilling person to save any number of lives, or even to use a human being as a means to achieve a goal based on some wider conception of general well-being.

Insofar as these differences are axiomatically held, no resolution can be achieved. It might, on the other hand, be the case that the differences are the result of faulty reasoning or evidence. Either way, consideration through the lens of this default-based ethical framework might help clarify the reason for differences of view, belief and action, and even change those views, beliefs or actions.

One task now is to apply this ethical lens, not least to some of the great moral and touchstone issues which divide opinion, with a view to at least making progress in resolving differences of view, belief and action.

Why is there Something rather than Nothing?

It shouldn’t be possible for us to exist. But we do. That’s the sort of puzzle I like exploring. So I will. Let’s start with the so-called ‘Cosmological Constant’. This is an extra term added by Einstein to his equations of general relativity to describe a non-expanding universe: it is needed to explain why a static universe doesn’t collapse in upon itself through the action of gravity. It’s true that the force of gravity is infinitesimally small compared to the electromagnetic force, but it has a lot more influence on the universe because all the positive and negative electrical charges in the universe somehow seem to balance out. Indeed, if there were just a 0.00001 per cent difference in the net positive and negative electrical charges within a body, it would be torn apart and cease to exist. The cosmological constant, therefore, is added to the laws of physics simply to balance the force of gravity contributed by all of the matter in the universe. What it represents is a sort of unobserved ‘energy’ in the vacuum of space, possessing density and pressure, which prevents a static universe from collapsing in upon itself.

But we now know from observation that galaxies are actually moving away from us and that the universe is expanding. In fact, observations of very distant supernovae in 1998, including those from the Hubble Space Telescope, showed that the Universe is expanding more quickly now than in the past. So the expansion of the Universe has not been slowing due to gravity, but has been accelerating. We also know how much unobserved energy there is, because we know how it affects the Universe’s expansion. But how much should there be? We can calculate this using quantum mechanics. The easiest way to picture this is to visualize ‘empty space’ as containing ‘virtual’ particles that continually form and then disappear. This ‘empty space’, it turns out, should ‘weigh’ about 10 to the power of 93 grams per cubic centimetre.

Yet the actual figure differs from that predicted by a factor of 10 to the power of 120. The ‘vacuum energy density’ as predicted is simply 10 to the power of 120 times too big. That’s a 1 with 120 zeros after it. So there is something cancelling out all this ‘dark’ energy, making it 10 to the power of 120 smaller in practice than it should be in theory. Now this is very fortuitous. If the cancellation figure were one power of ten different, 10 to the power of 119, then galaxies could not form, as matter would not be able to condense, so no stars, no planets, no life. So we are faced with the mind-boggling fact that the positive and negative contributions to the cosmological constant cancel to 120-digit accuracy, yet fail to cancel beginning at the 121st digit. In fact, the cosmological constant must be zero to within one part in roughly 10 to the power of 120 (and yet be nonzero), or else the universe either would have dispersed too fast for stars and galaxies to have formed, or else would have collapsed upon itself long ago. How likely is this by chance? Essentially, it is the equivalent of tossing a coin and needing to get heads 400 times in a row and achieving it. Go on. Do you feel lucky? Now, that’s just one constant that needs to be just right for galaxies and stars and planets and life to exist. There are quite a few others, independent of this, which have to be equally just right, but this I think sets the stage. I’ve heard this called the fine-tuning argument.
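The coin-toss comparison is a matter of simple arithmetic, which can be checked in a few lines of Python (my own sanity check): one chance in 10 to the power of 120 corresponds to roughly 400 consecutive heads of a fair coin.

```python
import math

# One part in 10^120 expressed as n consecutive heads of a fair coin:
# (1/2)**n = 10**-120  =>  n = 120 / log10(2)
n = 120 / math.log10(2)
print(round(n, 1))  # 398.6, i.e. roughly 400 tosses
```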

In summary, then, if the conditions in the Big Bang which started our Universe had been even the tiniest fraction different, our Universe would not have been able to exist, let alone allow living beings to exist within it. So why are they so right?

Let us first tackle those (I’ve met a few) who say that if they hadn’t been right we would not have been able even to ask the question. This sounds like a clever point, but in fact it is not. For example, it would be absolutely bewildering how I could have survived a fall from an aeroplane at 39,000 feet onto tarmac without a parachute, but it would still be a question very much in need of an answer. To say that I couldn’t have posed the question if I hadn’t survived the fall is no answer at all.

Others propose the argument that, since there must be some initial conditions, the conditions which made the Universe and life within it possible were just as likely to prevail as any others, so there is no puzzle to be explained.

But this is like saying that there are two people, Jack and Jill, who are arguing over whether Jill can control whether a fair coin lands heads or tails. Jack challenges Jill to toss the coin 400 times. He says he will be convinced of Jill’s amazing skill if she can toss heads followed by tails 200 times in a row, and she proceeds to do so. Jack could now argue that a head was equally likely as a tail on every single toss of the coin, so this sequence of heads and tails was, in retrospect, just as likely as any other outcome. But clearly that would be a very poor explanation of the pattern that just occurred. That particular pattern was clearly not produced by coincidence. Yet it is the same argument as saying that initial conditions just right to produce the Universe and life were just as likely as any of the billions of other patterns of initial conditions that would not have done so. There may be a reason for the pattern that was produced, but it needs a more profound explanation than proposing that it was just coincidence.

A second example. Imagine a lottery draw devised by an alien civilisation. The lottery balls, numbered from 1 to 49, are to be drawn, and the only way that we will escape destruction, we are told, is if the 49 balls emerge from the drum in the exact sequence 1 to 49. The numbers duly come out in that exact sequence. Now that outcome is no less likely than any other particular sequence, so if it came out that way a sceptic could claim that we were just lucky. That would clearly be nonsensical. A much more reasonable and sensible conclusion, of course, is that the aliens had rigged the draw to allow us to survive!
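As a rough check on this example (again my own calculation, not from the original post), the chance of all 49 balls emerging in one exact order is 1 in 49 factorial:

```python
import math

# One specific ordering out of 49! equally likely permutations of the balls.
orderings = math.factorial(49)
print(f"{orderings:.2e}")      # about 6.08e+62 possible orderings
print(f"{1 / orderings:.2e}")  # probability of the exact sequence 1..49
```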

So the fact that the initial conditions are so fine-tuned deserves an explanation, and a very good one at that. It cannot be simply dismissed as a coincidence or a non-question.

An explanation that has been proposed that does deserve serious scrutiny is that there have been many Big Bangs, with many different initial conditions. Assuming that there were billions upon billions of these, eventually one would produce initial conditions that are right for the Universe to at least have a shot at existing. The creation of complex life is another thing, but at least we have a start.

In this sense, it is like the aliens drawing the balls over and over again, countless times, until the numbers come out in the sequence 1 to 49.

On this basis, a viable Universe could arise out of re-generating the initial conditions at the Big Bang until one of the lottery numbers eventually comes up. Is this a simpler explanation of why our Universe and life exist than an explanation based on a primal cause? And does simplicity matter as a criterion of truth in any case? In the realm of scientific enquiry, it usually does: a simpler explanation of known facts is usually accepted as superior to a more complex one.

Of course, the simplest state of affairs would be a situation in which nothing had ever existed. This would also be the least arbitrary, and certainly the easiest to understand. Indeed, if nothing had ever existed, there would have been nothing to be explained. Most critically, it would solve the mystery of how things could exist without their existence having some cause. In particular, while it is not possible to propose a causal explanation of why the whole Universe or Universes exists, if nothing had ever existed, that state of affairs would not have needed to be caused. This is not helpful to us, though, as we know that in fact at least one Universe does exist.

Take the opposite extreme, where every possible Universe exists, underpinned by every possible set of initial conditions. In such a state of affairs, most of these might be subject to different fundamental laws, governed by different equations, composed of different elemental matter. There is no reason in principle, on this version of reality, to believe that each different type of Universe should not exist over and over again, up to an infinite number of times, so even our own type of Universe could exist billions of billions of times, or more, so that in the limit everything that could happen has happened and will happen, over and over again. This may be a true depiction of reality, but it, or anything remotely like it, seems a very unconvincing one. In any case, our sole source of understanding about the make-up of a Universe is a study of our own Universe. On what basis, therefore, can we scientifically propose that the other speculative Universes are governed by totally different equations and fundamental physical laws? They may be, but that is a heroic assumption. Perhaps the laws are the same, and it is only the constants that determine the relative masses of the elementary particles, the relative strength of the physical forces, and many other fundamentals which differ, but not the laws themselves. If so, what is the law governing how these constants vary from Universe to Universe, and where do these fundamental laws come from? From nothing? It has been argued that absolutely no evidence exists that any Universe exists but our own, and that the reason these unseen Universes are proposed is simply to explain the otherwise baffling problem of how our Universe and life within it can exist. That may be so, but we can park that for now, as it is still possible that they do exist.

So let’s move on to admitting the possibility that there are a lot of Universes, but not every conceivable Universe. One version of this is that the other Universes have the same fundamental laws, subject to the same fundamental equations, and composed of the same elemental matter as ours, but differ in the initial conditions and the constants. But this leaves us with the question as to why there should be only just so many Universes, and no more. A hundred, a thousand, a hundred thousand – whatever number we choose requires an explanation of why just that number. This is again very puzzling. If we didn’t know better, our best ‘a priori’ guess would be that there are no Universes, no life. We happen to know that’s wrong, so that leaves our Universe alone; or else a limitless number of Universes, where anything that could happen has happened or will happen, over and over again; or else a limited number of Universes, which begs the question: why just that number? Is it because certain special features have to obtain in the initial conditions before a Universe can be born, and these are limited in number? Let us assume this is so. This only begs the question of why these limited features cannot occur more than a limited number of times. If they could, there is no reason to believe the number of Universes containing these special features would be less than limitless. So, on this view, our Universe exists because it contains the special features which allow a Universe to exist. But if so, we are back with the problem arising in the conception of all possible worlds, except in this case it is only our own type of Universe (i.e. one obeying the equations and laws that underpin this Universe) that could exist billions of billions of times, or more. Again, this may be a true depiction of reality, but it seems a very unconvincing one.

The alternative is to adopt the assumption that there is some limiting parameter to the whole process of creating Universes, along the lines of some version of string theory which claims that there is a limit of 10 to the power of 500 solutions (admittedly a dizzyingly big number) to the equations that make up the so-called ‘landscape’ of reality. That sort of limiting assumption would seem to offer at least a lifeline to allow us to cling onto some semblance of common sense.

Let us summarize very quickly. If we didn’t know better, our best guess, the simplest description of all possible realities, is that nothing exists. But we do know better, because we are alive and conscious, and considering the question. But our Universe is far, far, far too fine-tuned, by a factor of billions of billions, to exist by chance if it is the only Universe. So there must be more, if our Universe is caused by the roll of the die, a lot more. But how many more? If there is some mechanism for generating experimental Universe upon Universe, why should there be a limit to this process, and if there is not, that means that there will be limitless Universes, including limitless identical Universes, in which in principle everything possible has happened, and will happen, over and over again.

Even if we accept there is some limiter, we have to ask what causes this limiter to exist, and even if we don’t accept there is a limiter, we still need to ask what governs the equations representing the initial conditions to be as they are, to create one Universe or many. What puts life into the equations and makes a Universe or Universes at all? And why should the mechanism generating life into these equations have infused them with the physical laws that allow the production of any Universes at all?

Some quantum theorists speculate that we can create a Universe or Universes out of nothing – that a particle and an anti-particle, for example, could in theory spontaneously be generated out of what is described as a ‘quantum vacuum’. According to this theoretical conjecture, the Universe ‘tunnelled’ into existence out of nothing.

This would be a helpful handle for proposing some rational explanation of the origin of the Universe and of space-time if a ‘quantum vacuum’ were in fact nothingness. But that’s the problem with this theoretical foray into the quantum world. A quantum vacuum is not empty, or nothing, in any real sense at all. It has a complex mathematical structure, and it is saturated with energy fields and virtual-particle activity. In other words, it is a thing, with structure and things happening in it. As such, the equations that would form the quantum basis for generating particles, anti-particles, fluctuations, a Universe or Universes, actually exist and possess structure. They are not nothingness, not a void.

To be more specific, according to relativistic quantum field theories, particles can be understood as specific arrangements of quantum fields. So one particular arrangement could correspond to there being 28 particles, another to 240, another to no particles at all, and another to an infinite number. The arrangement which corresponds to no particles is known as a ‘vacuum’ state. But these relativistic quantum field theoretical vacuum states are indeed particular arrangements of elementary physical stuff, no less so than our planet or solar system. The only case in which there would be no physical stuff would be if the quantum fields ceased to exist. But that’s the thing. They do exist. There is no something from nothing. And this something, and the equations which infuse it, has somehow had the shape and form to give rise to protons, neutrons, planets, galaxies and us.

So the question is what gives life to this structure, because without that structure, no amount of ‘quantum fiddling’ can create anything. No amount of something can be produced out of nothing. Yes, even empty space is something with structure and potential. More basically, why should such a thing as a ‘quantum vacuum’ even have existed, let alone be infused with the potential to create a Universe or Universes and conscious life out of non-conscious somethingness? Whatever the reason, whatever existed before the Big Bang was something real, albeit immaterial in the conventional sense of the word. Put another way, we need to ask as a start where the laws of quantum mechanics come from, or why the particular kinds of fields that exist do so, or why there should be any fields at all? These fields, these laws did exist and do exist, but from whence were they breathed?

So there was something before the Big Bang, or even Big Bangs, something which possessed structure, something which gave life to the equations which allowed a Universe to exist, and all life within it. And this something has produced a very fine tune, which has produced human consciousness, and the ability to ask the ultimate question. To ask the question ‘Why?’ To ask who or what produced a Universe in which we can pose this ultimate question, and to what end, if any? In other words, who or what composed this very fine tune? And why?

It’s a big question, I realise that, but it’s a very important question.

And it’s a question to which we should all be very interested in finding the answer.

Is there a solution to the St. Petersburg paradox?

It was a puzzle first posed by the gifted Swiss mathematician Nicolas Bernoulli, in a letter to Pierre Raymond de Montmort on September 9, 1713, and later published in the Commentaries of the Imperial Academy of Science of St. Petersburg. Mercifully, it is simple to state. Less mercifully, it is a nightmare to solve. To state the paradox, imagine tossing a coin until it lands heads-up, and suppose that the payoff grows exponentially according to the number of tosses you make. If the coin lands heads-up on the first toss, the payoff is £2; if it lands heads-up on the second toss, the payoff is £4; if it takes three tosses, the payoff is £8; and so forth, ad infinitum. Now the probability of the game ending on the first toss is 1/2; of it ending on the second toss, (1/2)^2 = 1/4; on the third, (1/2)^3 = 1/8, and so on. So your expected win from playing the game is (1/2 × £2) + (1/4 × £4) + (1/8 × £8) + …, i.e. £1 + £1 + £1 + … = infinity. It follows that you should be willing to pay any finite amount for the privilege of playing this game. Yet it seems irrational to pay very much at all. So what is the solution? There have been very many attempts at a solution over the years, some more satisfying than others, but none totally so. For the best early attempt, I think we should go back to 1923 and the classic explanation offered by R.E. Moritz, writing in the American Mathematical Monthly: “The mathematical expectation of one chance out of a thousand to secure a billion dollars is a million dollars, but this does not mean that anyone in his senses would pay a million dollars for a single chance of winning a billion dollars”. Of the more recent attempts, I like best that offered by Benjamin Hayden and Michael Platt, in 2009:
“Subjects … evaluate [the gamble] … as if they were taking the median rather than the mean of the payoff distribution … [so] this classic paradox has a straightforward explanation rooted in the use of a statistical heuristic.” Surprisingly, though, there is still no real consensus on the solution to this puzzle of more than three hundred years’ vintage. If and when we do finally solve it, we will have made a giant step toward establishing a more complete and precise understanding of the meaning of rationality and the workings of the human economic mind. Care to try?

References

Moritz, R. E. (1923). Some curious fallacies in the study of probability. The American Mathematical Monthly, 30, 58–65.

Hayden, B. and Platt, M. (2009). The mean, the median, and the St. Petersburg Paradox. Judgment and Decision Making, 4(4), 256–272.
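Hayden and Platt’s median-versus-mean point is easy to check with a quick simulation (a sketch of the standard game in Python; the figures it prints are illustrative, not taken from their paper):

```python
import random
import statistics

def st_petersburg(rng):
    """Play one round: toss a fair coin until heads; payoff doubles each toss."""
    tosses = 1
    while rng.random() < 0.5:  # tails, keep tossing
        tosses += 1
    return 2 ** tosses

rng = random.Random(0)
payoffs = [st_petersburg(rng) for _ in range(100_000)]

# The sample mean is finite but unstable, and grows without bound as the
# sample grows; the median settles at a very modest figure.
print("mean:  ", statistics.mean(payoffs))
print("median:", statistics.median(payoffs))
```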

Jeb Bush is running for President. Will he win?

Now that Jeb Bush has officially announced he is seeking the nomination of the Republican party as its candidate for president of the United States, it seems a good time to ask how likely it is that he will actually become the 45th US President. How can we best answer this? A big clue can be found in a famous study of the history of political betting markets in the US, which shows that of the US presidential elections between 1868 and 1940, in only one year, 1916, did the candidate favoured in the betting end up losing, when Woodrow Wilson came from behind to upset Republican Charles E. Hughes in a very close contest. Even then, they were tied in the betting by the close of polling. The power of the betting markets to assimilate the collective knowledge and wisdom of those willing to back their judgement with money has only increased in recent years as the volume of money wagered has risen dramatically, the betting exchanges alone seeing tens of millions of pounds traded on a single election. In 2004, a leading betting exchange actually hit the jackpot when its market favourite won every single state in that year’s election. The power of the markets has been repeated in every presidential election since. For example, in 2008, the polls had shown both John McCain and Barack Obama leading at different times during the campaign, while the betting markets always had Obama as firm favourite. Indeed, on polling day he was as short as 20 to 1 on to win with the betting exchanges, while some polling still had the race well within the margin of error. In the event, Obama won by a clear 7.2%, and by 365 Electoral College Votes to 173. In 2012, Barack Obama led Mitt Romney by just 0.7% in the national polling average on election day, with major pollsters Gallup and Rasmussen showing Romney ahead. British bookmakers were quoting the president at 5 to 1 on (stake £5 to win £1).
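For readers unused to bookmakers’ jargon, odds quoted ‘to 1 on’ mean the stake exceeds the winnings, and they convert to implied probabilities as in this sketch (the function name is my own, not a standard API):

```python
def implied_probability(num, den, odds_on=False):
    """Convert fractional odds to an implied probability.

    '5 to 1 on'      -> stake 5 to win 1 -> probability 5/6
    '5 to 1 against' -> stake 1 to win 5 -> probability 1/6
    """
    if odds_on:
        return num / (num + den)
    return den / (num + den)

print(implied_probability(5, 1, odds_on=True))   # 5/6, about 0.833
print(implied_probability(20, 1, odds_on=True))  # 20/21, about 0.952
```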
Indeed, Forbes reflected the view of most informed observers, declaring that: “With one day to go before the election, we’re becoming super-saturated with poll data predicting a squeaker in the race for president. Meanwhile, bookmakers and gamblers are increasingly certain Obama will hang on to the White House.” He went on to win by 3.9% and by 332 Electoral College Votes to 206. What is happening here is that the market is tapping into the collective wisdom of myriad minds who feed in the best information and analysis they can, because their own financial rewards depend directly upon this. As such, it is a case of “follow the money”, because those who know the most, and are best able to process the available information, tend to bet the most. Moreover, the lower the transaction costs (in the UK the betting public do not pay tax on their bets) and information costs (never in more plentiful supply, thanks to the Internet), the more efficient we might expect betting markets to become in translating information today into forecasts of tomorrow. For these reasons, modern betting markets are likely to provide better forecasts than ever before. In this sense, the market is like one mind that combines the collective wisdom of everybody. So what does this brave new world of prediction markets tell us about the likely Republican nominee in 2016? Last time, they were telling us all along that it would be Mitt Romney. This time the high-tech crystal ball offered by the markets is seeing not one face, but three – and yes, Jeb Bush is one of them. But two other faces loom large alongside the latest incarnation of the Bush dynasty. One is the Florida senator, Marco Rubio, and the other is the governor of Wisconsin, Scott Walker. According to the current odds, at least, it is very likely that one of these men will be the Republican nominee. According to the betting, Bush will struggle to win the influential Iowa caucus, which marks the start of the presidential election season.
The arch-conservative voters there are expected to go for the man from Wisconsin. New Hampshire, the first primary proper, is likely to be closer. Essentially, though, this will be a contest between the deep pockets and connections of the Bush machine, the deep appeal of Scott Walker to the “severely conservative” (a phrase famously coined by Mitt Romney), and the appeal of Marco Rubio to those looking for solid conservative credentials matched with relative youth and charisma. By the time the race is run, the betting markets currently indicate, Bush is the name most likely (though by no means certain) to emerge. Rubio is likely to push him hardest – and it could be close. At current odds, though, Bush does have the best chance of all the candidates in the field of denying the Democrats, and presumably Hillary Clinton, the White House. But whoever is nominated by the Republican Party, it is the Democrats who are still firm favourites to retain the keys to Washington DC’s most prestigious address.

Note

A version of this blog, with links to sources, first appeared in The Conversation UK on June 15, 2015.

Why did the Tories win and what are the lessons for Labour?

Twitter: @leightonvw

Why did the Conservatives win an overall (albeit narrow) majority in the 2015 UK Election, and almost a hundred more seats than Labour? Numerous hypotheses have been put forward, often centred around the ideas of leadership, economic competence and attracting the ‘aspirational’ voters of ‘middle England.’ If this analysis is correct, it tells Labour something very important about the ground their next leader will need to fight on, and indeed who that leader should be. But is this the whole picture?

This is where the opinion polls can in fact tell us something important. There has certainly been much discussion since the election results were declared about weaknesses in their survey design, but in itself that does not seem sufficient to explain the huge disparity in what actually happened at the polling stations (Tories ahead of Labour by 6.5%) and what happened in the polls (essentially tied). I argue here that a big part of the reason for this disparity is what I term the ‘lethargic Labour’ effect, i.e. the differential tendency of Labour supporters to stay at home compared to Tory supporters. ‘Lethargic’ is a term I choose carefully for its association with apathy and general passivity, and it is a factor which I believe has huge implications for political strategy in the years ahead.

To understand this, it is instructive to look to the exit poll, which was conducted at polling stations with people who had actually voted. This was much more accurate than the other polls, including those conducted during Election Day over the telephone or online, and showed a much lower Labour share of the vote. A dominant explanation for this disparity is that there was a significant difference between the number of those who declared they had voted Labour, or that they would vote Labour, and the number who actually did vote Labour.
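A purely illustrative calculation (the turnout figures here are invented, chosen only to show the mechanism) demonstrates how a modest turnout differential can turn a tied declared vote into a clear lead at the ballot box:

```python
# Hypothetical: declared support is tied, but Tory supporters turn out
# at 92% and Labour supporters at 80%.
declared_tory = declared_labour = 0.5
turnout_tory, turnout_labour = 0.92, 0.80

votes_tory = declared_tory * turnout_tory
votes_labour = declared_labour * turnout_labour
tory_share = votes_tory / (votes_tory + votes_labour)

lead = 2 * tory_share - 1
print(f"Tory lead among actual voters: {lead:.1%}")  # about a 7-point lead
```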

This ‘lethargic Labour’ effect is quite different to the so-called ‘shy Tory’ effect which was advanced as part of the explanation for the polling meltdown of 1992, when the Conservatives in that year’s General Election were similarly under-estimated in the opinion polls. This ‘shy Tory’ effect is the idea that Tories in particular were shy of revealing their voting intention to pollsters. Yet in 2015 we would expect, if this were a real effect, to have seen it displayed in under-performance by the Tories in telephone polls compared to the relatively more anonymous setting of online polls. There is no such evidence; if anything, the reverse was the case for much of the polling cycle.

I am not proposing that the idea of ‘lethargic Labour’ supporters offers the whole explanation for the Tory victory. There is also a historically well-established late swing to incumbents, which cannot be captured by the raw polls but is sometimes built into poll-based forecasting models, and which can account for some of the differential. There is additionally late tactical switching to consider, where an elector, when face to face with an actual ballot paper, casts a vote to hinder a least preferred candidate.

Interestingly, the betting markets significantly out-performed the polls and also sophisticated modelling based on those polls which allowed for late swing, but they beat the latter somewhat less comprehensively, at least at constituency level. At national aggregated level, the betting markets beat both very convincingly, though the swing-adjusted polls performed rather better than the published polls.

So what does this tell us? It suggests that there was indeed a late swing to the Tories, as well as probably a late tactical swing, both of which were picked up in the betting markets in advance of the actual poll. But the scale of the victory (at least compared to general expectations) was not fully anticipated by any established forecasting methodology. This suggests that there was an extra variable, which was not properly factored in by any forecasting methodology. This extra variable, I suggest, is the ‘lethargic Labour’ supporters, who existed in far greater numbers than was generally supposed.

To the extent that this explanation of the Tory majority prevails, it has profound implications for the strategy of the Labour Party over the next few years in seeking to win office.

It tells us that if Labour are to win the next election, a strategy will have to be devised which motivates their own supporters to actually turn out and vote. In other words, a strategy must be devised which converts these ‘lapsed Labour’ voters, as I term them, into active Labour voters, and which inspires the faithful to get out of their armchairs and into the polling pews. If Labour cannot construct an effective strategy to do that, it doesn’t really matter how effective their leader is, how economically competent they are seen to be, or how well they appeal to the ‘aspirational’ voter. It is very unlikely that Labour will be able to win.

In summary, the Labour Party will need to motivate their more ‘lethargic’ supporters to actually show that support in the ballot box, will need to convert their supporters from being ‘lapsed’ voters into actual voters. If they can do that, the result of the next election is wide open.

Why were the UK election polls so spectacularly wrong?

If the opinion polls had proved accurate, we would have been woken up on the morning of May 8, 2015, to a House of Commons in which the Labour Party had quite a few more seats than the Conservatives, and by the end of the day the country would have had a new Prime Minister called Ed Miliband. This didn’t happen. Instead the Conservative Party was returned with almost 100 more seats than Labour and a narrow majority in the Commons. So what went wrong? Why did the polls get it so wrong?

This is not a new question. The polls were woefully inaccurate in the 1992 General Election, predicting a Labour victory, only for John Major’s Conservatives to win by a clear seven percentage points. While they had performed a bit better since, history repeated itself this year.

So what is the problem and can it be fixed? A big issue, I believe, is the methodology used. Pollsters simply do not make any effort to duplicate the real polling experience. Even as Election Day approaches, they very rarely identify to those whom they survey who the candidates are, instead simply prompting with party labels. This tends to miss a lot of late tactical vote switching. Moreover, the filter they use to determine who will actually vote, as opposed to merely saying they will vote, is clearly faulty, which can be seen if we compare the actual voter turnout figures with those projected in the polling numbers. Almost invariably, the polls over-estimate how many of those who say they will vote actually do so. Finally, the raw polls make no allowance for what we can learn from past experience about what happens when people actually make the cross on the ballot paper, compared to their stated intention. We know that there tends to be a late swing to the incumbents in the privacy of the polling booth. For this reason, it is wise to adjust the raw polls for this late swing.

Of all these factors, which was the main cause of the polling meltdown? For the answer, I think we need only look to the exit poll, which was conducted at polling stations with people who had actually voted. This exit poll, as in 2010, was quite accurate, while similar exit-style polls conducted during polling day over the telephone or online with those who declared they had voted or were going to vote failed pretty much as spectacularly as the other final polls. The explanation for this difference can, I believe, be traced to the significant difference between the number of those who declare they have voted, or that they will vote, and the number who actually do vote. If this difference works particularly to the detriment of one party compared to another, then that party will under-perform in the actual vote tally relative to the voting intentions declared on the telephone or online. In this case, it seems a very reasonable hypothesis that rather more of those who declared they were voting Labour failed to actually turn up at the polling station than was the case with declared Conservatives. Add to that late tactical switching and the well-established late swing in the polling booth to incumbents, and we have, I believe, a large part of the answer.

Interestingly, those who invested their own money in forecasting the outcome performed a lot better in predicting what would happen than did the pollsters. The betting markets had the Conservatives well ahead in the number of seats they would win right through the campaign and were unmoved in this belief throughout. Polls went up, polls went down, but the betting markets had made their mind up. The Tories, they were convinced, were going to win significantly more seats than Labour.

I have interrogated huge data sets of polls and betting markets over many, many elections stretching back years and this is part of a well-established pattern. Basically, when the polls tell you one thing, and the betting markets tell you another, follow the money. Even if the markets do not get it spot on every time, they will usually get it a lot closer than the polls.

So what can we learn going forward? If we want to predict the outcome of the next election, the first thing we need to do is to accept the weaknesses in the current methodologies of the pollsters, and seek to correct them, even if it proves a bit more costly. With a limited budget, it is better to produce fewer polls of higher quality than a plethora of polls of lower quality. Then adjust for known biases. Or else, just look at what the betting is saying. It’s been getting it right since 1868, before polls were even invented, and continues to do a pretty good job.


