
Beware the Ides of March! US Election Special

The Ides of March, or March 15, has long been associated with doom and destruction. In 44 BC, a confident, populist Julius Caesar ignored a soothsayer’s warning and met his demise at the height of his adulation by an adoring public. It was also the day in 1917 that Czar Nicholas II formally abdicated his throne, and the day in 1939 that Germany occupied Czechoslovakia. And now it’s the turn of the Republican Party.

This year’s Ides of March could prove pivotal for the US presidential race, as the primaries roll into five big states: Florida, Ohio, Illinois, North Carolina and Missouri. With firebrand insurgent Donald Trump still defying all the Republicans’ attempts to stop him, the day’s massive delegate haul threatens to put him firmly on the path to the nomination.

Much will depend on what happens in Florida and Ohio, the home states of Senator Marco Rubio and Governor John Kasich respectively. Kasich has pledged to withdraw from the contest if he loses Ohio, while Rubio has himself said that whoever wins Florida will be the Republican nominee. If Rubio falls short there, he will be under enormous pressure to bow out.

This confronts Trump’s conservative rival, Ted Cruz, with a fiendish dilemma. He’s won a fair number of states, but to have a decent chance at winning the nomination, Cruz needs Kasich and especially Rubio to drop out. So Cruz wants them to do poorly. But if either or both lose their home state, it’s Trump, not Cruz, who’s most likely to grab their delegates – a hefty 99 in Florida and a chunky 66 in Ohio, all allocated on a winner-take-all basis.

On the other hand, if Rubio somehow rallies to win Florida, he’s very likely to stay in, as is Kasich if he wins Ohio. This puts Cruz and other anti-Trump forces in the awkward position of needing Rubio and Kasich both to trump Trump and to fall short.

The best outcome Cruz can hope for is for Rubio and Kasich to do just enough to win Florida and Ohio respectively, thereby denying Trump the winner-take-all delegates, but to perform so badly elsewhere that they drop out anyway. Not impossible, but unlikely.

So where does that leave us?

Splitting the difference

Trump just needs to seize Ohio and Florida to put himself within touching distance of the prize, but that’s a big task, especially in Ohio. Illinois and Missouri offer a combined total of 121 delegates. North Carolina’s 72 delegates are in play as well, but those are allocated on a proportional basis, so grabbing the gold isn’t quite as important there.

So if Trump picks up Florida and Ohio, and does well in Illinois and/or Missouri, the fight for the Republican nomination could be all but over by Wednesday morning. But that outcome is far from pre-ordained.

Let’s say Trump loses either Ohio or (less likely) Florida, but not both. That puts his chance of clinching a majority of delegates before the convention in jeopardy, with Illinois and/or Missouri perhaps tipping the scale. But if he loses both Ohio and Florida, he’s extremely unlikely to win a majority of the delegates before the convention in July.

If that’s the case, anything could happen. If it is ultimately not possible to construct a winning coalition of delegates around any of the current four horsemen of the Republican Party’s political apocalypse, the party could even turn outwards, to anoint a different saviour. This would presumably be someone undamaged by the internecine warfare that would have brought the party to that impasse. That would now seem to rule out Mitt Romney, given his recent full-on personal attacks upon Donald Trump. Instead they are more likely to look to a unifier, though they would need to change the convention rules to do so.

They have called upon someone fresh in dire straits before. In late 2015, the party could at first find nobody to replace John Boehner when he suddenly stood down as Speaker of the House of Representatives. Then they found someone who initially said he wasn’t interested, but later relented: Paul Ryan, Mitt Romney’s running mate in 2012.

Is this a likely outcome? Not at all. While chatter around a possible Ryan candidacy suddenly spiked as March 15 loomed, a fundraising group formed to “draft” him recently shut down after his aides disavowed its work.

It’s far more likely that Trump will emerge as the Republican nominee, followed by Cruz, then Kasich and Rubio. But if no one can garner a majority of delegates to win the first ballot at the convention, any number of scenarios could play out.

As the betting markets currently see things, by far the most electable against a Democratic opponent in the general election are John Kasich and Marco Rubio. Of these two, Kasich is rated by the markets as much more likely to win the nomination. If he scrapes a win in the Ohio primary and finally starts winning delegates, might he somehow emerge from the pack at a contested convention, perhaps with Rubio or even Cruz in tow as his running mate? We shall see.

Super Tuesday is today. Will it be the day that Trump effectively clinches the Republican nomination?

In the wake of Donald Trump’s blowout victory amid the bright lights of the Las Vegas strip, the money has been piling on the billionaire businessman from New York to sweep all aside on the way to a coronation at the Republican convention. But just how smart is this money?
After all, Trump was also favourite to win the Iowa caucuses, not only in the betting markets but also in the polls and the pundits’ conventional wisdom. In the event, he lost Iowa to Ted Cruz, the arch-conservative senator from Texas.
Trump also only narrowly bested Senator Marco Rubio of Florida, who was suddenly being talked about as the main contender for the nomination – heralding a volatile period that sent the markets haywire.
It seems absurd that Rubio could vault past Trump after placing behind him, but that’s down to a little thing called the expectations game. If you can deflate expectations and come third, that’s somehow seen as better than attracting high expectations and coming second. It may not do much for you in the Olympics, but it matters a lot in politics.
It worked for Rubio until a fateful pre-New Hampshire debate encounter with New Jersey Governor Chris Christie. In a calamity that’s been compared to a scene from The Stepford Wives, in which a suburban woman suffers a circuitry meltdown that reveals her to be a robot, Rubio haplessly began repeating the same scripted attack on Barack Obama over and over again.
He promptly plummeted in the betting markets’ estimation, and eventually finished a poor fifth in New Hampshire. In his concession speech, he admitted that he had malfunctioned during the debate, but promised that systems were now restored and that no further meltdown would occur.
He seems to have been true to his word, but the drubbing Trump dealt him in the subsequent South Carolina primary suggested that the damage had already been done. Even so, Rubio’s stock had shown remarkable resilience in the betting, vying pre-Nevada with Trump for the shortest odds in the nomination market.
Meanwhile, Cruz – who won Iowa against the odds – has lengthened to odds usually indicative of someone with no chance at all. Why so? After all, he only narrowly lost the runner-up slot in South Carolina to Rubio, and performed similarly in Nevada. He also has plenty of money on hand. Part of the reason is that he’s losing against Trump among his own base of arch-conservative religious evangelicals – and was unable to beat Rubio even in South Carolina, whose demographics should have made it prime Cruz territory.
The Cruz brand has also earned something of a reputation for dirty tricks. On the night of the Iowa caucuses his team spread rumours that rival candidate Ben Carson was bowing out of the race. Before South Carolina, the campaign clumsily photoshopped Rubio’s head onto someone else shaking hands with Obama at the White House. Most recently, it released a video in which subtitles over Rubio’s slightly garbled speech suggested he was mocking the Bible, when in fact he did the exact opposite.
So where can Cruz go? Home to Texas, where he’ll be hoping for a very strong showing when the state holds its primary on March 1. He’ll need to win big to change the betting markets’ mind: they put his odds of clinching the keys to the White House at as long as 150-to-1.
With Nevada out of the way and the Trump campaign in full swing, the markets see the future pretty clearly.
Trump is now trading at short odds-on (2-to-7) to win the Republican nomination, while Rubio is currently available at a best price of about 5-to-1 against. That makes Trump better than a 3-in-4 favourite to become the Republican standard-bearer for the general election, with Rubio’s chances about 1-in-6 as we head into the latest phase of the campaign.
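
For readers unfamiliar with fractional odds, here is a minimal sketch of the arithmetic that turns the prices quoted above into those percentages (the helper function is my own illustration, not any bookmaker’s tool):

```python
# Implied probability of fractional odds "n-to-d": stake d to win n,
# so p = d / (n + d). Prices are those quoted in the text above.

def implied_probability(n: float, d: float) -> float:
    return d / (n + d)

print(f"Trump, 2-to-7 odds-on: {implied_probability(2, 7):.0%}")  # ~78%, better than 3-in-4
print(f"Rubio, 5-to-1 against: {implied_probability(5, 1):.0%}")  # ~17%, about 1-in-6
```

One caveat: a full bookmaker’s book typically sums to more than 100 per cent (the overround), so implied probabilities read straight from prices slightly overstate each candidate’s true chance.
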
The odds have been shuffled on the Democratic side too. The “Bern” that Bernie Sanders was feeling after routing Hillary Clinton in New Hampshire was reduced to something of a fizzle in Nevada, where she beat him by a comfortable five points. Nevada’s caucuses resolve ties by means of a card draw. At one deadlocked caucus in the town of Pahrump, an ace was drawn for Clinton against a six for Sanders, an apt representation of the night he had.
Sanders now faces Clinton in South Carolina, where she’s the heavy favourite. She’s also now a best-priced 1-to-20 to secure the nomination, as a slew of southern and western states expected to fall to her line up to vote in the next three weeks. She is currently the odds-on favourite (4-to-7) to win the presidency, which gives her about a 63 per cent chance of going all the way. Meanwhile, Trump is trading at about 11-to-4 (a 27% chance) to win the big prize, and Rubio at 16-to-1. Of the rest, only Bernie Sanders and Michael Bloomberg have more chance of winning the presidency, as we head into the afternoon of Super Tuesday, than any of the other Republican candidates.
And so they all march on to Super Tuesday, when an array of states large and small will make their decisions – and potentially scramble the odds once again.

The US election is suddenly getting very interesting!

Heading into the Iowa caucuses, all the main forecasting methodologies ranked Donald Trump and Hillary Clinton as favourites to win the first contests in their respective nomination battles. In the end, they were half right: while Clinton managed to squeak the narrowest of wins over Bernie Sanders, Trump came second behind Ted Cruz of Texas, with Florida’s Marco Rubio nipping at his heels.

The results were written up with varying degrees of surprise and shock. So who did the best job of predicting them?

Of the opinion polls, the Des Moines Register/Bloomberg Politics survey is generally regarded as the gold standard in terms of the Iowa state caucuses. In its last survey before voting began, it had Trump on the Republican side leading Ted Cruz by 28% to 23%, with Marco Rubio on 15%. For the Democrats, Clinton was leading Sanders by 45% to 42%. This survey proved wide of the mark and, in that regard, it was broadly consistent with other recent polling.

Then there’s the panel-of-experts model. One such group is the Politico Caucus, a panel of strategists, operatives and activists. In its final survey, Republicans were split, but put Donald Trump in pole position, with Cruz second and Rubio third. Democratic insiders were less divided, coming out strongly in favour of a decisive Clinton victory. So, also wrong.

But there’s another way of forecasting that often proves to be much closer to the actual result: the betting and prediction markets. These can be observed in real time through the Oddschecker service (which lists a range of leading bookmaker prices) as well as by observing the prices on the person-to-person betting exchanges.

There are also dedicated “crowd wisdom” prediction markets such as Almanis, “wisdom of crowd” projects such as Predictwise, as well as real-money prediction markets such as PredictIt and the Iowa Electronic Markets.

On the eve of the caucuses, the real-money betting and prediction markets gave Clinton about a two-in-three chance of winning Iowa, and Trump just a little less. Rubio, it seemed, was trailing third by a fair margin. In the event, it was Cruz who surged to victory among the Republicans while Rubio’s third-place showing was unexpectedly strong. The Democratic race was labelled pretty much across the board as too close to call for most of the count, but the betting markets sided consistently with Clinton.

As soon as the actual results are declared, the betting markets adjust to incorporate the new information. So far they’ve already shaken up their thinking about who will become the Republican party’s nominee, but have barely flickered in regard to the Democratic choice.

Odds on

Going into the caucuses, of the main candidates the betting markets gave Trump a 50% chance of winning the Republican nomination, followed by Rubio on 32%, Cruz on 9% and Jeb Bush on 8%. The commensurate predictions for the Democratic nomination were 80% for Clinton and 19% for Sanders.

In terms of winning the White House, Clinton was firm favourite, with a 51% chance of winning the general election, followed by Trump on 19%, Rubio on 15%, Sanders on 6%, Cruz on 4% and Bush on 3%.

As Americans woke up after the count, that map had changed significantly, at least on the Republican side. The new favourite to win the Republican nomination on the betting markets was Rubio, who emerged from the caucuses with a 53% chance of being the eventual nominee, followed by Trump on 26%, Cruz on 14% and Bush on 5%.

Despite the narrowness of the Democratic contest, the betting markets were unfazed, with Clinton clinging to her 80% chance of taking the nomination.

As to the map of probabilities for who will eventually win the White House, the markets still ranked Clinton as the firm favourite for the presidency, just as strong as before Iowa, and Rubio was now her closest challenger: the markets in the immediate aftermath of the Iowa caucuses rated his chances of progressing all the way to victory in November at more than one in five, up from 15%. Trump’s chance slipped to about one in ten, down from 19%.

So the news of the night was that nothing had really changed on the Democratic front, while Rubio had leapfrogged Trump as the most likely challenger to Clinton.

Then came that debate, in which the mask of the new Republican frontrunner seemed to slip, as he started to repeat the same phrase over and over again, like some sci-fi scene in which the human is finally revealed to be a cleverly designed android. Enter the Stepford Wives! Enter the Replicant!

The momentum which had grown from beating expectations, albeit by coming third, was suddenly reversed. With less than a day to polling in the New Hampshire primary, the newly designated ‘Marcobot’ still held on to frontrunner status in the betting markets, though not in the real-money prediction markets.

What happens in the Granite State will now be fascinating. It could even be decisive.

 

US Elections Forecasting Project 2016

Follow: @leightonvw

The 2016 US Elections Forecasting Project, headed by Professor Leighton Vaughan Williams, Director of the Political Forecasting Unit and the Betting Research Unit at Nottingham Business School, will be updated regularly throughout the primary season, up to and including General Election Day, November 8, 2016.

Monday, January 18, 2016 (figures for Saturday, January 16 shown in brackets)

Iowa caucuses, February 1, 2016, Democrats

Clinton v. Sanders only.

Political Forecasting Unit projection:

Clinton 61% (63%)

Sanders 39% (37%)

FiveThirtyEight polls-only forecast (based on state polls):

Chances of winning:

Clinton 65% (66%)

Sanders 35% (34%)

FiveThirtyEight polls-plus forecast (state and national polls, plus endorsements):

Clinton 81% (82%)

Sanders 19% (18%)

PredictIt implied probabilities:

Clinton 54% (52%)

Sanders 46% (48%)

Betfair implied probabilities:

Clinton 55.9% (59.4%)

Sanders 44.1% (40.6%)

Oddschecker implied probabilities:

Clinton 62.5% (62.5%)

Sanders 37.5% (37.5%)

Predictwise forecast:

Clinton 59% (61.5%)

Sanders 41% (38.5%)

New Hampshire Primary, February 9, 2016 – Democrats

Clinton v. Sanders only

Political Forecasting Unit projection:

Clinton 32% (35%)

Sanders 68% (65%)

FiveThirtyEight polls-only forecast (based on state polls):

Chances of winning:

Clinton 29% (28%)

Sanders 71% (72%)

FiveThirtyEight polls-plus forecast (state and national polls, plus endorsements):

Clinton 57% (57%)

Sanders 43% (43%)

PredictIt implied probabilities:

Clinton 29% (27%)

Sanders 71% (73%)

Betfair implied probabilities:

Clinton 36.6% (38.8%)

Sanders 63.4% (61.2%)

Oddschecker implied probabilities:

Clinton 33.5% (34.8%)

Sanders 66.5% (65.2%)

Predictwise forecast:

Clinton 29% (37%)

Sanders 71% (63%)

Iowa Caucuses, February 1, 2016 – Republicans

Cruz, Rubio, Trump only

Political Forecasting Unit projection:

Cruz 58% (57%)

Trump 34% (34%)

Rubio 6% (7%)

FiveThirtyEight polls-only forecast (based on state polls):

Chances of winning:

Cruz 43% (44%)

Trump 42% (42%)

Rubio 8% (8%)

FiveThirtyEight polls-plus forecast (state and national polls, plus endorsements):

Cruz 51% (51%)

Trump 28% (29%)

Rubio 13% (17%)

PredictIt implied probabilities:

Cruz 57% (54%)

Trump 40% (39%)

Rubio 4% (7%)

Betfair implied probabilities:

Cruz 64.5% (61.7%)

Trump 33.8% (32.5%)

Rubio 5.0% (5.8%)

Oddschecker implied probabilities:

Cruz 61.0% (62.3%)

Trump 33.4% (30.2%)

Rubio 5.6% (7.5%)

Predictwise forecast:

Cruz 62% (63%)

Trump 31% (33%)

Rubio 3% (4%)

New Hampshire Primary, February 9, 2016 – Republicans

Trump, Rubio, Cruz, Christie, Kasich, Bush only

Political Forecasting Unit projection:

Trump 57.5% (55.5%)

Rubio 15.4% (16.1%)

Cruz 9.9% (10.1%)

Christie 7.4% (7.2%)

Kasich 5.4% (6.8%)

Bush 4.4% (4.3%)

FiveThirtyEight polls-only forecast (based on state polls):

Chances of winning:

Trump 56% (57%)

Rubio 12% (12%)

Cruz 9% (9%)

Christie 7% (7%)

Kasich 7% (8%)

Bush 5% (5%)

FiveThirtyEight polls-plus forecast (state and national polls, plus endorsements):

Trump 39% (39%)

Rubio 19% (19%)

Cruz 14% (14%)

Christie 7% (7%)

Kasich 10% (10%)

Bush 8% (9%)

PredictIt implied probabilities:

Trump 69% (65%)

Rubio 11% (12%)

Cruz 9% (9%)

Christie 3% (5%)

Kasich 5% (5%)

Bush 3% (4%)

Betfair implied probabilities:

Trump 59.5% (53.1%)

Rubio 15.1% (16.0%)

Cruz 9.7% (10.1%)

Christie 6.9% (7.9%)

Kasich 4.5% (8.3%)

Bush 4.3% (4.6%)

Oddschecker implied probabilities:

Trump 55.6% (57.6%)

Rubio 17.5% (15.5%)

Cruz 9.7% (8.5%)

Christie 8.7% (10.3%)

Kasich 5.2% (4.7%)

Bush 3.3% (3.4%)

Predictwise forecast:

Trump 59% (51%)

Rubio 17% (13%)

Cruz 9% (11%)

Christie 6% (7%)

Kasich 3% (5%)

Bush 6% (3%)

2015 in review

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 8,600 times in 2015. If it were a concert at Sydney Opera House, it would take about 3 sold-out performances for that many people to see it.

Can we predict the Nobel Prize winners? You can bet on it.

When the Belarusian writer Svetlana Alexievich won this year’s Nobel Prize for Literature, it was not unexpected. She was not only the clear favourite with the bookmakers but had traded as one of the leaders in the betting in the previous two years.

While firms lay odds on the literature and peace prizes, there are no betting lines available for the Nobel Prizes in physics, chemistry and medicine. Instead, there is an organised platform which seeks to predict winners based on research citations.

Betting: from Hollywood to the Vatican

This has been a very good year for favourites in awards contests. The favourite in the betting won almost every single one of the 24 Oscar categories at this year’s Academy Awards. This domination of the favourites has been documented in politics for nearly 150 years, ever since hot favourite Ulysses S Grant strolled to the US presidency in 1868. The favourite in the betting has won almost every single presidential election held since.

But the Nobel Prize deliberations are quite different from a political election or even a Hollywood awards ceremony. Instead, they are a little more like a papal conclave, where the deliberations are secretive and there is no defined shortlist of nominees. Betting on papal conclaves has been formally recorded from as early as 1503. In that year, the brokers in the Roman banking houses who offered odds on who would be elected Pope made Cardinal Francesco Piccolomini the clear favourite. It was no surprise, therefore, when he went on to become Pope Pius III.

Since then the betting markets have had a mixed record of success in predicting the winner. For example, Cardinal Ratzinger was a warm favourite to be elected pope in 2005, and duly became Pope Benedict. The election of Cardinal Bergoglio as Pope Francis, on the other hand, came as more of a surprise to the markets.

Betting on processes that take place behind closed doors also happens outside the church. In 2009, crowdsourced fantasy league (or “prediction market”) FantasySCOTUS.net launched an attempt to peer behind the doors of the US Supreme Court, predicting its deliberations – a market still going strong today. The Supreme Court might be particularly suitable for a prediction market, in that not only is there a relatively small number of decision makers, but the universe of possible outcomes is also very limited. Predicting the Nobel Prize announcements might be expected to be somewhat more difficult.

So how do the betting companies compile their odds when it comes to the Nobels? Ladbrokes has said that, in the absence of hard information, the best way is to consult literary contacts and follow relevant online discussions. It makes this effort despite the fact that the Nobel in literature only takes about £50,000 in bets, compared with a couple of million for a big football match.

Patchy record

How well have the markets performed to date? For the Sveriges Riksbank Prize in Economic Sciences, established in 1968 by Sweden’s central bank and considered an unofficial “Nobel”, the most ironic failure of a sort came in 2009, when the betting market offered by Ladbrokes had Eugene Fama, a pioneering exponent of the theory of efficient markets, as the solid 2-to-1 favourite. Assuming the market was truly efficient in respect of all relevant information, we might have expected him to be well up there among the top contenders. But the prize was shared by Elinor Ostrom and Oliver Williamson, both of whom were trading as 50-to-1 longshots before the announcement. Fama did go on to share the Nobel Prize four years later.

On the other hand, Harvard University had already set up its own dedicated economics prize prediction market, which did much better than Ladbrokes by making Oliver Williamson one of the favourites. In 2010, Peter Diamond shared the prize after having been listed as one of the favourites by Harvard.

Of the others in the top eight in 2010, Jean Tirole went on to win in 2014, Robert Shiller and Lars Peter Hansen in 2013. Thomas Sargent and Christopher Sims, who shared the 2011 prize, were among the favourites in the 2008 Harvard prediction market, which has since closed down.

Most of the market-based predictions, however, focus on the Nobel Prizes for Literature and Peace. In 2014, French writer Patrick Modiano won the Literature Prize. Before the announcement, Modiano was trading as a reasonably well-fancied joint fourth favourite. The previous year, Canadian Alice Munro was heavily backed into second favourite before claiming the prize. In 2011, Tomas Tranströmer won the Literature Prize, having been clear favourite in 2010.

The peace prize, which is awarded by a committee of five people who are chosen by the parliament of Norway, is slightly more complicated as awards are sometimes given to organisations rather than individuals. This also makes it less satisfying for potential market players. Still, the 2014 Nobel Peace prize was shared by Malala Yousafzai and Kailash Satyarthi. Malala had actually been backed to win in the previous year.

The physics, chemistry and medicine prizes, on the other hand, have not really attracted market attention to date, probably because they are too niche for the regular player. Instead this role has been taken up by Thomson Reuters, which claims to have identified 37 Nobel Prize winners since 2002 on the basis of an analysis of scientific research citations within the Web of Science. In an interesting development, Thomson Reuters has also now established a People’s Choice Poll, more akin to the “wisdom of crowds” methodology of a prediction market. The scientific society Sigma Xi also runs a prediction contest that enables people to vote for their favourite.

2015 Nobels: the verdict

This outline of the past few years is pretty much par for the course in the history of Nobel predictions. Far from perfect, but not at all unimpressive. Interestingly, the market is often a better predictor of future Nobel laureates than of that particular year’s winner.

This year, although the market got the Literature Prize spot on, it had not predicted that the Tunisian National Dialogue Quartet would win the Peace Prize. So well done to those who placed a bet on “none of the above”, which was trading as close second favourite to Angela Merkel on the PredictIt prediction market before the announcement.

Thomson Reuters got the 2015 physics, chemistry and medicine prizes wrong. This year it also highlighted Richard Blundell, John List and Charles Manski as the leading candidates for the economics prize, making special note of the first, who also won its People’s Choice poll. There was no organised betting on economics this year. This year’s economics Nobel actually went to Angus Deaton, currently Dwight D. Eisenhower Professor of Economics and International Affairs at Princeton (and formerly of Cambridge and Bristol universities), for his analysis of consumption, poverty and welfare.

So what will the prediction industry look like in ten years? On current trends, it will have grown up a lot. The science of forecasting and the power of prediction markets are currently growing apace. Will there ever come a time, I wonder, when we don’t need to wait for the announcement, but instead just look to the odds? Maybe we should set up a prediction market to answer that question.

Why Do Views Differ? Bad reasoning, bad evidence or bad faith?

Follow on Twitter: @leightonvw

When two parties to a discussion differ, it is useful, in seeking to resolve this, to determine from where the difference arises, and whether it is resolvable in principle.

The reason might be that the parties to the difference have access to different evidence, or else interpret that evidence differently. Another possibility is that one of the parties is applying a different, or better, process of reasoning than the other. Finally, differences might arise from each party adopting a different axiomatic starting point. So, for example, if two parties differ in a discussion on euthanasia or abortion, or even the minimum wage, with one party strongly in favour of one side of the issue and the other strongly opposed, it is critical to establish whether this difference is evidence-based, reason-based, or derived from axiomatic differences. We are assuming here that the stance adopted by each party on an issue is genuinely held, and is not part of a strategy designed to advance some other objective or interest.

The first thing is to establish whether any amount of evidence could in principle change the mind of an advocate of a position. If not, that leads us to ask where the viewpoint comes from. Is it purely reason-based, in which case (in the sense I use the term) it should in principle be either provable or demonstrably obvious to any rational person who holds a different view? Or is the viewpoint held axiomatically, so that it is not refutable by appeal to reason or evidence? If the different viewpoints are held axiomatically by the parties to the difference, then the discussion should fall silent.

Before falling silent, however, we should determine whether the conflict in beliefs between one person and another actually is axiomatic. The first question to ask is whether the beliefs are held by one or more parties as self-evident truths, or whether they are open to debate based on reason or evidence. In seeking to clarify this, it is instructive to determine whether the confidence of the parties to the difference could in principle or practice be shaken if others who might be considered equally qualified to hold a view on the matter disagree. In other words, when other people’s beliefs conflict with ours, does that in any way challenge the confidence we hold in these beliefs?

Let us pose this in a slightly different way. If two parties hold conflicting views or beliefs, and each of these parties has no good reason to believe that they are the person more likely to be right, does that at least make them doubt their view, even marginally? Does it perhaps give a reason to doubt that either party is right? The more closely the answer to these questions converges on the negative, the more likely it is that the divergent views or beliefs are held axiomatically.

If the differences are not held axiomatically, both parties should in principle be able to converge on agreement. So the question reduces to establishing whether the differences arise from divergences in reasoning, which should be resolvable in principle, or else by differences in access to evidence or proper evaluation of the evidence. Again, the latter should be resolvable in principle. In some cases, a viewpoint is held almost but not completely axiomatically. It is therefore in principle open to an appeal to evidence and/or reason. The bar may be set so high, though, that the viewpoint is in practice axiomatically held.

If only one side to the difference holds a view axiomatically, or as close as to make it indistinguishable in practical terms, then the views could in principle converge by appeal to reason and evidence, but only converge to one side, i.e. the side which is holding the view axiomatically. This leads to a situation in which it is in the interest of a party seeking to change the view of the other party to conceal that their viewpoint is held axiomatically, and to represent it as reason-based or evidence-based, but only where the other party is not known to also hold their divergent view axiomatically. This leads to a game-theoretic framework where the optimal strategy, in a case where both parties know that the other party holds a view axiomatically, is to depart the game.

In all other cases, the optimal strategy depends on how much each party knows about the drivers of the viewpoint of the other party, and the estimated marginal costs and benefits of continuing the game in an uncertain environment. It is critical in attempting to resolve such differences of viewpoint to determine whence they arise, therefore, in order to determine the next step. If they are irresolvable in principle, it is important to establish that at the outset. If they are resolvable in principle, setting this framework out at the beginning will help identify the cause of the differences, and thus help to resolve them. What applies to two parties is generalizable to any number, though the game-theoretic framework in any particular state of the game may be more complex.

In all cases, transparency in establishing whether each party’s viewpoint is axiomatically held, reason-based or evidence-based, is the welfare-superior environment, and should be aimed for by an independent facilitator at the start of the game. Addressing differences in this way helps also to distinguish whether views are being proposed out of conviction, or whether they are being advanced out of self-interest or as part of a strategy designed to achieve some other objective or interest.

So in seeking to derive a solution to the divergence of view or belief, we need to ascertain whether the differences are actually held axiomatically. To do this, we need to examine the source of the belief, and whether this can be dissected by appeal to reason or evidence.

To do so, we need to ask whether there are absolute ethical imperatives, which can be agreed upon by all reasonable people, or not. For example, is it reasonable to agree that people should not be treated merely as means but as ends in themselves?

Of course, people might out of self-interest choose to treat others as means, disregarding their status as human beings of equal worth, but this is different to holding that to be ethically true. The appeal to a Rawlsian ‘veil of ignorance’ argument helps here, where each person must choose whether to hold their ethical framework without knowing in advance who or where they are, poor or rich, able-bodied or disabled, male or female, free or slave.

In this context, we can ask whether there are ethical principles, views, or beliefs, that all reasonable people could agree with, or that all people could not reasonably reject.

Are there such?

Take, for example, the idea that people should be treated as ends, not means. What if we harm someone in order to prevent a greater harm to someone else? If we do all we can do to minimize harm to them in so doing, are we treating them merely as means? Suppose, for example, you need to jam someone’s foot in a mechanism, so crushing it, in order to save another person from being killed by the mechanism. You are certainly using the other person as a means to an end, but if you deliberately did the least harm to the person commensurate with saving the other person, is that really treating the person merely as a means, given that you paid full attention to the harmed person’s well-being in saving the life of the other person?

Looked at another way, does treating people as ends and not means imply that their interests should not be sacrificed even if that creates greater overall good? In other words, is it sufficient to take each person’s interests into account or must they be fundamentally and absolutely protected? Does everyone, in other words, have an innate right to freedom, which includes independence from being constrained by another’s choice? Do we have absolute duties we owe each other arising from our equal status as persons? To what extent, in other words, are the rights and freedoms of people absolute as opposed to instrumental, and is it possible to formalize an ethical code which all reasonable people could assent to, or not reasonably reject, around this? If not, axiomatic differences remain possible. To the extent that we can, less so.

More generally, are there absolute ethical principles which hold regardless of consequences, regardless of how much benefit or harm accrues from acting upon them? This reduces to whether we conceive of morality as grounded in relations between people and the duties we owe each other, or whether it is about the relations of people to states of affairs, such as the maximizing of overall well-being, however defined. Such a definition could include happiness, knowledge, creativity, love, friendship, or else simply realisation of personal preferences or desires.

So, to summarize, I suggest that views and beliefs can be usefully classified into fundamental (or axiomatic) ethical imperatives and ethical imperatives based on reason and evidence. While reason-based and evidence-based ethical imperatives can, of course, be influenced by evidence and reason, fundamental ethical imperatives cannot.

In considering ethical imperatives which are duty-based, grounded in moral duties owed by all people to each other, as opposed to the effect upon general well-being, we can benefit from the use of a Bayesian framework.

Bayes’ theorem concerns how we formulate beliefs about the world when we encounter new data or information. The original presentation of Rev. Thomas Bayes’ work, ‘An Essay towards Solving a Problem in the Doctrine of Chances’, gave the example of a person who emerges into the world and sees the sun rise for the first time. At first, he does not know whether this is typical or unusual, or even a one-off event. However, each day that he sees the sun rise again, his confidence increases that it is a permanent feature of nature. Gradually, through a process of statistical inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches 100 per cent. The Bayesian viewpoint is that we learn about the universe and everything in it through approximation, getting closer and closer to the truth as we gather more evidence. The Bayesian view of the world thus sees rationality probabilistically.
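
As a concrete illustration, here is a minimal sketch of that updating process using Laplace’s rule of succession, the standard formalisation of the sunrise example (the formula and code are my gloss, not part of Bayes’ essay):

```python
# Laplace's rule of succession: after observing s successes in n trials
# (here, every observed day brought a sunrise, so s = n), a uniform prior
# gives a posterior predictive probability of (n + 1) / (n + 2).
from fractions import Fraction

def p_sun_rises_tomorrow(days_observed: int) -> Fraction:
    return Fraction(days_observed + 1, days_observed + 2)

for n in (1, 10, 1_000, 100_000):
    print(f"after {n:>6} sunrises: {float(p_sun_rises_tomorrow(n)):.5f}")
# The probability creeps towards, but never quite reaches, 100 per cent
# as the evidence accumulates -- exactly the convergence described above.
```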

I propose that we apply the same Bayesian perspective to Immanuel Kant’s duty-based ‘Categorical Imperative.’ This can be summarised in the form: ‘Act only according to that maxim which you could simultaneously will to be a universal law.’ On this basis, to lie or to break a promise doesn’t work as a general code of conduct, because if everyone lied or broke their promises, then the very concept of telling the truth or keeping one’s promises would be turned on its head. A society that operated according to the universal principle of lying or promise-breaking would be unworkable. Kant thus argues that we have a perfect duty not to lie or break our promises, or indeed do anything else that we could not justify being turned into a universal law.

The problem with this approach, in many eyes, is that it is too restrictive. If a crazed gunman demands that you reveal which way his potential victim has fled, you must not lie to save the victim, because lying could not reasonably be universalised as a rule of behaviour.

I propose that the application of a justification argument can solve the problem. This argument from justification, which I propose, is that we have no duty to respond to anything which is posed without reasonable appeal to duty. So, in this example, the gunman has no reasonable appeal to a duty of truth-telling from us, so we can make an exception to the general rule.

In any case, we need to assess the practical implications of Kant’s ‘universal law’ maxim from a probabilistic perspective. In the great majority of situations, we have no defence based on the argument from justification for lying or breaking a promise. So the universal expectation is that truth-telling and promise-keeping are overwhelmingly probable. The more often this turns out to be true in practice, the closer this approach converges on Kant’s absolute imperative by a process of simple Bayesian updating. As such, this is the ethical default position, a default position based on appeal to rules of conduct which should be universally willable as a general rule of behaviour, or which it would be unreasonable to reject as such. Because it is rare that we need to deviate from this, the loss to the general good arising from the reduction in the credibility of truth-telling is commensurately less.

In a world in which ethics is indeed based on duty, I suggest in any case that the broader conception of duty, including the appeal to the argument from justification, should inform our actions. As long as this is clearly formulated within the universal law, i.e. tell the truth except where the person asking you to do so has no right to ask it of you, the core ethical rule of action is not weakened by lying in the crazed gunman example.

But can we use this sort of approach to arrive, in principle, at an ethical framework to which all reasonable people might subscribe?

I suggest the following.

First, that there are certain principles, most fundamentally that we owe each person a duty of respect, to be treated as an end in themselves and not as a means to an end, based on our equal status as people. This is most clearly seen if we are asked to decide on the merit of this through a ‘veil of ignorance’ as to our position in the world. Secondly, that we should adopt ethical codes of behaviour whose principles would be agreeable to all reasonable people, or could not reasonably be rejected by them.

As such, we have Kant’s framework of ethical imperatives, as well as T.M. Scanlon’s idea of a contract between human beings based on what we owe each other, which he summarizes as the requirement that everyone ought to follow the principles that no one could reasonably reject.

As such, prohibitions on cheating, lying and breaking promises are uncontroversial as core ethical principles, because they respect our duty to treat others as ends, not means to an end, and because a world in which cheating, lying and breaking promises are the norm is not a world in which things go for the best. So we should aim to obey these rules. Indeed, by the principle of simple Bayesian updating, we can demonstrate that on the great majority of occasions this works, and so we converge on these principles of behaviour as the default position. If there arise occasions when it is clear that adherence to the principle does not work for the best, however, we may have a duty to deviate from the default position, but only on the basis of committed and sure deliberation that the exception is warranted. It is not a position to be taken lightly.

The example of the crazed gunman demanding to know where the potential victim has fled is one such example. The value of the duty to the potential victim as a unique human being, as well as the damage to overall well-being, will likely outweigh the value of the consequent reduction in the value of truth-telling, though no decision to deviate from the default position should be taken lightly. There is the additional issue of the argument from justification to consider here, i.e. that the duty to tell the truth is conflicted when the person demanding you tell the truth has no moral justification to do so.

Take another example. By crushing a person’s foot in a sophisticated explosive device, you will save the lives of fifty people. The default position is not to use another person as means, but this conflicts with the duty to protect others as well.

Derek Parfit seeks to reconcile the ethical theories derivable from Kantian deontology, Scanlon-type contractualism, and consequentialism, into a so-called ‘Triple Theory’, i.e.

An act is wrong if and only if, or just when, such acts are disallowed by some principle that is:

a. One of the principles whose being universal laws would make things go best.

b. One of the only principles whose being universal laws everyone could rationally will.

c. A principle that no one could reasonably reject.

To the duty-based approach, I would add the argument from justification, i.e. there is no duty to respond to any request which is posed without reasonable appeal to duty.

The next step is to identify actual examples of differences of view or belief or action, and determine whether we can resolve these differences through a synthesis of re-configured (by appeal to justification) Kantian deontology, contractualism and consequentialism. We could call this adapted Kantian rule consequentialism, mediated through a Bayesian filter.

To do so, I will consider the well-known stylised examples of the Trolley Problem and the Bridge Problem. In one version of the Trolley Problem, a trolley on a rail track is heading straight for five unsuspecting people, but a switch can be thrown to divert this to hit just one person. Is it right, with no other knowledge, to divert the train? In the Bridge Problem a very heavy man can be pushed off a bridge without his knowledge to prevent a runaway locomotive from striking and killing, say, five people on the track.

Are either of these scenarios consistent with adapted Kantian rule consequentialism? In the case of the Bridge, the default position is clearly that it is wrong to deliberately take someone’s life, which is an extreme case of interfering with the human right to be considered an end and not a means to an end, and would in almost all cases be the default position. Appeal to a consequentialist viewpoint, based on the saving of five lives for one, would seem to conflict with the idea that this sort of action would make things go best if adopted as a general rule of behaviour. A world so structured ethically would very arguably not be one in which things go best. What if the pushing of the man off the bridge crushed an explosive device that would have saved a million lives? The question that arises here is whether it is right to take each case on its merits.

To take each case on its merits, with a view to the actual consequences in each case, is an act utilitarian ethical prescription. This is, however, a very damaging ethical prescription, as it leaves the default rule very seriously, if not fatally, weakened, which in the bigger picture would conflict with the objective of making things go best.

This leaves us with a very difficult ethical dilemma. In the first case, saving the five people, it is reasonable to reject the pushing of the man off the bridge as a part of the universal rule that people be treated as ends not means. Killing one man to save a million would mean weakening the default position, just as would the act of judging the right to take an innocent life on a case by case basis.

The Trolley Problem is easier, because it is not a matter of deliberately killing someone, but of saving the five by diverting the train from their path. It is unfortunate that the one person is on the other track, but there is no deliberate intention to kill that person to save the others. In effect, though, the outcome is the same, so the question is whether the intention matters. It does if we address this in terms of willingness to make the action a universalisable code of behaviour.

In both cases, though, we need to consider the value of the default position, and how much damage is incurred to making things go for the best if we act in such a way as to weaken or even destroy this default position. It is this consideration which, it seems to me, lies at the heart of synthesising deontological (duty-based) and contractualist ethics-based criteria of behaviour and those based on pure consequentialism, such as maximising the greatest happiness of the greatest number. In particular, damage to the default position works to weaken the consequentialist outcome, and the more damage is done to it, the more damage is done on its own terms to the consequentialist measurement of outcome.

It is this default-based ethical calculus which to me synthesises the deontological, contractualist and consequentialist ethical frameworks, and differences of opinion in the proper ethical judgement of good and bad behaviour derive, it seems to me, from differences in the value attached to the maintenance of this default.

So belief in the absolute value of the default position would mean that it is never right to push the man off the bridge, however many millions are saved.

In setting the value of the default position, this might be done by a calculus weighing the overall loss to the sum of well-being from weakening confidence and trust in the default position against the directly caused gain to the well-being of those who benefit from the action. This would be a fundamentally consequentialist view of the world, albeit grounded in the strategic benefits to well-being of a universally trusted default position. The value of the default position might on the other hand be considered absolute and axiomatically held: that it is always wrong, say, to sacrifice an innocent and unwilling person to save any number of lives, or even to use a human being as a means to achieve a goal based on some wider conception of general well-being.

Insofar as these differences are axiomatically held, no resolution can be achieved. It might, on the other hand, be the case that the differences are the result of faulty reasoning or evidence. Either way, consideration through the lens of this default-based ethical framework might help clarify the reason for differences of view, belief and action, and even change those views, beliefs or actions.

One task now is to apply this ethical lens, not least to some of the great moral and touchstone issues which divide opinion, with a view to at least making progress in resolving differences of view, belief and action.

Why is there Something rather than Nothing? A Solution.

It shouldn’t be possible for us to exist. But we do. That’s the sort of puzzle I like exploring. So I will.

Let’s start with the so-called ‘Cosmological Constant’. This is an extra term added by Einstein in working out equations in general relativity that describe a non-expanding universe. It is the value of the energy density of the vacuum of space, and it explains why a static universe doesn’t collapse in upon itself through the action of gravity. It’s true that the force of gravity is infinitesimally small compared to the electromagnetic force, but it has a lot more influence on the universe because all the positive and negative electrical charges in the universe somehow seem to balance out. Indeed, if there were just a 0.00001 per cent difference in the net positive and negative electrical charges within a body, it would be torn apart and cease to exist. This cosmological constant, therefore, is added to the laws of physics simply to balance the force of gravity contributed by all of the matter in the universe. What it represents is a sort of unobserved ‘energy’ in the vacuum of space which possesses density and pressure, and which prevents a static universe from collapsing in upon itself.

But we now know from observation that galaxies are actually moving away from us and that the universe is expanding. In fact, the Hubble Space Telescope observations in 1998 of very distant supernovae showed that the Universe is expanding more quickly now than in the past. So the expansion of the Universe has not been slowing due to gravity, but has been accelerating.

We also know how much unobserved energy there is, because we know how it affects the Universe’s expansion. But how much should there be? We can calculate this using quantum mechanics. The easiest way to picture this is to visualise ‘empty space’ as containing ‘virtual’ particles that continually form and then disappear. This ‘empty space’, it turns out, should ‘weigh’ 10 to the power of 93 grams per cubic centimetre. Yet the actual figure differs from that prediction by a factor of 10 to the power of 120: the ‘vacuum energy density’ as predicted is simply 10 to the power of 120 times too big. That’s a 1 with 120 zeros after it. So there is something cancelling out all this energy, making it 10 to the power of 120 times smaller in practice than it should be in theory. In other words, the various components of vacuum energy are arranged so that they essentially cancel out.

Now this is very fortuitous. If the cancellation figure were one power of ten different, 10 to the power of 119, then galaxies could not form, as matter would not be able to condense, so no stars, no planets, no life. So we are faced with the mindboggling fact that the positive and negative contributions to the cosmological constant cancel to 120-digit accuracy, yet fail to cancel beginning at the 121st digit. In fact, the cosmological constant must be zero to within one part in roughly 10 to the power of 120 (and yet be nonzero), or else the universe either would have dispersed too fast for stars and galaxies to have formed, or else would have collapsed upon itself long ago. How likely is this by chance? Essentially, it is the equivalent of tossing a coin and needing to get heads 400 times in a row, and achieving it. Go on. Do you feel lucky? Now, that’s just one constant that needs to be just right for galaxies and stars and planets and life to exist. There are quite a few, independent of this, which have to be equally just right, but this I think sets the stage. I’ve heard this called the fine-tuning argument.
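
A quick calculation shows why the coin-toss analogy fits; this is a minimal sketch, assuming nothing beyond the figures quoted above:

```python
# The chance of 400 heads in a row is 1 in 2**400; this confirms that
# 2**400 is on the order of the 1-in-10^120 cancellation described above.
from math import log10

print(f"2^400 is about 10^{log10(2 ** 400):.0f}")  # -> 2^400 is about 10^120
```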

There is also the symmetry/asymmetry paradox, which has recently drawn increasing attention. In some respects the Universe demands perfect symmetry: the exact balance of positive and negative charge, for example, is what ensures the conservation of electric charge. In other respects it demands a tiny asymmetry: if the Big Bang had produced exactly equal numbers of protons and antiprotons, of matter and antimatter, they would have annihilated each other, leaving a Universe empty of its atomic building blocks. Fortuitously for the existence of a live Universe, protons actually outnumbered antiprotons, by just one part in a billion. If it had been the other way round, if the charge symmetry had been broken while the vanishingly tiny matter-antimatter asymmetry had been absent, if protons and antiprotons had not differed in number by that one part in a billion, there would be no galaxies, no stars, no planets, no life, no consciousness, no question for us to consider.
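
To make that one-in-a-billion surplus concrete, here is a toy piece of bookkeeping in Python; the round numbers are purely illustrative, not measured values:

    # Toy bookkeeping for the one-in-a-billion matter surplus described above
    antiprotons = 1_000_000_000
    protons = antiprotons + 1           # matter outnumbers antimatter by one part in a billion
    survivors = protons - antiprotons   # everything else annihilates in proton-antiproton pairs
    print(survivors)   # 1: all the ordinary matter we see descends from such leftovers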

In summary, then, if the conditions in the Big Bang which started our Universe had been even the tiniest of a tiniest of a tiny bit different, with regard to a number of independent physical constants, our Universe would not have been able to exist, let alone allow living beings to exist within it. So why are they so right?

Let us first tackle those (I’ve met a few) who say that if the conditions hadn’t been right, we would not have been able even to ask the question. This sounds a clever point, but in fact it is not. It would be absolutely bewildering, for example, if I had survived a fall from an aeroplane at 39,000 feet onto tarmac without a parachute, but my survival would still be a question very much in need of an answer. To say that I couldn’t have posed the question if I hadn’t survived the fall is no answer at all.

Others propose the argument that since there must be some initial conditions, the conditions which made the Universe and the life within it possible were just as likely to prevail as any others, so there is no puzzle to be explained.

But consider an analogy. Two people, Jack and Jill, are arguing over whether Jill can control whether a fair coin lands heads or tails. Jack challenges Jill to toss the coin 400 times. He says he will be convinced of Jill’s amazing skill if she can toss heads followed by tails, 200 times in a row, and she proceeds to do so. Jack could now argue that a head was just as likely as a tail on every single toss of the coin, so this sequence of heads and tails was, in retrospect, just as likely as any other outcome. But that would clearly be a very poor explanation of the pattern that had just occurred. That particular pattern was clearly not produced by coincidence. Yet it is the same argument as saying that initial conditions just right to produce the Universe and life were as likely as any of the billions of other patterns of initial conditions that would not have done so. There may be a reason for the pattern that was produced, but it needs a more profound explanation than proposing that it was just coincidence.
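
The arithmetic behind Jack’s fallacy is easy to check; a minimal Python sketch:

    # Probability of any one *named* sequence of 400 fair-coin tosses,
    # Jill's 'heads then tails' pattern repeated 200 times included.
    p = 0.5 ** 400
    print(p)   # about 3.9e-121: every named sequence is equally improbable,
               # yet only the patterned one cries out for explanation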

A second example. There is to be one lottery draw, devised by an alien civilisation. The lottery balls are numbered from 1 to 59, and the only way that we will escape destruction, we are told, is if the balls emerge from the drum in the exact sequence 1 to 59. The numbers duly come out in that exact sequence. Now, that outcome is no less likely than any other particular sequence, so a sceptic could claim that we were just lucky. That would clearly be nonsensical. A much more reasonable and sensible conclusion, of course, is that the aliens had rigged the draw to allow us to survive!
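
Just how lucky would we have needed to be? A line of Python puts the number of equally likely orderings of 59 balls, 59 factorial, at around 10 to the power of 80:

    import math

    orderings = math.factorial(59)   # equally likely orders in which 59 balls can emerge
    print(f"{orderings:.2e}")        # about 1.39e+80: one chance in roughly 10^80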

So the fact that the initial conditions are so fine-tuned deserves an explanation, and a very good one at that. It cannot be simply dismissed as a coincidence or a non-question.

One explanation that has been proposed, and that does deserve serious scrutiny, is that there have been many Big Bangs, with many different initial conditions. Assuming that there were billions upon billions of these, eventually one will produce initial conditions that are right for a Universe to at least have a shot at existing.

On this theory, we are essentially proposing a process statistically along the lines of the aliens drawing lottery balls over and over again, countless times, until the numbers come out in the sequence 1 to 59.
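
That process is easy to mimic on a toy scale. The sketch below shuffles a five-ball lottery until the ordered draw appears; the five-ball scale is my own simplification, since the full 59-ball analogue would need around 10 to the power of 80 redraws on average:

    import math
    import random

    # Toy 'many Big Bangs': reshuffle a small lottery until the ordered draw appears
    target = list(range(1, 6))    # five balls, so only 5! = 120 possible orderings
    balls = target.copy()
    draws = 0
    while True:
        random.shuffle(balls)
        draws += 1
        if balls == target:
            break
    print(f"ordered sequence after {draws} draws (expected about {math.factorial(5)})")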

On this basis, a viable Universe could arise out of re-generating the initial conditions at the Big Bang until the winning lottery sequence eventually comes up. Is this a simpler explanation of why our Universe and the life within it exist than an explanation based on a primal cause? And does simplicity even matter as a criterion of truth? Take the second question first: in the realm of scientific enquiry it is usually accepted that it does. A simpler explanation of the known facts is usually accepted as superior to a more complex one.

Of course, the simplest state of affairs would be a situation in which nothing had ever existed. This would also be the least arbitrary, and certainly the easiest to understand. Indeed, if nothing had ever existed, there would have been nothing to be explained. Most critically, it would solve the mystery of how things could exist without their existence having some cause. In particular, while it is not possible to propose a causal explanation of why the whole Universe or Universes exists, if nothing had ever existed, that state of affairs would not have needed to be caused. This is not helpful to us, though, as we know that in fact at least one Universe does exist.

Take the opposite extreme, where every possible Universe exists, underpinned by every possible set of initial conditions. In such a state of affairs, most of these might be subject to different fundamental laws, governed by different equations, composed of different elemental matter. There is no reason in principle, on this version of reality, to believe that each different type of Universe should not exist over and over again, up to an infinite number of times. Even our own type of Universe could exist billions of billions of times, or more, so that in the limit everything that could happen has happened and will happen, over and over again. This may be a true depiction of reality, but it, or anything remotely like it, seems a very unconvincing one. In any case, our sole source of understanding about the make-up of a Universe is the study of our own. On what basis, then, can we scientifically propose that these other, speculative Universes are governed by totally different equations and fundamental physical laws? They may be, but that is a heroic assumption.

Perhaps, then, the laws themselves are the same, and it is only the constants that differ: the constants that determine the relative masses of the elementary particles, the relative strengths of the physical forces, and many other fundamentals. If so, what is the law governing how these constants vary from Universe to Universe, and where do these fundamental laws come from? From nothing? It has been argued that absolutely no evidence exists that any Universe exists but our own, and that these unseen Universes are proposed simply to explain the otherwise baffling problem of how our Universe, and the life within it, can exist. That may well be so, but we can park it for now, as it is still at least possible that they do exist.

So let’s step away from requiring any evidence, and at least admit the possibility that there are a lot of universes, but not every conceivable universe. One version of this is that the other Universes have the same fundamental laws, are subject to the same fundamental equations, and are composed of the same elemental matter as ours, but differ in their initial conditions and constants. But this leaves us with the question of why there should be only just so many universes, and no more. A hundred, a thousand, a hundred thousand: whatever number we choose requires an explanation of why just that number. This is again very puzzling. If we didn’t know better, our best ‘a priori’ guess would be that there are no universes, and no life. We happen to know that’s wrong. So that leaves either our Universe alone; or a limitless number of universes, in which anything that could happen has happened or will happen, over and over again; or a limited number of universes, which raises the question: why just that number?

Is it because certain special features have to obtain in the initial conditions before a Universe can be born, and these features are limited in number? Let us assume this is so. It only raises the question of why these limited features cannot occur more than a limited number of times. If they could, there is no reason to believe the number of universes containing these special features would be anything less than limitless. So, on this view, our Universe exists because it contains the special features which allow a Universe to exist. But if so, we are back with the problem arising in the conception of all possible worlds, except that in this case it is only our own type of Universe (i.e. one obeying the equations and laws that underpin this Universe) that could exist limitless times. Again, this may be a true depiction of reality, but it seems a very unconvincing one.

The alternative is to assume that there is some limiting parameter to the whole process of creating Universes, along the lines of those versions of string theory which claim that there is a limit of 10 to the power of 500 solutions (admittedly a dizzyingly big number) to the equations that make up the so-called ‘landscape’ of reality. That sort of limiting assumption, however realistic or unrealistic it might be, would seem to offer at least a lifeline by which we might cling on to some semblance of common sense.
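
For a sense of scale, a limit of that size would still leave ample room for the 1-in-10-to-the-power-of-120 coincidence discussed earlier, at least on the heroic assumption (mine, not the text’s) that each solution amounts to one equally likely draw:

    # Rough scale comparison: a 10^500 landscape versus 1-in-10^120 fine-tuning
    landscape = 10**500       # claimed number of string-theory solutions
    odds_against = 10**120    # the fine-tuning coincidence quoted earlier
    viable = landscape // odds_against
    print(f"room for ~10^{len(str(viable)) - 1} 'viable' draws")   # ~10^380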

Before summarising where we have got to, a quick aside on the ‘Great Filter’ idea, which relates to the question of how life of any form could arise out of ‘dead stuff’ (inanimate matter), let alone in a form which could, and then did, lead to the building blocks of an evolutionary process that created consciousness. From what we know now, observable civilisations don’t seem to arise much, and possibly only once. Indeed, even in a universe that manages to exist, getting from ‘dead stuff’ to us seems to require a series of steps of mind-numbingly small probability. The Filter refers to the causal path from simple dead matter to a visible civilisation. The underpinning logic is that almost everything that starts along this path is blocked along the way, whether by one extremely hard step or by many very, very hard steps. Indeed, it’s commonly supposed that the journey has only ever been completed once here on earth: just exactly once, traceable so far to LUCA (our Last Universal Common Ancestor). If so, it may be why the universe out there seems for the most part to be quite dead. The biggest filter, so the argument goes, is that the origin of life from inanimate matter is itself very, very, very hard. It’s a sort of Resurrection, but an order of magnitude harder, because the ‘dead stuff’ had never been alive, and nor had anything else! And that’s just the first giant leap along the way. This is a big problem in its own right, but it’s one for another day, so let’s leave it aside and go back a step, to the origin of the universe. First, though, as promised before our short detour, a very quick summary.

Here goes. If we didn’t know better, our best guess, the simplest description of all possible realities, would be that nothing exists. But we do know better, because we are alive and conscious, and considering the question. Yet our Universe is far, far, far too fine-tuned, by a factor of billions of billions, to exist by chance if it is the only Universe. So there must be more; if our Universe is down to the roll of a die, a lot more. But how many more? If there is some mechanism for generating experimental universe upon universe, why should there be a limit to the process? And if there is not, there will be limitless universes, including limitless identical universes, in which in principle everything possible has happened, and will happen, over and over again.

Even if we accept that there is some limiter, we have to ask what causes this limiter to exist. And even if we don’t accept that there is a limiter, we still need to ask what determines that the equations representing the initial conditions are as they are, to create one Universe or many. What puts life into the equations and makes a universe or universes at all? And why should the mechanism breathing life into these equations have infused them with the physical laws that allow the production of any universe at all?

Some have speculated that a universe or universes could be created out of nothing: that a particle and an anti-particle, for example, could in theory be spontaneously generated out of what is described as a ‘quantum vacuum’. According to this theoretical conjecture, the Universe ‘tunnelled’ into existence out of nothing.

This would be a helpful handle for proposing some rational explanation of the origin of the Universe and of space-time if a ‘quantum vacuum’ were in fact nothingness. But that’s the problem with this theoretical foray into the quantum world. A quantum vacuum is not empty, nor nothing, in any real sense at all. It has a complex mathematical structure, and it is saturated with energy fields and virtual-particle activity. In other words, it is a thing, with structure and with things happening in it. As such, the equations that would form the quantum basis for generating particles, anti-particles, fluctuations, a Universe, actually exist and possess structure. They are not nothingness, not a void.

To be more specific, according to relativistic quantum field theories, particles can be understood as specific arrangements of quantum fields. So one particular arrangement could correspond to there being 28 particles, another to 240, another to no particles at all, and another to an infinite number. The arrangement which corresponds to no particles is known as a ‘vacuum’ state. But these relativistic quantum field theoretic vacuum states are particular arrangements of elementary physical stuff, no less so than our planet or solar system. The only case in which there would be no physical stuff would be if the quantum fields ceased to exist. But that’s the thing. They do exist. There is no something from nothing. And this something, and the equations which infuse it, has somehow had the shape and form to give rise to protons, neutrons, planets, galaxies and us.

So the question is what gives life to this structure, because without that structure, no amount of ‘quantum fiddling’ can create anything. No amount of something can be produced out of nothing, and even empty space is something, with structure and potential. More basically, how and why should such a thing as a ‘quantum vacuum’ exist, or have begun to exist, let alone be infused with the potential to create a Universe and conscious life out of non-conscious somethingness? Nothing comes from nothing. Uncaused, unconscious, Universe-contingent complex quantum laws and fields cannot act as the Prime Mover for themselves. Such a proposition is logically incoherent.

Nor is there an infinite regress in which the number of past events in the history of the universe is infinite. Infinity is a concept, an idea, an abstraction. To propose that it exists in reality in the universe leads to mathematically provable self-contradiction. For example, what is infinity minus infinity? So let’s be clear. Whatever the reason, whatever existed ‘before’ (in a causative sense) the ‘Big Bang’ was something real, albeit immaterial in the conventional sense of the word, as well as timeless and spaceless. Put another way, we need to ask, as a start, where the laws of quantum mechanics come from, why the particular kinds of fields that exist do so, and why there should be any fields at all. These fields, these laws, did exist and do exist; but whence were they breathed?

One set of candidates for this something which is immaterial, timeless and spaceless is abstract objects: numbers, properties (such as blueness). But uncaused abstract objects, even assuming such an uncaused category actually exists, could not stand in causal relation to an effect such as the origin of the universe, or to anything else. Numbers cannot cause anything. There is no possible world in which the number four, or the number six, or any other number or property (such as blueness) could cause or bring into effect anything, including the origin of the universe.

The other candidate for something immaterial, timeless and spaceless is disembodied mind, which could indeed stand in causal relation to an effect such as the origin of the universe, just as embodied mind (with which we are familiar) can. This sounds quite bizarre, of course, and faintly ridiculous. How can mind exist independently of body? Without getting into the philosophy of the mind-body problem, it is a question which would have sounded a lot stranger a few years ago than it does now. Mind existing independently of body no longer seems such a strange concept at all. In any case, as a matter of logic the cause would seem to be one of these two candidates, and, as a matter of reason, it is not the former. Hence it would seem, as a matter of logical coherence, if not immediately intuitively obvious, to be the latter, i.e. mind.

So there was something ‘before’ (causatively) the Big Bang, something which possessed structure, something which gave life to the equations which allowed a Universe to exist, and all life within it. And this something has produced a very fine tune, one which has given rise to human mind and consciousness, and the ability to ask the ultimate question: who or what produced a Universe in which we can pose this ultimate question, and to what end? In other words, who or what composed this very fine tune? And why?

It’s a big question, I realise that, but it’s a very important question. And it’s a question to which we seem to have provided, by the application of logic and reasoning, an answer. For those still seeking some other answer, I hope the analysis and reasoning provided here has at the very least helped point the way.

Further Reading and Links


Derek Parfit, ‘Why anything? Why this?’, Part 1, London Review of Books, 20, 2, 22 January 1998, pp. 24-27.

https://www.lrb.co.uk/v20/n02/derek-parfit/why-anything-why-this

Derek Parfit, ‘Why anything? Why this?’, Part 2, London Review of Books, 20, 3, 5 February 1998, pp. 22-25.

https://www.lrb.co.uk/v20/n03/derek-parfit/why-anything-why-this

John Piippo, ‘Giving Up on Derek Parfit’, July 22, 2012.

http://www.johnpiippo.com/2012/07/giving-up-on-derek-parfit.html

‘A universe made for me? Physics, fine-tuning and life’, Cosmos Magazine.

https://cosmosmagazine.com/physics/a-universe-made-for-me-physics-fine-tuning-and-life

John Horgan, ‘Science will never explain why there’s something rather than nothing’, Scientific American, April 23, 2012.

https://blogs.scientificamerican.com/cross-check/science-will-never-explain-why-theres-something-rather-than-nothing/

David Bailey, ‘What is the cosmological constant paradox, and what is its significance?’, 1 January 2017.

http://www.sciencemeetsreligion.org/physics/cosmo-constant.php

Fine Tuning of the Universe

http://reasonandscience.heavenforum.org/t1277-fine-tuning-of-the-universe

Robin Hanson, ‘The Great Filter – are we almost past it?’

http://mason.gmu.edu/~rhanson/greatfilter.html

Robin Hanson, ‘Dragon Debris?’, Overcoming Bias, December 2017.

http://www.overcomingbias.com/2017/12/dragon-debris.html

Last Universal Common Ancestor (LUCA)

https://en.wikipedia.org/wiki/Last_universal_common_ancestor

David Albert, ‘On the Origin of Everything’, Sunday Book Review, The New York Times, March 23, 2012.

Jeb Bush is running for President. Will he win?

Now that Jeb Bush has officially announced he is seeking the nomination of the Republican party as its candidate for President of the United States, it seems a good time to ask how likely it is that he will actually become the 45th US President. How can we best answer this?

A big clue can be found in a famous study of the history of political betting markets in the US, which shows that of the US presidential elections between 1868 and 1940, in only one year, 1916, did the candidate favoured in the betting end up losing, when Woodrow Wilson came from behind to upset Republican Charles E. Hughes in a very close contest. Even then, they were tied in the betting by the close of polling.

The power of the betting markets to assimilate the collective knowledge and wisdom of those willing to back their judgement with money has only increased in recent years as the volume of money wagered has risen dramatically, the betting exchanges alone seeing tens of millions of pounds trading on a single election. In 2004, a leading betting exchange actually hit the jackpot when its market favourite prevailed in every single state in that year’s election. The power of the markets has been repeated in every presidential election since.

For example, in 2008, the polls had shown both John McCain and Barack Obama leading at different times during the campaign, while the betting markets always had Obama as firm favourite. Indeed, on polling day he was as short as 20 to 1 on with the betting exchanges (a short sketch at the end of this post shows how to turn such odds into implied probabilities), while some polling still had the race well within the margin of error. In the event, Obama won by a clear 7.2%, and by 365 Electoral College Votes to 173.

In 2012, Barack Obama led Mitt Romney by just 0.7% in the national polling average on election day, with major pollsters Gallup and Rasmussen showing Romney ahead. British bookmakers were quoting the President at 5 to 1 on (stake £5 to win £1). Indeed, Forbes reflected the view of most informed observers, declaring that: ‘With one day to go before the election, we’re becoming super-saturated with poll data predicting a squeaker in the race for president. Meanwhile, bookmakers and gamblers are increasingly certain Obama will hang on to the White House.’ He went on to win by 3.9%, and by 332 Electoral College Votes to 206.

What is happening here is that the market is tapping into the collective wisdom of myriad minds, who feed in the best information and analysis they can because their own financial rewards depend directly upon it. As such, it is a case of ‘follow the money’, because those who know the most, and are best able to process the available information, tend to bet the most. Moreover, the lower the transaction costs (in the UK the betting public do not pay tax on their bets) and the information costs (information has never been in more plentiful supply, thanks to the Internet), the more efficient we might expect betting markets to become in translating information today into forecasts of tomorrow. For these reasons, modern betting markets are likely to provide better forecasts than ever before. In this sense, the market is like one mind that combines the collective wisdom of everybody.

So what does this brave new world of prediction markets tell us about the likely Republican nominee in 2016? Last time, they were telling us all along that it would be Mitt Romney. This time the high-tech crystal ball offered by the markets is seeing not one face, but three, and yes, Jeb Bush is one of them. But two other faces loom large alongside the latest incarnation of the Bush dynasty.
One is the Florida senator, Marco Rubio, and the other is the governor of Wisconsin, Scott Walker. According to the current odds, at least, it is very likely that one of these three men will be the Republican nominee.

According to the betting, Bush will struggle to win the influential Iowa caucus, which marks the start of the presidential election season. The arch-conservative voters there are expected to go for the man from Wisconsin. New Hampshire, the first primary proper, is likely to be closer. Essentially, though, this will be a contest between the deep pockets and connections of the Bush machine, the deep appeal of Scott Walker to the ‘severely conservative’ (a phrase famously coined by Mitt Romney), and the appeal of Marco Rubio to those looking for solid conservative credentials matched with relative youth and charisma.

By the time the race is run, the betting markets currently indicate that Bush is the name most likely (though by no means sure) to emerge. Rubio is likely to push him hardest, and it could be close. At current odds, though, Bush has the best chance of all the candidates in the field of denying the Democrats, and presumably Hillary Clinton, the White House. But whoever is nominated by the Republican Party, it is the Democrats who are still firm favourites to retain the keys to Washington DC’s most prestigious address.

Note: A version of this blog, with links to sources, first appeared in The Conversation UK on June 15, 2015. https://theconversation.com/jeb-bush-dives-into-the-presidential-race-ask-the-betting-markets-how-hell-do-43300
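
As promised above, here is a minimal Python sketch of how such ‘odds-on’ prices translate into implied probabilities (ignoring the bookmaker’s margin). The conversion is the standard one for fractional odds; the prices fed in are simply those quoted in this post:

    def implied_probability(stake: float, winnings: float) -> float:
        """Probability implied by fractional odds: stake 'stake' to win 'winnings'."""
        return stake / (stake + winnings)

    # '20 to 1 on' means staking 20 units to win 1 (Obama, 2008)
    print(f"20/1 on: {implied_probability(20, 1):.1%}")   # ~95.2%
    # '5 to 1 on' means staking 5 units to win 1 (Obama, 2012)
    print(f"5/1 on:  {implied_probability(5, 1):.1%}")    # ~83.3%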