
The ‘Fine Tuned’ Universe Problem – Guide Notes.

It shouldn’t be possible for us to exist. But we do. That’s counterintuitive. Take, for example, the ‘Cosmological Constant’. What it represents is a sort of unobserved ‘energy’ in the vacuum of space which possesses density and pressure, and which prevents a static universe from collapsing in upon itself. We know how much of this unobserved energy there is because we know how it affects the Universe’s expansion. But how much should there be? The easiest way to picture this is to visualise ‘empty space’ as containing ‘virtual’ particles that continually form and then disappear. This ‘empty space’, according to theory, should ‘weigh’ 10 to the power of 93 grams per cubic centimetre. Yet the actual figure differs from the prediction by a factor of 10 to the power of 120. The ‘vacuum energy density’ as predicted is simply 10 to the power of 120 times too big. That’s a 1 with 120 zeros after it. So there is something cancelling out all this energy, making it 10 to the power of 120 times smaller in practice than it should be in theory. In other words, the various components of vacuum energy are arranged so that they essentially cancel out.

Now this is very fortuitous. If the cancellation figure was one power of ten different, 10 to the power of 119, then galaxies could not form, as matter would not be able to condense, so no stars, no planets, no life. So we are faced with the fact that the positive and negative contributions to the cosmological constant cancel to 120-digit accuracy, yet fail to cancel beginning at the 121st digit. In fact, the cosmological constant must be zero to within one part in roughly 10 to the power of 120 (and yet be nonzero), or else the universe either would have dispersed too fast for stars and galaxies to have formed, or else would have collapsed upon itself long ago. How likely is this by chance? Essentially, it is the equivalent of tossing a coin and getting heads 400 times in a row.
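To see why 400 coin tosses is the right comparison, note that 2 to the power of 400 is of the same order as 10 to the power of 120. A minimal check in Python (purely illustrative):

```python
from math import log10

# Chance of 400 heads in a row with a fair coin: (1/2)^400.
p = 0.5 ** 400
print(p)                # ~3.9e-121

# Expressed as a power of ten: 400 * log10(2) is ~120.4,
# i.e. odds of about 1 in 10^120 -- the same order as the
# cancellation required of the cosmological constant.
print(400 * log10(2))
```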

Now, that’s just one constant that needs to be just right for galaxies and stars and planets and life to exist. There are quite a few, independent of this, which have to be equally just right, most notably the strength of gravity and of the strong nuclear force relative to electromagnetism and the observed strength of the weak nuclear force. Others include the difference between the masses of the two lightest quarks and the mass of the electron relative to the quark masses, the value of the global cosmic energy density in the very early universe, and the relative amplitude of density fluctuations in the early universe. If any of these constants had been slightly different, stars and galaxies could not have formed.

There is also the symmetry/asymmetry paradox. Where the Universe requires symmetry, it has it: the perfect balance of positive and negative charge ensures the conservation of electric charge. Yet if the Big Bang had produced an equal number of protons and antiprotons, of matter and antimatter, they would have annihilated each other, leaving a Universe empty of its atomic building blocks. Fortuitously for the existence of a live Universe, protons actually outnumbered antiprotons by just one part in a billion. Had they not differed in number by that one part in a billion, there would be no galaxies, no stars, no planets, no life, no consciousness, no question for us to consider.

In summary, then, if the conditions in the Big Bang which started our Universe had been even the tiniest bit different, with regard to a number of independent physical constants, our galaxies, stars and planets could not have existed, let alone led to the existence of living, thinking, feeling things. So why are they so right?

Let us first tackle those who say that if the conditions hadn’t been right, we would not have been able even to ask the question. This sounds like a clever point, but in fact it is not. It would be absolutely bewildering, for example, if I survived a fall from an aeroplane at 39,000 feet onto tarmac without a parachute, but my survival would still be a question very much in need of an answer. To say that I couldn’t have posed the question if I hadn’t survived the fall is no answer at all.

Others propose the argument that since there must be some initial conditions, the conditions which made the Universe and life within it possible were just as likely to prevail as any others, so there is no puzzle to be explained.

But this is like saying there are two people, Jack and Jill, who are arguing over whether Jill can control whether a fair coin lands heads or tails. Jack challenges Jill to toss the coin 400 times. He says he will be convinced of Jill’s amazing skill if she can toss heads followed by tails 200 times in a row, and she proceeds to do so. Jack could now argue that a head was equally likely as a tail on every single toss of the coin, so this sequence of heads and tails was, in retrospect, just as likely as any other outcome. But clearly that would be a very poor explanation of the pattern that just occurred. That particular pattern was clearly not produced by coincidence. Yet it is the same argument as saying that the initial conditions coming out just right to produce the Universe and life were just as likely as any of the billions of other sets of initial conditions that would not have done so. There may be a reason for the pattern that was produced, but it needs a more profound explanation than proposing that it was just coincidence.

A second example. There is a lottery draw, devised by an alien civilisation. Balls numbered from 1 to 59 are to be drawn, and the only way that we will escape destruction, we are told, is if the balls emerge from the drum in the exact sequence 1 to 59. The numbers duly come out in that exact sequence. Now that outcome is no less likely than any other particular sequence, so a sceptic could claim that we were just lucky. That would clearly be nonsensical. A much more reasonable and sensible conclusion, of course, is that the aliens had rigged the draw to allow us to survive!
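For scale, the chance of that draw happening by luck is one ordering out of 59 factorial. A quick computation putting a number on it (a sketch in Python):

```python
from math import factorial, log10

# One ordering out of all possible orderings of 59 balls.
orderings = factorial(59)
print(orderings)            # ~1.4e80 possible sequences
print(log10(orderings))     # ~80.1, so odds of about 1 in 10^80
```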

So the fact that the initial conditions are so fine-tuned deserves an explanation, and a very good one at that. It cannot be simply dismissed as a coincidence or a non-question.

An explanation that has been proposed that does deserve serious scrutiny is that there have been many Big Bangs, with many different initial conditions. Assuming that there were billions upon billions of these, eventually one would produce initial conditions that are right for a Universe to at least have a shot at existing.

On this theory, we are essentially proposing a process statistically along the lines of the aliens drawing lottery balls over and over again, countless times, until the numbers come out in the sequence 1 to 59.

On this basis, a viable Universe could arise out of re-generating the initial conditions at the Big Bang until one of the lottery numbers eventually comes up. Is this a simpler explanation of why our Universe and life exist than an explanation based on a primal cause? And does simplicity even matter as a criterion of truth? On the second question, simplicity is usually accepted as a guide in the realm of scientific enquiry: a simpler explanation of known facts is usually accepted as superior to a more complex one.

Of course, the simplest state of affairs would be a situation in which nothing had ever existed. This would also be the least arbitrary, and certainly the easiest to understand. Indeed, if nothing had ever existed, there would have been nothing to be explained. Most critically, it would solve the mystery of how things could exist without their existence having some cause. In particular, while it is not possible to propose a causal explanation of why the whole Universe or Universes exists, if nothing had ever existed, that state of affairs would not have needed to be caused. This is not helpful to us, though, as we know that in fact at least one Universe does exist.

Take the opposite extreme, where every possible Universe exists, underpinned by every possible set of initial conditions. In such a state of affairs, most of these universes might be subject to different fundamental laws, governed by different equations, composed of different elemental matter. There is no reason in principle, on this version of reality, why each different type of Universe should not exist over and over again, up to an infinite number of times, so even our own type of Universe could exist billions of billions of times, or more, so that in the limit everything that could happen has happened and will happen, over and over again. This may be a true depiction of reality, but it, or anything remotely near it, seems a very unconvincing one. In any case, our sole source of understanding about the make-up of a Universe is the study of our own. On what basis, therefore, can we scientifically propose that these other speculative Universes are governed by totally different equations and fundamental physical laws? They may be, but that is a heroic assumption.

Perhaps the laws are the same, but the constants that determine the relative masses of the elementary particles, the relative strengths of the physical forces, and many other fundamentals, differ. If so, what is the law governing how these constants vary from Universe to Universe, and where do these fundamental laws come from? From nothing? It has been argued that absolutely no evidence exists that any Universe other than our own exists, and that the reason these unseen Universes are proposed is simply to explain the otherwise baffling problem of how our Universe and life within it can exist. That may well be so, but we can park that for now, as it is still at least possible that they do exist.

So let’s step away from requiring any evidence, and move on to at least admitting the possibility that there are a lot of universes, but not every conceivable universe. One version of this is that the other Universes have the same fundamental laws, are subject to the same fundamental equations, and are composed of the same elemental matter as ours, but differ in the initial conditions and the constants. But this leaves us with the question of why there should be only just so many universes, and no more. A hundred, a thousand, a hundred thousand: whatever number we choose requires an explanation of why just that number. This is again very puzzling. If we didn’t know better, our best ‘a priori’ guess would be that there are no universes, no life. We happen to know that’s wrong, so that leaves our Universe alone; or else a limitless number of universes where anything that could happen has or will, over and over again; or else a limited number of universes, which begs the question, why just that number?

Is it because certain special features have to obtain in the initial conditions before a Universe can be born, and these are limited in number? Let us assume this is so. This only raises the question of why these limited features cannot occur more than a limited number of times. If they could, there is no reason to believe the number of universes containing these special features would be less than limitless. So, on this view, our Universe exists because it contains the special features which allow a Universe to exist. But if so, we are back with the problem arising in the conception of all possible worlds, except that in this case it is only our own type of Universe (i.e. one obeying the equations and laws that underpin this Universe) that could exist limitless times. Again, this may be a true depiction of reality, but it seems a very unconvincing one.

The alternative is to adopt the assumption that there is some limiting parameter to the whole process of creating Universes, along the lines of a version of string theory which claims that there is a limit of 10 to the power of 500 solutions (admittedly a dizzyingly big number) to the equations that make up the so-called ‘landscape’ of reality. That sort of limiting assumption, however realistic or unrealistic it might be, would seem to offer at least a lifeline to allow us to cling on to some semblance of common sense.

Before summarising where we have got to, a quick aside on the ‘Great Filter’ idea, which relates to the question of how life of any form could arise out of inanimate matter, and ultimately reach human consciousness. From what we know now, observable civilisations don’t seem to happen much, and possibly only once. Indeed, even in a universe that manages to exist, getting from inanimate matter to conscious humans seems to require a series of steps of apparently astonishing improbability. The Filter refers to the causal path from simple inanimate matter to a visible civilisation. The underpinning logic is that almost everything that starts along this path is blocked along the way, whether by one extremely hard step or by many very, very hard steps. Indeed, it’s commonly supposed that life has only ever arisen once here on earth. Just exactly once, traceable so far to LUCA (our Last Universal Common Ancestor). If so, it may be why the universe out there seems for the most part to be quite dead. The biggest filter, so the argument goes, is that the origin of life from inanimate matter is itself very, very, very hard. It’s a sort of Resurrection, but an order of magnitude harder, because the ‘dead stuff’ had never been alive, and nor had anything else! And that’s just the first giant leap along the way. This is a big problem of its own, but that’s for another day, so let’s leave it aside and go back a step, to the origin of the universe. Before we do so, let us, as I suggested before our short detour, summarise very quickly.

Here goes. If we didn’t know better, our best guess, the simplest description of all possible realities, would be that nothing exists. But we do know better, because we are alive and conscious, and considering the question. Yet our Universe is far, far, far too fine-tuned, by a factor of billions of billions, to exist by chance if it is the only Universe. So, if our Universe was produced by the roll of the die, there must be more, a lot more. But how many more? If there is some mechanism for generating experimental universe upon universe, why should there be a limit to this process? And if there is not, there will be limitless universes, including limitless identical universes, in which in principle everything possible has happened, and will happen, over and over again.

Even if we accept there is some limiter, we have to ask what causes this limiter to exist. And even if we don’t accept there is a limiter, we still need to ask why the equations representing the initial conditions are as they are, whether they create one Universe or many. What puts life into the equations and makes a universe, or universes, at all? And why should the mechanism breathing life into these equations have infused them with the physical laws that allow the production of any universe at all?

Some have speculated that a universe or universes could be created out of nothing: that a particle and an anti-particle, for example, could in theory spontaneously be generated out of what is described as a ‘quantum vacuum’. According to this theoretical conjecture, the Universe ‘tunnelled’ into existence out of nothing.

This would be a helpful handle for proposing some rational explanation of the origin of the Universe and of space-time if a ‘quantum vacuum’ were in fact nothingness. But that’s the problem with this theoretical foray into the quantum world. In fact, a quantum vacuum is not empty, or nothing, in any real sense at all. It has a complex mathematical structure and is saturated with energy fields and virtual-particle activity. In other words, it is a thing, with structure and things happening in it. As such, the equations that would form the quantum basis for generating particles, anti-particles, fluctuations, a Universe, actually exist and possess structure. They are not nothingness, not a void.

To be more specific, according to relativistic quantum field theories, particles can be understood as specific arrangements of quantum fields. So one particular arrangement could correspond to there being 28 particles, another to 240, another to no particles at all, and another to an infinite number. The arrangement which corresponds to no particles is known as a ‘vacuum’ state. But these relativistic quantum field theoretic vacuum states are still particular arrangements of elementary physical stuff, no less so than our planet or solar system. The only case in which there would be no physical stuff would be if the quantum fields ceased to exist. But that’s the thing. They do exist. There is no something from nothing. And this something, and the equations which infuse it, has somehow had the shape and form to give rise to protons, neutrons, planets, galaxies and us.

So the question is what gives life to this structure, because without that structure, no amount of ‘quantum fiddling’ can create anything. No amount of something can be produced out of nothing. Yes, even empty space is something with structure and potential. More basically, how and why should such a thing as a ‘quantum vacuum’ even have existed, begun to exist, let alone be infused with the potential to create a Universe and conscious life out of non-conscious somethingness?

It is certainly a puzzle, and arguably one without an intuitive solution.

Exercise

If the conditions in the Big Bang which started our Universe had been even the tiniest bit different, with regard to a number of independent physical constants, the galaxies, stars and planets would not have been able to exist. But if we didn’t exist, we couldn’t have asked the question as to why they were so right. In any case, since there must be some initial conditions, the conditions which gave rise to the Universe and life, however fortuitous, were just as likely to prevail as any others. So there is, for both reasons, no puzzle to be explained. Is this a convincing rebuttal of the ‘Fine-Tuned’ Universe problem? Why? Why not?

 

Reading and Links

Derek Parfit, ‘Why anything? Why this?’ Part 1. London Review of Books, 20, 2, 22 January 1998, pp. 24-27.

https://www.lrb.co.uk/v20/n02/derek-parfit/why-anything-why-this

Derek Parfit, ‘Why anything? Why this?’ Part 2. London Review of Books, 20, 3, 5 February 1998, pp. 22-25.

https://www.lrb.co.uk/v20/n03/derek-parfit/why-anything-why-this

John Piippo, Giving Up on Derek Parfit, July 22, 2012

http://www.johnpiippo.com/2012/07/giving-up-on-derek-parfit.html

A universe made for me? Physics, fine-tuning and life https://cosmosmagazine.com/physics/a-universe-made-for-me-physics-fine-tuning-and-life

John Horgan, ‘Science will never explain why there’s something rather than nothing’, Scientific American, April 23, 2012.

https://blogs.scientificamerican.com/cross-check/science-will-never-explain-why-theres-something-rather-than-nothing/

David Bailey, What is the cosmological constant paradox, and what is its significance? 1 January 2017. http://www.sciencemeetsreligion.org/physics/cosmo-constant.php

Fine Tuning of the Universe

http://reasonandscience.heavenforum.org/t1277-fine-tuning-of-the-universe

The Great Filter – are we almost past it? http://mason.gmu.edu/~rhanson/greatfilter.html

Dragon Debris?

Fine Tuning in Cosmology. Chapter 2. In: Bostrom, N. Anthropic Bias: Observation Selection Effects in Science and Philosophy. 2002. http://www.anthropic-principle.com/?q=book/chapter_2#2a

Last Universal Common Ancestor (LUCA)

https://en.wikipedia.org/wiki/Last_universal_common_ancestor

David Albert, ‘On the Origin of Everything’, Sunday Book Review, The New York Times, March 23, 2012.

Quantum World Thought Experiments – Guide Notes.

Is it possible to be both alive and dead at the same time? This is the question central to the famous Schrödinger’s Cat thought experiment. In the version posed by Erwin Schrödinger, a cat is placed in an opaque box for an hour with a small piece of radioactive material which has an equal probability of decaying or not in that time period. If some radioactivity is detected by a Geiger counter also placed in the box, a relay releases a hammer which breaks a flask of hydrocyanic acid, killing the cat. If no radioactivity is detected, the cat lives. Before we open the box at the end of the hour, we estimate the chance that the radioactive material will decay and the cat will be dead at 50/50, the same as that it will be alive. Before we open the box, however, is the cat alive (and we don’t know it yet), dead (and we don’t know it yet), or both alive and dead (until we open the box and find out)?

Common sense would seem to indicate that it is either alive or dead, but we don’t know until we open the box. Traditional quantum theory suggests otherwise. The cat is both alive, with a certain probability, and dead, with a certain probability, until we open the box and find out, when it has to become one or the other with a probability of 100 per cent. In quantum terminology, the cat is in a superposition (two states at the same time) of being alive and dead, which only collapses into one state (dead or alive) when the cat is observed. This might seem absurd when applied to a cat. After all, surely it was either alive or dead before we opened the box and found out. It was simply that we didn’t know which. That may be true when applied to cats. But when applied to the microscopic quantum world, such common sense goes out the window as a description of reality. For example, photons (the smallest measure of light) can exist simultaneously in both wave and particle states, and travel in both clockwise and anti-clockwise directions at the same time. Each state exists in the same moment. As soon as the photon is observed, however, it must settle on one unique state. In other words, the common sense that we can apply to cats we cannot apply to photons or other particles at the quantum level.

So what is going on? The traditional explanation as to why the same quantum particle can exist in different states simultaneously is known as the Copenhagen Interpretation. First proposed by Niels Bohr in the early twentieth century, the Copenhagen interpretation states that a quantum particle does not exist in any one state but in all possible states at the same time, with various probabilities. It is only when we observe it that it must in effect choose which of these states it exists as. At the sub-atomic level, then, particles seem to exist in a state of what is called ‘coherent superposition’, in which they can be two things at the same time, and only become one when they are forced to do so by the act of being observed. The total of all possible states is known as the ‘wave function.’ When the quantum particle is observed, the superposition ‘collapses’ and the object is forced into one of the states that make up its wave function.

The problem with this explanation is that all these different states do, in some sense, exist. By observing the object, we might reduce it down to one of these states, but what has happened to the others? Where have they disappeared to?

This question lies at the heart of the so-called ‘Quantum Suicide’ thought experiment.

It goes like this. A man (not a cat) sits down in front of a gun which is linked to a machine that measures the spin of a quantum particle (a quark). If it is measured as spinning clockwise, the gun will fire and kill the man. If it is measured as spinning anti-clockwise, it will not fire and the man will survive to undergo the same experiment again.

The question is – will the man survive, and how long will he survive for? This thought experiment, proposed by Max Tegmark, has been answered in different ways by quantum theorists depending on whether or not they adhere to the Copenhagen Interpretation. In that interpretation, the gun will go off with a certain probability, depending on which way the quark is spinning. Eventually, by the laws of chance, the man will be killed, probably sooner rather than later. A growing number of theorists believe something else, however. They see both states (the particle is spinning clockwise and spinning anti-clockwise) as equally real, so there are two real outcomes. In one world, the man dies and in the other he lives. The experiment repeats, and the same split occurs. In one world there will exist a man who survives an indefinite number of rounds. In the other worlds, he is dead.
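The ‘sooner rather than later’ claim is easy to quantify. Assuming a fair 50/50 trigger on each round, the single-world survival probability halves every time, while the many-worlds reading always keeps one branch in which the man survives. A minimal sketch:

```python
# Single-world survival probability after n rounds of a fair
# 50/50 trigger: (1/2)^n. Many-worlds keeps one surviving branch.
for n in (1, 10, 20, 50):
    print(f"{n:2d} rounds: survival probability {0.5 ** n:.2e}")
```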

The difference between these alternative approaches is critical. The Copenhagen approach is to propose that the simultaneously existing states (for example, the quark that is spinning both clockwise and anti-clockwise simultaneously) exist in one world, and collapse into one of these states when observed. Meanwhile, the other states mysteriously disappear. The other approach is to posit that these simultaneously existing states are real states: neither magically disappears, but each branches off into a different reality when observed. What is happening is that in one world, the particle is observed spinning clockwise (in the Quantum Suicide thought experiment, the man dies) and in the other world the particle is observed spinning the other way (and the man lives). Crucially, according to this interpretation both worlds are real. In other words, they are not notional states of one world but alternative realities. This is the so-called ‘Many Worlds Theory.’

Where is the burden of proof in trying to determine which interpretation of reality is correct? This depends on whether we take the one world that we can observe as the default position, or the wave function of all possible states, as represented in the mathematics, as the reality. Adherents to the Many Worlds position argue that the default is to go with what is described in the mathematics underpinning quantum theory – that the wave function represents all of reality. According to this argument, the minimal mathematical structure needed to make sense of quantum mechanics is the existence of many worlds which branch off, each of which contains an alternative reality. Moreover, these worlds are real. To say that our world, the one that we are observing, is the only real one, despite all the other possible worlds or measurement outcomes, has been likened to believing that the Earth was at the centre of the universe. There is no real justification, according to this interpretation, for saying that our branch of all possible states is the only real one, and that all other branches are non-existent or are ‘disappeared worlds.’ Put another way, the mathematics of quantum mechanics describes these different worlds. Nothing in the maths says that this world that we observe is more real than another world. So the burden of proof is on those who say it is. The viewpoint of the Copenhagen school is diametrically opposite. They argue that the hard evidence is of the world we are in, and the burden of proof is on those positing other worlds containing other branches of reality.

Which default position we choose to adopt will determine whether we are adherents of the Copenhagen or the ‘Many Worlds’ school.

For me personally, the logic of the argument points to the Many Worlds school. But to believe that they are right, and the Copenhagen school is wrong, seems kind of crazy, and totally counter-intuitive. In another world, of course, I’m probably saying the exact opposite.

Exercise

Consider the main strength and weakness of the ‘Many Worlds’ interpretation of reality.

References and Links

Do Parallel Universes Really Exist? HowStuffWorks. https://science.howstuffworks.com/science-vs-myth/everyday-myths/parallel-universe.htm

How Quantum Suicide Works. HowStuffWorks. https://science.howstuffworks.com/innovation/science-questions/quantum-suicide.htm

 

The ‘Simulated World’ Problem – Guide Notes.

Do we live in a simulation, created by an advanced civilisation, in which we are part of some sophisticated virtual reality experience? For this to be a possibility we can make the obvious assumption that sufficiently advanced civilisations will possess the requisite computing and programming power to create what philosopher Nick Bostrom termed such ‘ancestor simulations’. These simulations would be complex enough for the minds that are simulated to be conscious and able to experience the type of experiences that we do. The creators of these simulations could exist at any stage in the development of the universe, even billions of years into the future.

The argument around simulation goes like this. One of the following three statements must be correct.

  1. That civilisations at our level of development always or almost always disappear before becoming technologically advanced enough to create these simulations.
  2. That the proportion of these technologically advanced civilisations that wish to create these simulations is zero or almost zero.
  3. That we are almost sure to be living in such a simulation.

To see this, let’s examine each proposition in turn.

  1. Suppose that the first is not true. In that case, a significant proportion of civilisations at our stage of technology go on to become technologically advanced enough to create these simulations.
  2. Suppose that the second is not true. In this case, a significant proportion of these advanced civilisations run such simulations.
  3. If neither of the first two propositions is true, then there will be countless simulated minds, indistinguishable to all intents and purposes from ours, as there is potentially no limit to the number of simulations these civilisations could create. The number of such simulated minds would almost certainly be overwhelmingly greater than the number of minds that created them. Consequently, we would be quite safe in assuming that we are almost certainly inside a simulation created by some form of advanced civilisation.

For the first proposition to be untrue, civilisations must be able to pass through the phase in which they are capable of wiping themselves out, whether deliberately or by accident, carelessness or neglect, and never or almost never do so. This might perhaps seem unlikely based on our experience of this world, but becomes more likely if we consider all other possible worlds.

For the second proposition to be untrue, we would have to assume that virtually all civilisations that were able to create these simulations would decide not to do so. This again is possible, but would seem unlikely.

If we consider both propositions together, and we think it is unlikely that no civilisations survive long enough to achieve what Bostrom calls ‘technological maturity’, and unlikely too that hardly any of those that do would create ‘ancestor simulations’, then anyone considering the question is left with a stark conclusion. They really are living in a simulation.

To summarise. An advanced ‘technologically mature’ civilisation would have the capability of creating simulated minds. Based on this, at least one of three propositions must be true.

  1. The proportion of civilisations that reach this advanced ‘technologically mature’ stage is zero or close to zero.
  2. The proportion of these advanced civilisations that wish to run these simulations is close to zero.
  3. The proportion of those consciously considering the question who are living in a simulation is close to one.

If the first of these propositions is true, we will almost certainly not survive to become ‘technologically mature.’ If the second proposition is true, virtually no advanced civilisations are interested in using their power to create such simulations. If the third proposition is true, then conscious beings considering the question are almost certainly living in a simulation.
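The bookkeeping behind the third proposition can be made concrete. If even a small fraction of civilisations reach maturity and choose to simulate, the simulated minds swamp the unsimulated ones. Here is a simplified sketch, loosely following the structure of Bostrom’s formula; the parameter names and values are illustrative assumptions, not figures from the argument itself:

```python
def fraction_simulated(f_mature, f_willing, sims_per_real):
    """Fraction of all human-type minds that are simulated.

    f_mature      -- fraction of civilisations reaching technological maturity
    f_willing     -- fraction of mature civilisations running ancestor simulations
    sims_per_real -- average simulated minds per real mind in such civilisations
    (Illustrative assumptions; a simplified version of Bostrom's bookkeeping.)
    """
    simulated = f_mature * f_willing * sims_per_real
    return simulated / (simulated + 1)

# Even with pessimistic fractions, a large sims_per_real dominates:
print(fraction_simulated(0.01, 0.01, 1_000_000))   # ~0.99
```

Whichever values one plugs in, the conclusion only escapes the third proposition by driving one of the first two factors to (almost) zero.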

Through the veil of our ignorance, it might seem sensible to assign equal credence to all three, and to conclude that unless we are currently living in a simulation, descendants of this civilisation will almost certainly never be in a position to run these simulations.

Strangely indeed, the probability that we are living in a simulation increases as we draw closer to the point at which we are able and willing to create simulations ourselves. At the point where we were ready to create our own simulations, we would paradoxically be at the very point of being almost sure that we ourselves were simulations. Only by refraining from doing so could we, in a certain sense, make it less likely that we were simulated, as it would show that at least one civilisation able to create simulations refrained from doing so. Once we took the plunge, we would know that we were almost certainly only doing so as simulated beings. And yet there must have been someone or something that created the first simulation. Could that be us, we would be asking ourselves? In our simulated hearts and minds, we would already know the answer!

 

Exercise

With reference to Bostrom’s ‘simulation’ reasoning, generate an estimate as to the probability that we are living in a simulated world.

References and Links

The Simulation Argument. https://www.simulation-argument.com/

Do we live in a computer simulation? Nick Bostrom. New Scientist, 2006, pp. 8-9. https://www.simulation-argument.com/computer.pdf

Bostrom, N. Are you living in a computer simulation? Philosophical Quarterly (2003), 53, 211, pp. 243-255.

https://www.simulation-argument.com/simulation.pdf

Pascal’s Wager and Pascal’s Mugging – Guide Notes.

Blaise Pascal was a 17th-century French mathematician and philosopher who laid some of the main foundations of modern probability theory. He is particularly celebrated for his correspondence with the mathematician Pierre Fermat, forever associated with Fermat’s Last Theorem. Schoolchildren learning mathematics are more familiar with him courtesy of Pascal’s Triangle. Increasingly, though, it is Pascal’s Wager, and latterly the Pascal’s Mugging puzzle, that has entertained modern philosophers.

Simply stated, Pascal’s Wager goes thus: If God exists and you wager that He does not, your penalty relative to betting correctly is enormous. If God does not exist and you wager that He does, your penalty relative to betting correctly is inconsequential. In other words, there’s a lot to gain if it turns out He does exist and not much lost if He doesn’t. So, unless it can be proved that God does not exist, you should always side with Him existing, and act accordingly. Put another way, Pascal points out that if a wager offered an equal chance of gaining two lifetimes of happiness or gaining nothing, a person would be foolish to bet on the latter. The same would go if it were three lifetimes of happiness versus nothing. He then argues that it is simply unconscionable by comparison to bet against an eternal life of happiness for the possibility of gaining nothing. The wise decision is to wager that God exists, since “If you gain, you gain all; if you lose, you lose nothing”, meaning one can gain eternal life if God exists, but if not, one will be no worse off in death than by not believing. On the other hand, if you bet against God, win or lose, you either gain nothing or lose everything.

It seems intuitively like there’s something wrong with this argument. The problem arises in trying to find out what it is. One good try is known as the ‘many gods’ objection. The argument here is that one can in principle come up with multiple different characterisations of a god, including a god that punishes people for siding with his existence. But this assumes that all representations of what God is are equally probable. In fact, some representations must be more plausible than others, if the alternatives are properly investigated. A characterisation that has hundreds of millions of followers, for example, and a strongly developed set of apologetics, is at least a bit more likely to be true than a theory based on an evil teapot.

Once we begin to drop the equal-probability assumption, we severely weaken the ‘many gods’ objection. Basically, if the God of a major established religion is even somewhat more likely to exist (however vanishingly unlikely any individual might think that to be) than the god of the evil teapot religion, the ‘many gods’ objection very quickly begins to crumble to dust. At that point, one needs to take seriously the stratospherically high rewards of siding with belief (at whatever long odds one might set for that) compared to the stakes.

It is true that infinities swamp decisions, but we need not even go as far as positing infinite reward for the decision problem relative to the stakes to become a relatively straightforward one. It’s also true that future rewards tend to be seriously under-weighted by most human decision-makers. In truth, pain suffered in the future will feel just as bad as pain suffered today, but most of us don’t think or behave as if that’s so. The attraction of delaying an unwelcome decision is well documented. In the immortal words of St. Augustine of Hippo in his ‘Confessions’, “Lord make me pure – but not yet!”

A second major objection is the ‘inauthentic beliefs’ criticism: that for those who cannot believe, feigning belief to gain eternal reward would invalidate the reward. What such critics are pointing to is the unbeliever who says to Pascal that he cannot make himself believe. Pascal’s response is that if the principle of the wager is valid, then the inability to believe is irrational. “Your inability to believe, because reason compels you to and yet you cannot, [comes] from your passions.” This inability, therefore, can be overcome by diminishing these irrational sentiments: “Learn from those who were bound like you. . . . Follow the way by which they began; by acting as if they believed.”
Even some modern atheist philosophers admit to struggling with the problem set by Blaise Pascal. One attempt to square the circle is to say that in a world where God, as conventionally conceived, exists with some non-zero probability, there would be a case for pushing a hypothetical button to make themselves believe, if offered just one chance, and that chance was now or never. Given the chance of delaying the decision as long as possible, however, it seems they would side with St. Augustine’s approach to the matter of his purity.

Pascal’s Wager has taken on new life in the last couple of decades as it has come to be applied to existential threats like Climate Change. The issue bears a similarity to Pascal’s Wager on the existence of God. Let’s say, for example, there is only a one per cent chance that the planet is on course for catastrophic climatic disaster, and that delay means passing a point of no return beyond which we would be powerless to stop it. In that case, not acting now would seem kind of crazy. It certainly breaches the terms of Pascal’s Wager. This has fittingly been termed Noah’s Law: if an ark may be essential for survival, get building, however sunny a day it is overhead. Yes, when the cost of getting it wrong is just too high, it probably pays to hedge your bets.

Pascal’s Mugging is a new twist on the problem, which can, if wrongly interpreted, give comfort to the naysayers. It can be put this way. You are offered a proposal by someone who turns up on your doorstep. Give me £10, the door-stepper says, and I will return tomorrow and give you £100. I desperately need the money today, for reasons I’m not at liberty to divulge. I can easily pay you anything you like tomorrow, though. You turn down the deal because you don’t believe he will follow through on his promise. So he asks you how likely you think it is that he will honour any deal you are offered. You say 100 to 1. In that case, he says, I will bring you £1,100 tomorrow in return for the £10. You work out the expected value of this proposal to be 1/100 times £1,100, or £11, and hand over the tenner. He never comes back, and you have, in a way, been intellectually mugged. But was handing over the note irrational? The mugger won the argument that for any low probability of being able to pay back a large amount of money, there exists a finite amount that makes it rational to take the bet. In particular, a rational person must admit there is at least some non-zero chance that such a deal would be honoured. However low the probability you assign to being paid out, there is a potential reward, which need not be monetary, large enough to outweigh it.
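The mugger’s arithmetic is worth making explicit: for any probability of payout, however small, there is a finite payout big enough to make the expected value of handing over the money positive. A quick check in Python (the helper function is purely illustrative), which also covers the exercise variants below:

```python
def expected_net_gain(p_payout, payout, stake):
    """Expected net gain from handing over the stake."""
    return p_payout * payout - stake

print(expected_net_gain(1 / 100, 1100, 10))     # +1.0: the doorstep deal
print(expected_net_gain(1 / 100, 3000, 25))     # +5.0
print(expected_net_gain(1 / 125, 10000, 25))    # +55.0
```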

Pascal’s mugging has more generally been used to consider the appropriate course of action when confronted more systemically by low-probability, high-stakes events such as existential risk or charitable interventions with a low probability of success but extremely high rewards. Common sense might seem to suggest that spending money and effort on extremely unlikely scenarios is irrational, but since when can we trust common sense? And there’s no reason to believe that it serves us well here either.

Blaise Pascal was a very clever guy and those who over the centuries have too quickly dismissed his ideas have paid the intellectual (and perhaps a much bigger) price. Today, in an age when global existential risk is for obvious reasons (nuclear annihilation not least) a whole lot higher up the agenda than it was in Pascal’s day, it is time that we revisit (atheists, agnostics and believers alike) the lessons to be learned from ‘The Wager’, and that we do so with renewed urgency. The future of the planet just might depend on it.

 

Exercise

In the Pascal’s Mugging Problem you are offered £3,000 tomorrow if you pay the stranger £25 today. You believe that there is a 1 in 100 chance that the stranger will return to pay you.

Is handing over the £25 rational from an economic point of view? Would you hand over the £25? What if the stranger offered to pay you £10,000 tomorrow, and you believe there is a 1 in 125 chance that he will return to pay you?

Would your answer be different if any of the sums involved were different?

 

References and Links

Nick Bostrom. Pascal’s Mugging. 443-444. https://nickbostrom.com/papers/pascal.pdf

Pascal’s mugging. Wikipedia. https://en.wikipedia.org/wiki/Pascal%27s_mugging

Amanda Askell on Pascal’s Wager and other low risks with high stakes. Rationally Speaking. Podcast. http://rationallyspeakingpodcast.org/show/rs-190-amanda-askell-on-pascals-wager-and-other-low-risks-wi.html

Transcript of Amanda Askell Podcast. http://static1.1.sqspcdn.com/static/f/468275/27648050/1502083126473/rs190transcript.pdf?token=xQdh8%2B1IgicYGsJS5D%2Fa%2BB0sFMo%3D

Is Believing in God Worth It? SALIENT. http://salient.org.nz/2018/03/is-believing-in-god-worth-it/

 

 

The Strange Case of Sunrise, Sunset and the Shortest Day of the Year.

December 21st, 2018 is the shortest day of the year, at least in the UK, located in the Northern hemisphere of our planet.

So does that mean that the mornings should start to get lighter after today (earlier sunrise), as well as the evenings (later sunset)? Not so, and there’s a simple reason why. The length of a solar day, i.e. the period of time between solar noon (the time when the sun is at its highest elevation in the sky) on one day and the next, is not 24 hours in December, but about 30 seconds longer than that.

For this reason, each solar day throughout December runs about 30 seconds over 24 hours, so that by the end of the month a standard 24-hour clock is lagging roughly 15 minutes behind real solar time.

Let’s say just for a moment that the hours of sunlight (the time difference between sunrise and sunset) stayed constant through December. Since each solar day is about 30 seconds longer than 24 hours, a 24-hour clock which timed sunset at 3.50pm one day would see the sun set at 3.50pm and 30 seconds the next day. After ten days the sun would not set till 3.55pm according to the 24-hour clock. So the sunset would actually get later through all of December. For the same reason, the sunrise would also get later through the whole of December.
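Holding daylight length fixed, as in the simplification above, the cumulative drift is easy to tabulate (a rough sketch; the 30-second figure is the approximate December value):

```python
DRIFT_PER_DAY = 30  # seconds by which each solar day exceeds 24 hours (approx.)

# Cumulative shift of clock-time sunset, daylight length held fixed.
for days in (1, 10, 31):
    total = days * DRIFT_PER_DAY
    print(f"after {days:2d} days: sunset ~{total // 60} min {total % 60} s later")
```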

In fact, the sunset doesn’t get progressively later through all of December, because the hours of sunlight shorten for about the first three weeks. The effect of this, taken on its own, is to make the sun set earlier and rise later.

These two things (the shortening hours of sunlight and the extended solar day) work in opposite directions. The overall effect is that the sun starts to set later from a week or so before the shortest day, but doesn’t start to rise earlier till a week or so after the shortest day.

So the old adage that the evenings will start to draw out after the end of the third week of December or so, and the mornings will get lighter, is false. The evenings have already been drawing out for several days before the shortest day, and the mornings will continue to grow darker for several days after it.

There’s one other curious thing. The solar noon coincides with noon on our 24-hour clocks just four times a year. One of those days is Christmas Day! So set your clock to noon on December 25th, look up to the sky and you will see the sun at its highest point. Just perfect!

 

Links

http://www.timeanddate.com/astronomy/uk/nottingham

http://www.bbc.co.uk/news/magazine-30549149

http://www.rmg.co.uk/explore/astronomy-and-time/time-facts/the-equation-of-time

http://en.wikipedia.org/wiki/Solar_time

http://earthsky.org/earth/everything-you-need-to-know-december-solstice

The US mid-term elections: a triumph for political forecasting.

The results of the US midterm elections are now largely in and they came as a shock to many seasoned forecasters.

This wasn’t the kind of shock that occurred in 2016, when the EU referendum tipped to Brexit and the US presidential election to Donald Trump. Nor the type that followed the 2015 and 2017 UK general elections, which produced a widely unexpected Conservative majority and a hung parliament respectively.

On those occasions, the polls, pundits and prediction markets got it, for the most part, very wrong, and confidence in political forecasting took a major hit. The shock on this occasion was of a different sort – surprise related to just how right most of the forecasts were.

Take the FiveThirtyEight political forecasting methodology, most closely associated with Nate Silver, famed for the success of his 2008 and 2012 US presidential election forecasts.

In 2016, even that trusted methodology failed to predict Trump’s narrow triumph in some of the key swing states. This was reflected widely across other forecasting methodologies, too, causing a crisis of confidence in political forecasting. And things only got worse when much academic modelling of the 2017 UK general election was even further off target than it had been in 2015.

How did it go so right?

So what happened in the 2018 US midterm elections? This time, the FiveThirtyEight “Lite” forecast, based solely on local and national polls weighted by past performance, predicted that the Democrats would pick up a net 38 seats in the House of Representatives. The “Classic” forecast, which also includes fundraising, past voting and historical trends, predicted that they would pick up a net 39 seats. They needed 23 to take control.


With almost all results now declared, those forecasts look pretty near spot on: the projected tally is a net gain of 40 seats by the Democrats. In the Senate, meanwhile, the Republicans were forecast to hold their majority by 52 seats to 48. The final count is likely to be 53-47. There is also an argument that the small error in the Senate forecast can be accounted for by poor ballot design in Florida, which disadvantaged the Democrat in a very close race.

Some analysts currently advocate looking at the turnout of “early voters”, broken down by party affiliation, who cast their ballot before polling day. They argue this can be used as an alternative or supplementary forecasting methodology. This year, a prominent advocate of this methodology went with the Republican Senate candidate in Arizona, while FiveThirtyEight chose the Democrat. The Democrat won. Despite this, the jury is still out over whether “early vote” analysis can add any value.

There has also been research into the forecasting efficiency of betting/prediction markets compared to polls. This tends to show that the markets have the edge over polls in key respects, although they can themselves be influenced by and overreact to new poll results.

There are a number of theories to explain what went wrong with much of the forecasting prior to the Trump and Brexit votes. But looking at the bigger picture, which stretches back to the US presidential election of 1868 (in which Republican Ulysses S Grant defeated Democrat Horatio Seymour), forecasts based on markets (with one notable exception, in 1948) have proved remarkably accurate, as have other forecasting methodologies. To this extent, the accurate forecasting of the 2018 midterms is a return to the norm.

And the next president is …

But what do the results mean for politics in the US more generally? The bottom line is that there was a considerable swing to the Democrats across most of the country, especially among women and in the suburbs, such that the Republican advantage of almost 1% in the House popular vote in 2016 was turned into a Democrat advantage of about 8% this time. If reproduced in a presidential election, it would be enough to provide a handsome victory for the candidate of the Democratic Party.


The size of this swing, and the demographics underpinning it, were identified with a good deal of accuracy by the main forecasting methodologies. This success has clearly restored some confidence in them, and they will now be used to look forward to 2020. Useful current forecasts for the 2020 election include PredictIt, OddsChecker, Betfair and PredictWise.

Taken together, they indicate that the Democratic candidate for the presidency will most likely come from a field including Senators Kamala Harris (the overall favourite), Bernie Sanders, Elizabeth Warren, Amy Klobuchar, Kirsten Gillibrand and Cory Booker. Outside the Senate, the frontrunners are former vice-president, Joe Biden, and the recent (unsuccessful) candidate for the Texas Senate, Beto O’Rourke.

Whoever prevails is most likely to face sitting president, Donald Trump, who is close to even money to face impeachment during his current term of office. If Trump isn’t the Republican nominee, the vice-president, Mike Pence, and former UN ambassador Nikki Haley are attracting the most support in the markets. The Democrats are currently about 57% to 43% favourites over the Republicans to win the presidency.

With the midterms over, our faith in political forecasting, at least in the US, has been somewhat restored. The focus now turns to 2020 – and whether they’ll accurately predict the next leader of the free world, or be left floundering by the unpredictable forces of a new world politics.
