# The God’s Dice Problem

Imagine a world created by an external Being based on the toss of a fair coin, or a roll of the dice. It’s a thought experiment sometimes called the ‘God’s Dice Problem.’ In the simplest version, Heads creates a version of the world in which one blue-bearded person is created. Let’s call that World A. Tails creates a version of the world in which a blue-bearded and a black-bearded person are created. Let’s call that World B.

You wake up in the dark in one of these worlds, but you don’t know which, and you can’t see what colour your beard is, though you do know the rule that created the world. What likelihood do you now assign to the hypothesis that the coin landed Tails and you have been born into world B?

This depends on what fundamental assumption you make about existence itself. One way of approaching this is to adopt what has been called the ‘Self-Sampling Assumption’. This states that “you should reason as if you are randomly selected from everyone who exists in your reference class.” What do we mean by reference class? As an example, we can reasonably take it that in terms of our shared common existence a blue-bearded person and a black-bearded person belong in the same reference class, whereas a blue-bearded person and a black-coloured ball do not. Looked at another way, we need to ask, “What do you mean by you?” Before the lights come on, you don’t know what colour your beard is. It could be blue or it could be black, unless you mean that it’s part of the “essence” of who you are to have a blue-coloured beard. In other words, that there is no possible state of the world in which you had a black beard but otherwise would have been you.

Using this assumption, we see ourselves simply as a randomly selected bearded person among the reference class of blue- and black-bearded people. The coin could have landed Heads, in which case we are in World A, or it could have landed Tails, in which case we are in World B. There is an equal chance that the coin landed Heads or Tails, so we should assign a credence of 1/2 to being in World A and similarly for World B. In World B, the probability is 1/2 that we have a blue beard and 1/2 that we have a black beard.

The light is now turned on and we see that we are sporting a blue beard. What is the probability now that the coin landed Tails and we are in World B? Well, the probability that we would sport a blue beard conditional on living in World A is 1, i.e. 100%. This is because we know that the one person who lives in World A has a blue beard. The conditional probability of having a blue beard in World B, on the other hand, is 1/2, since the other inhabitant has a black beard. So, conditional on finding out that we have a blue beard, there is twice the chance that we live in World A as in World B, i.e. a 2/3 chance the coin landed Heads and we live in World A, and a 1/3 chance it landed Tails and we live in World B.

Let’s now say you make a different assumption about existence itself. Your worldview in this alternative scenario is based on what has been termed the ‘Self-Indication Assumption.’ It can be stated like this: “Given the fact that you exist, you should (other things equal) favour a hypothesis according to which many observers exist over a hypothesis according to which few observers exist.”

According to this assumption, you note that there is one hypothesis (the World B hypothesis) according to which there are two observers (one blue-bearded and one black-bearded) and another hypothesis (the World A hypothesis) in which there is only one observer (who is blue-bearded). Since there are twice as many observers in World B as World A, then according to the Self-Indication Assumption, it is twice as likely (a 2/3 chance) that you live in World B as World A (a 1/3 chance). This is your best guess while the lights are out. When the lights are turned on, you find out that you have a blue beard. The new conditional probability you attach to living in World B is 1/2, as there is an equal chance that as a blue-bearded person you live in World B as World A.

So, under the Self-Sampling Assumption, your initial credence that you lived in World B was 1/2, which fell to 1/3 when you found out you had a blue beard. Under the Self-Indication Assumption, on the other hand, your initial credence of living on World B was 2/3, which fell to 1/2 when the lights came on.
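Both sets of credences can be checked with a short enumeration. The sketch below (plain Python, with exact arithmetic via `fractions`; the variable names are my own, not standard terminology) lists every possible (world, person) pair and applies each assumption in turn:

```python
from fractions import Fraction

# Possible (world, beard) pairs:
# World A (Heads, prior 1/2): one blue-bearded person.
# World B (Tails, prior 1/2): one blue- and one black-bearded person.

# Self-Sampling Assumption: keep the 1/2 coin prior and split it
# evenly among the members of the reference class within each world.
ssa = {
    ("A", "blue"): Fraction(1, 2),                    # sole inhabitant
    ("B", "blue"): Fraction(1, 2) * Fraction(1, 2),   # one of two
    ("B", "black"): Fraction(1, 2) * Fraction(1, 2),
}
p_b_ssa = ssa[("B", "blue")] + ssa[("B", "black")]    # lights out: 1/2
p_b_ssa_blue = ssa[("B", "blue")] / (ssa[("A", "blue")] + ssa[("B", "blue")])  # lights on: 1/3

# Self-Indication Assumption: weight every possible observer equally,
# so the two-person world gets twice the credence, then normalise.
raw = {k: Fraction(1, 2) for k in ssa}    # coin prior per observer slot
z = sum(raw.values())                     # 3/2
sia = {k: v / z for k, v in raw.items()}  # 1/3 each
p_b_sia = sia[("B", "blue")] + sia[("B", "black")]    # lights out: 2/3
p_b_sia_blue = sia[("B", "blue")] / (sia[("A", "blue")] + sia[("B", "blue")])  # lights on: 1/2
```

The four computed values reproduce the two pairs of credences above: 1/2 falling to 1/3 under SSA, and 2/3 falling to 1/2 under SIA.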

So which is right and what are the wider implications?

Let us first consider the impact of changing the reference class of the ‘companion’ on World B. Instead of this being another bearded person, it is a black ball. In this case, what probability should you attribute to living on World B given the Self-Sampling Assumption? While the lights are out, you consider that there is a probability of 1/2 that the coin landed Tails, so the probability that you live on World B is 1/2.

When the lights are turned on, no new relevant information is added, as you already knew you must be the blue-bearded person. There is, therefore, one blue-bearded person on World A and one on World B. So the chance that you are in World B is unchanged. It is 1/2.

Given the Self-Indication Assumption, the credence you should assign to being on World B, given that your companion is a ball instead of a bearded person, is now 1/2, as the number of relevant observers inhabiting World B is now one, the same as on World A. When the lights come on, you learn nothing new, and the chance that the coin landed Tails and you are on World B stays unchanged at 1/2.

Unlike the Self-Indication Assumption (SIA), the Self-Sampling Assumption is dependent on the choice of reference class. The SIA is not dependent on the choice of reference class, as long as the reference class is large enough to contain all subjectively indistinguishable observers. If the reference class is large, SIA will make a many-observer hypothesis more likely, but this is compensated by the much reduced probability that the observer will be any particular observer in the larger reference class.

The choice of underlying assumption has implications elsewhere, most famously in regard to the Sleeping Beauty Problem, which I have addressed from a different angle in a separate blog.

In that problem, Sleeping Beauty is woken either once (on Monday) if a coin lands Heads or twice (on Monday and Tuesday) if it lands Tails. She knows these rules. Either way, she will only be aware of the immediate awakening (whether she is woken once or twice). The question is how Sleeping Beauty should answer if she is asked how likely it is the coin landed Tails when she is woken and asked.

If she adopts the Self-Sampling Assumption, she will give an answer of 1/2. The coin will have landed Tails with a probability of 1/2, and there is no observer other than her. Only if she is told that this is her second awakening will she change her credence that it landed Tails to 1 and that it landed Heads to 0.

If she adopts the Self-Indication Assumption, she has a different worldview in which there are three observation points. In fact, there are two prevalent propositions which have been called the Self-Indication Assumption, the first of which is stated above, i.e. “Given the fact that you exist, you should (other things equal) favour a hypothesis according to which many observers exist over a hypothesis according to which few observers exist.” The other can be stated thus: “All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.”

According to this assumption, stated either way, there is one hypothesis (the Heads hypothesis) according to which there is one observer opportunity (the Monday awakening) and another hypothesis (the Tails hypothesis) in which there are two observer opportunities (the Monday awakening and the Tuesday awakening). Since there are twice as many observation opportunities under the Tails hypothesis, according to the Self-Indication Assumption it is twice as likely (a 2/3 chance) that the coin landed Tails as that it landed Heads (a 1/3 chance).

Looked at another way, if there is a coin toss that on Heads will create one observer, while on Tails it will create two, then we have three possible observers (the observer on Heads, the first observer on Tails and the second observer on Tails), each existing with equal probability, so the Self-Indication Assumption assigns a probability of 1/3 to each. Alternatively, this could be interpreted as saying there are two possible observers (the first observer on either Heads or Tails, and the second observer on Tails), the first existing with probability 1 and the second existing with probability 1/2. So the Self-Indication Assumption assigns a 2/3 probability to being the first observer and 1/3 to being the second observer, which is the same as before. Whichever way we prefer to look at it, the Self-Indication Assumption gives a 1/3 probability of Heads and a 2/3 probability of Tails in the Sleeping Beauty Problem.
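The counting argument above can be written out explicitly. A minimal sketch (exact arithmetic; variable names are mine):

```python
from fractions import Fraction

# Possible awakenings: Heads -> Monday only; Tails -> Monday and Tuesday.
awakenings = [("Heads", "Mon"), ("Tails", "Mon"), ("Tails", "Tue")]

# Self-Indication Assumption: each possible awakening is weighted
# equally, so each gets credence 1/3.
sia = {a: Fraction(1, len(awakenings)) for a in awakenings}
p_tails_sia = sum(p for (coin, _), p in sia.items() if coin == "Tails")  # 2/3

# Self-Sampling Assumption: Beauty is the only observer either way,
# so the 1/2 coin prior is untouched on waking.
p_tails_ssa = Fraction(1, 2)
```

This reproduces the thirder answer (2/3 Tails) under SIA against the halfer answer (1/2) under SSA.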

Depending on which Assumption we adopt, however, very different implications for our wider view of the world obtain.

One of the most well-known of these is the so-called Doomsday Argument, which I have explored elsewhere in my blog, ‘Is Humanity Doomed?’

The argument goes like this. Imagine for the sake of simplicity that there are only two possibilities: Extinction Soon and Extinction Late. In the first, the human race goes extinct very soon, whereas in the second it spreads and multiplies through the Milky Way. In each case, we can write down the number of people who will ever have existed. Suppose that 100 billion people will have existed in the Extinction Soon case, as opposed to 100 trillion people in the Extinction Late case. Now, say that we’re at the point in history where 100 billion people have lived. If we’re in the Extinction Late situation, then the great majority of the people who will ever live will be born after us, and we’re in the very special position of being among the first 100 billion humans. Conditional on being among those first 100 billion, the probability that we’re in the Extinction Soon case is overwhelmingly greater than the probability that we’re in the Extinction Late case. Using Bayes’ Theorem, which I explore in a separate blog, to perform the calculation, we can conclude, for example, that if we view the two cases as equally likely a priori, then after applying the Doomsday reasoning we’re almost certainly in the Extinction Soon case. For conditional on being in the Extinction Late case (100 trillion people), we almost certainly would not be in the special position of being amongst the first 100 billion people.
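The Bayesian step can be sketched directly with the numbers used above (equal priors, 100 billion versus 100 trillion total people, and you among the first 100 billion):

```python
from fractions import Fraction

prior_soon = Fraction(1, 2)       # Extinction Soon, a priori
prior_late = Fraction(1, 2)       # Extinction Late, a priori
n_soon = 10**11                   # 100 billion people ever (Soon)
n_late = 10**14                   # 100 trillion people ever (Late)
first = 10**11                    # you are among the first 100 billion

# SSA likelihood of finding yourself among the first `first` humans,
# sampling yourself uniformly from all humans who ever live:
lik_soon = Fraction(min(first, n_soon), n_soon)   # 1
lik_late = Fraction(min(first, n_late), n_late)   # 1/1000

post_soon = (prior_soon * lik_soon) / (prior_soon * lik_soon + prior_late * lik_late)
# post_soon == 1000/1001: Doom Soon becomes near-certain
```

With these illustrative numbers, the 50:50 prior shifts to roughly 99.9% in favour of Extinction Soon.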

We can look at it another way. If we view the entire history of the human race from a timeless perspective, then all else being equal we should be somewhere in the middle of that history. That is, the number of people who live after us should not be too much different from the number of people who lived before us. If the population is increasing exponentially, it seems to imply that humanity has a relatively short time left. Of course, you may have special information that indicates that you aren’t likely to be in the middle, but that would simply mitigate the problem, not remove it.

A modern form of the Doomsday Argument holds that its resolution depends on how you resolve the Blue Beard or Sleeping Beauty Problems. If you answer 1/2 to the Sleeping Beauty Problem (or 1/3 to the Blue Beard Problem once the lights are on), that corresponds to the Self-Sampling Assumption (SSA). If you make that assumption about how to apply Bayes’ Theorem, then it seems very difficult to escape the early Doom conclusion.

If you want to challenge that conclusion, then you can use the Self-Indication Assumption (SIA). That assumption says that you are more likely to exist in a world with more beings than in one with fewer beings. You would say in the Doomsday Argument that if the “real” case is the Extinction Late case, then while it’s true that you are much less likely to be one of the first 100 billion people, it’s also true that because there are so many more people, you’re much more likely to exist in the first place. If you make both assumptions, then they cancel each other out, taking you back to your prior assessment of the probabilities of Extinction Soon and Extinction Late.
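The cancellation can be made explicit. In this sketch (same hypothetical numbers as before), the SIA re-weighting by observer count exactly offsets the Doomsday likelihood, restoring the original prior:

```python
from fractions import Fraction

prior = Fraction(1, 2)
n_soon, n_late = 10**11, 10**14   # total people in each scenario

# SIA: boost each hypothesis in proportion to its observer count.
w_soon, w_late = prior * n_soon, prior * n_late
z = w_soon + w_late
sia_soon, sia_late = w_soon / z, w_late / z

# SSA-style likelihood of being among the first 10**11 humans:
lik_soon, lik_late = Fraction(1), Fraction(n_soon, n_late)

post_soon = (sia_soon * lik_soon) / (sia_soon * lik_soon + sia_late * lik_late)
# The observer-count boost and the Doomsday penalty cancel exactly:
# post_soon equals 1/2, the original prior.
```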

On this view, the fate of humanity, in probabilistic terms, depends on which Assumption we adopt.

One problem that has been flagged with the SSA is that what applies to the first million people out of a possible trillion applies just as well in principle to the first two people out of billions. This is known as the Adam and Eve problem. According to the SSA, the chance that they are the first two people, as opposed to two out of the countless billions which (it is assumed) would be produced by their offspring, is so vanishingly small that they could act, and cause outcomes, as if it were impossible that they are the potential ancestors of billions. For example, suppose they decide they will have a child unless Eve draws the Ace of Spades from a deck of cards. According to the logic of this thought experiment, that makes the card definitely the Ace of Spades. If it were not, they would be two out of billions of people, which is such a small probability as to be effectively precluded by the SSA, i.e. by reasoning as if you are simply randomly selected from all humans. In this way, their world would be one in which all sorts of strange coincidences, precognition, psychokinesis and backward causation could occur. Defences have been proposed to save the SSA, and the debate around these continues.

So what about the Self-Indication Assumption? Here the Presumptuous Philosopher Problem has been well flagged. It goes like this. Imagine that scientists have narrowed the possibilities for a final theory of physics down to two equally likely candidates. The main difference between them is that Theory 1 predicts that the universe contains a billion times more observers than Theory 2 does. There is a plan to build a state-of-the-art particle accelerator to arbitrate between the two theories. Now, philosophers using the SIA come along and say that Theory 1 is almost certainly correct, to within billion-to-one confidence, since conditional on Theory 1 being correct we are a billion times more likely to exist in the first place. So we can save a billion pounds by not building the particle accelerator. Indeed, even if we did build it, and it produced evidence a million times more consistent with Theory 2, we should still go with the philosophers, who would stick to their assertion that Theory 1 is the correct one. Indeed, we should award them the Nobel Prize in Physics for their “discovery.”

So we are left with a choice between the Self-Sampling Assumption which leads to the Doomsday Argument, and the Self-Indication Assumption which leads to Presumptuous Philosophers. And we need to choose a side.

For reasons I explore in a separate blog, I identify the answer to the Sleeping Beauty Problem as 1/3, which is consistent with an answer of 2/3 for the Blue Beard Problem. This is all consistent with the Self-Indication Assumption, but not the Self-Sampling Assumption.

The debate continues.

## Appendix

We can address this problem using Bayes’ Theorem.

We are seeking to calculate the probability, P(Tails | Blue), that the coin landed Tails, given that you have a blue beard. In the problem as posed, there are two people, and you are not more likely, a priori, to be either the blue-bearded or the black-bearded person. Now, the probability, with a fair coin, of throwing Heads as opposed to Tails is 1/2. Adopting the Self-Sampling Assumption, we sample a person within their world at random.

First, what is the probability, P(Blue), that you have a blue beard?

This is given by: P(Blue) = P(Blue | Heads) × P(Heads) + P(Blue | Tails) × P(Tails) = 1 × 1/2 + 1/2 × 1/2 = 3/4

Since if the coin lands Heads, P(Blue | Heads) = 1 and P(Heads) = 1/2.

If the coin lands Tails, P(Blue | Tails) = 1/2 and P(Tails) = 1/2.

By Bayes’ Theorem, P(Tails | Blue) = P(Blue | Tails) × P(Tails) / P(Blue) = (1/2 × 1/2) / (3/4) = 1/3

So the probability that the coin landed Tails (World B), given that you have a blue beard, is 1/3.

What assumption needs to be made so that the probability that the coin landed Tails (World B), given a blue beard, is 1/2?

You could assume that whenever you exist, you have a blue beard. In that case, P(Blue | Heads) = 1 and P(Blue | Tails) = 1.

Now, P(Blue) = P(Blue | Heads) × P(Heads) + P(Blue | Tails) × P(Tails) = 1 × 1/2 + 1 × 1/2 = 1

Now, by Bayes’ Theorem, P(Tails | Blue) = P(Blue | Tails) × P(Tails) / P(Blue) = (1 × 1/2) / 1 = 1/2

Is there a way, however, to do so without a prior commitment about beard colour?

One approach is to note that there are twice as many people in the Tails world as in the Heads world in the first place. This is the Self-Indication Assumption. So you could argue that you are a priori twice as likely to exist in a world with twice as many people. In a world with more people, you are simply more likely to be picked at all. Put another way, your own existence is a piece of evidence that you should condition upon.

Now, P(Blue) = P(Blue | Heads world) × P(Heads world) + P(Blue | Tails world) × P(Tails world) = 1 × 1/3 + 1/2 × 2/3 = 1/3 + 1/3 = 2/3

So, using Bayes’ Theorem, P(Tails world | Blue) = P(Blue | Tails world) × P(Tails world) / P(Blue) = (1/2 × 2/3) / (2/3) = 1/2.
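All three appendix calculations can be checked with a single helper function (a sketch; the function name is my own):

```python
from fractions import Fraction

def posterior_tails(p_blue_heads, p_blue_tails, p_heads, p_tails):
    """Bayes' Theorem: P(Tails | Blue) from the four inputs."""
    p_blue = p_blue_heads * p_heads + p_blue_tails * p_tails
    return p_blue_tails * p_tails / p_blue

half = Fraction(1, 2)

# Self-Sampling Assumption: equal coin prior, random person within World B.
ssa_answer = posterior_tails(1, half, half, half)                       # 1/3

# Blue beard taken as essential to you: P(Blue | Tails) = 1.
essential_answer = posterior_tails(1, 1, half, half)                    # 1/2

# Self-Indication Assumption: world priors re-weighted to 1/3 and 2/3.
sia_answer = posterior_tails(1, half, Fraction(1, 3), Fraction(2, 3))   # 1/2
```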