# The Doomsday argument – in a nutshell.

Can we demonstrate, purely from the way that probability works, that the human race is likely to go extinct in the relatively foreseeable future, regardless of what humanity might do to try to prevent it? Yes, according to the so-called Doomsday argument, and this argument, derived from basic probability theory, has never been decisively refuted.

Here’s how the argument goes. Let’s say you want to estimate how many tanks the enemy has to deploy against you, and you know that the tanks have been manufactured with serial numbers starting at 1 and ascending from there. Now let’s say you identify the serial numbers on five random tanks and they are all under 10. Even an intuitive understanding of the workings of probability would lead you to conclude that the number of tanks possessed by the enemy is pretty small. On the other hand, if the serial numbers are 2524, 7866, 5285, 3609 and 8009, you are unlikely to be far out if you estimate that the enemy has close to 10,000 of them.
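This intuition can be made precise. Here is a minimal Python sketch of the standard frequentist estimator for this ‘German tank problem’: the largest serial number observed, plus the average gap between observed serials.

```python
def tank_estimate(serials):
    """Frequentist estimate for the 'German tank problem':
    maximum serial observed plus the average gap, m + m/k - 1."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

low = tank_estimate([3, 7, 1, 9, 5])                   # five serials under 10
high = tank_estimate([2524, 7866, 5285, 3609, 8009])   # the five serials above
print(round(low))   # 10 -> a small fleet
print(round(high))  # 9610 -> close to 10,000 tanks
```

The first sample points to a fleet of about ten tanks, the second to one of nearly ten thousand, matching the intuitive conclusions above.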

Let’s say that you only have one serial number to work with, and that it shows the number 18. On the basis of just this information, you would do well to estimate that the total number of enemy tanks is more likely to be 36 than 360, and more likely still to be 36 than 36,000. The reason is that the chance of drawing serial number 18 from a fleet of N tanks is 1 in N, so the smaller totals consistent with the observation are the more probable ones.
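The same point can be made by comparing likelihoods directly; a brief sketch:

```python
def likelihood(serial, n_total):
    """Chance of drawing this exact serial if serials run uniformly 1..n_total."""
    return 1 / n_total if n_total >= serial else 0.0

likes = {n: likelihood(18, n) for n in (36, 360, 36000)}
# A fleet of 36 makes the observation 10 times as likely as a fleet of 360,
# and 1,000 times as likely as a fleet of 36,000.
```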

This way of thinking is an aspect of what is known as the mediocrity principle, which is the notion that an item that is drawn at random from one of several sets or categories is more likely to come from the most numerous category than any of the less numerous categories.

The principle has been used to suggest that, given the existence of life on Earth, life typically exists on Earth-like planets throughout the universe. The idea is to assume mediocrity rather than starting with the assumption that a phenomenon is special, privileged, exceptional or better. As such, it stands in contrast to the anthropic principle, which is the idea that the presence of an intelligent observer (Homo sapiens) restricts the circumstances to those under which intelligent life can be observed to exist, no matter how improbable. Linked to this is the Copernican principle, the idea in cosmology that we are not privileged or special observers of the universe. It takes its name from Nicolaus Copernicus's demonstration in the 16th century that the Earth is not at the centre of the universe; the Copernican principle generalises this into the idea that the Earth's position is nowhere special at all.

It was a principle notably used by astrophysicist John Richard Gott when he visited the Berlin Wall in 1969. He asked himself whether, in the absence of other knowledge, there was any reason to believe that the moment at which he came upon the Wall was likely to be any special time in its lifetime. He decided that there was not, and that because any moment was equally likely, his best estimate was that there was as much of the Wall's life ahead of it as there was behind it. In other words, his best guess as to how much longer the Wall would last was exactly as long as it had already been in existence. That was eight years. Gott termed this form of reasoning the ‘Copernican principle’.
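Gott's reasoning extends beyond the point estimate: if the moment of observation is uniformly random within a total lifetime, one can derive a whole confidence interval for the remaining life. A sketch of that calculation:

```python
def gott_interval(age_so_far, confidence):
    """Gott's 'delta t' argument: if the observation moment is uniformly random
    within the total lifetime, the remaining life lies between
    age*(1-c)/(1+c) and age*(1+c)/(1-c) with probability c."""
    c = confidence
    return age_so_far * (1 - c) / (1 + c), age_so_far * (1 + c) / (1 - c)

lo, hi = gott_interval(8, 0.5)
print(lo, hi)  # roughly 2.67 to 24 more years at 50 per cent confidence
```

At 50 per cent confidence, the eight-year-old Wall could be expected to stand for between two and two-thirds and 24 more years; it in fact stood for a further 20.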

It is related to the ‘Lindy effect’, the name of which is derived from a New York delicatessen, famous for its cheesecakes, which was frequented by actors playing in Broadway shows. The Lindy effect was the observation that a Broadway show could expect to last for a further period equal to the length of time it had already been playing. So a show that had been on Broadway for three years could, as a best guess, be expected to last another three years before closing. More generally, the Lindy effect has come to represent the idea that the life expectancy going forward of a non-perishable thing such as a technology or an idea is proportional to its current period of existence, so that every additional period of survival implies a greater future life expectancy.

To return to the Copernican principle, in Bayesian terms it can be viewed as Bayes’ Rule with an uninformative prior. When we want to estimate how long something will last, in the absence of other knowledge, this principle suggests assuming we are at the mid-point of the timeline.

Imagine, in another scenario, that you are made aware that a selected box of numbered balls contains either ten balls (numbered from 1 to 10) or ten thousand balls (numbered from 1 to 10,000), and you are asked to guess which. Before you do so, one ball is drawn for you. It reveals the number seven. Drawing that particular ball would be a 1 in 10 chance if the box contains ten balls, but a 1 in 10,000 chance if it contains ten thousand. You would be right, on the basis of this information, to conclude that the box very probably contains ten balls, not ten thousand.
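Bayes' rule makes the strength of this conclusion explicit; a minimal sketch, assuming equal prior odds on the two boxes:

```python
# Equal priors on the two hypotheses, then update on drawing ball number 7.
prior = {10: 0.5, 10_000: 0.5}
likelihood = {10: 1 / 10, 10_000: 1 / 10_000}
evidence = sum(prior[n] * likelihood[n] for n in prior)
posterior = {n: prior[n] * likelihood[n] / evidence for n in prior}
print(posterior[10])  # about 0.999: almost certainly the ten-ball box
```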

Let’s look at the same argument another way. As a thought experiment, imagine a world made up of 100 pods. In each pod, there is one human. Ninety of the pods are painted black on the exterior and the other ten are white. This is known information, available to you and all the other humans. You are one of these people and you are asked to estimate the likelihood that you are inside a black pod. A reasonable way to go about this is to adopt what philosophers call the Self-Sampling Assumption, which goes like this: “All other things equal, an observer should reason as if they are randomly selected from the set of all existing observers in their reference class (in this case, humans in pods).” Since nine in ten of all people are in the black pods, and since you don’t have any other relevant information, it seems clear that you should estimate the probability that you are in a black pod as 90 per cent. A good way of testing the good sense of this reasoning is to ask what would happen if everyone bet this way. Well, 90 per cent of the wagers would win and 10 per cent would lose. In contrast, assume that the people ignore the Self-Sampling Assumption and instead assume that (since they don’t know which) they are equally likely to be in a black as a white pod. In this case, they might as well toss a coin and bet on the outcome. If they do so, only 50 per cent (as opposed to 90 per cent) will win the bet. It seems clearly rational here to accept the Self-Sampling Assumption.
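The betting test is easy to simulate; a quick sketch:

```python
import random

random.seed(42)
trials = 100_000
ssa_wins = coin_wins = 0
for _ in range(trials):
    # Each trial: one person placed uniformly at random among the 100 pods,
    # 90 of which are black.
    in_black = random.randrange(100) < 90
    # The self-sampling bettor always bets 'black'.
    ssa_wins += in_black
    # The coin-tossing bettor bets 'black' or 'white' with equal probability.
    coin_wins += (random.random() < 0.5) == in_black

print(ssa_wins / trials)   # about 0.90
print(coin_wins / trials)  # about 0.50
```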

Now let’s make the pod example more similar to the tank and ‘balls in the box’ cases. We keep the hundred pods, but this time they are distinguished by being numbered from 1 to 100, painted on the exterior. Then a fair coin is tossed by an external Being. If the coin lands heads, one person is created in each of the hundred pods. If the coin lands tails, then people are created only in pods 1 to 10. Now, you are in one of the pods and must estimate whether ten or a hundred people have been created in total. Since the number was determined by the toss of a fair coin, and since you don’t know the outcome of the toss and have no access to any other relevant information, it could be argued that you should believe there is a probability of 1/2 that it landed heads and thus a probability of 1/2 that there are a hundred people. You can, however, use the Self-Sampling Assumption to assess the conditional probability of a number between 1 and 10 being painted on your pod given how the coin landed. For example, conditional on it landing heads, the probability that the number on your pod is between 1 and 10 is 1/10, since one person in ten will find themselves in those pods. Conditional on tails, the probability that your number is between 1 and 10 is 1, since everybody created (all ten of them) must be in one of those pods.

Suppose now that you open the door and discover that you are in pod number 6. Again you are asked: how did the coin land? Applying Bayes’ rule to the two conditional probabilities above, you deduce that the probability that it landed tails is not 1/2 but 10/11, much greater than even chance.
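The exact figure follows from Bayes' rule applied to the two conditional probabilities above; a minimal sketch:

```python
# Update on the observation 'my pod number is between 1 and 10',
# with equal priors on heads and tails.
p_heads = p_tails = 0.5
like_heads = 10 / 100   # heads: 100 people, ten of them in pods 1-10
like_tails = 10 / 10    # tails: all ten people are in pods 1-10
posterior_tails = (p_tails * like_tails) / (
    p_tails * like_tails + p_heads * like_heads)
print(posterior_tails)  # 10/11, about 0.91
```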

The final step is to transpose this reasoning to our actual situation here on Earth. Let’s assume for simplicity that there are just two possibilities. Early extinction: the human race goes extinct in the next century and the total number of humans that will ever have existed is, say, 200 billion. Late extinction: the human race survives the next century, spreads through the Milky Way, and the total number of humans is 200,000 billion. Corresponding to the prior probability of the coin landing heads or tails, we now have some prior probability of early or late extinction, based on current existential threats such as nuclear annihilation. Finally, corresponding to finding that you are in pod number 6, we have the fact that your birth rank is about 108 billion (that’s approximately how many humans have lived before you). Just as finding you are in pod 6 increased the probability of the coin having landed tails, so finding that you are human number 108 billion (about halfway to 200 billion) gives you much more reason, whatever the prior probability of extinction based on other factors, to think that early extinction (200 billion humans) is much more probable than late extinction (200,000 billion humans).
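The same update can be run with the illustrative numbers from this paragraph. The 50/50 prior below is an assumption made purely for the sketch; the Doomsday argument produces a large shift towards early extinction from any non-extreme prior.

```python
# Doomsday update: equal priors on early and late extinction, assumed
# purely for illustration.
n_early, n_late = 200e9, 200_000e9
prior_early = prior_late = 0.5
like_early = 1 / n_early    # your birth rank is uniform over all who ever live
like_late = 1 / n_late
post_early = (prior_early * like_early) / (
    prior_early * like_early + prior_late * like_late)
print(post_early)  # about 0.999: early extinction dominates on these priors
```

Because the hypothetical late-extinction population is 1,000 times the early one, the likelihood ratio in favour of early extinction is 1,000 to 1 for any birth rank consistent with both hypotheses.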

Essentially, then, the Doomsday argument transfers the logic of the laws of probability to the survival of the human race. To date there have been roughly 110 billion humans on Earth, about 7 per cent of whom are alive today. At least, these are indicative estimates. On the same basis as the tank, balls-in-the-box and pod problems, a reasonable estimate, other things equal, is that we are about halfway along the timeline. Projecting demographic trends forward, this makes our best estimate of the end of the timeline of the human race as we know it fall within this millennium.

That is the Doomsday argument.

**References and Links**

Nick Bostrom (2002). A Primer on the Doomsday Argument. http://www.anthropic-principle.com/?q=anthropic_principle/doomsday_argument

Doomsday Argument. Lesswrongwiki. https://wiki.lesswrong.com/wiki/Doomsday_argument

Doomsday Argument. RationalWiki. https://rationalwiki.org/wiki/Doomsday_argument

Doomsday Argument. Wikipedia. https://en.wikipedia.org/wiki/Doomsday_argument