Rethinking Probability
The Logic of Uncertainty
There’s a quiet disagreement sitting at the heart of modern science, and it has been there for over a century. We use probability everywhere, to build quantum computers, to price insurance policies, to model pandemics, to guide decisions under uncertainty. It is the invisible mathematics of almost everything we do. And yet, if you gather a physicist, a statistician, and a philosopher in a room and ask them what probability is, you will get three different answers. Not fringe answers. Not confused ones. Three serious, well-developed, mutually incompatible accounts. Each captures something important. None quite captures the whole. Understanding why turns out to be one of the more illuminating philosophical journeys available to anyone willing to take it.
The First Answer: Probability as Frequency
Start with the most intuitive answer, the one that underlies most of science: probability as frequency. A 50% chance of heads just means that if you tossed a coin a lot of times, roughly half the results would be heads. Probability, on this view, is the long-run average behaviour of repeatable events. This is the frequentist theory, and it has the considerable virtue of being empirically grounded. You can, in principle, go out and measure it. The problem is that most of the things we most want to reason about are not like coin flips. What is the frequency-based probability that a particular volcano erupts within the next decade, or that a particular patient survives a particular operation, or that a specific accused person committed a specific crime? These are one-off events. There is no infinite series of repetitions to appeal to, and any attempt to construct one, to define a reference class of “similar” events, quickly becomes arbitrary. Even in simple cases, the view struggles. Flip a fair coin three times and get heads twice: the observed frequency is two-thirds, but the probability was surely 50%. A small sample can mislead, yet on the frequentist theory the sample is supposed to define the probability. Frequency tells us something important, but it cannot be the whole story.
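The gap between small-sample frequency and long-run frequency is easy to see by simulation. Here is a minimal Python sketch (the flip counts and seed are illustrative choices, not from the text): runs of three flips scatter widely, while a long run settles near one half.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def observed_frequency(n_flips: int) -> float:
    """Frequency of heads in n flips of a fair coin (each flip P(H) = 0.5)."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# Five separate runs of three flips: frequencies jump around.
small = [observed_frequency(3) for _ in range(5)]

# One long run: the frequency settles close to 0.5.
large = observed_frequency(100_000)

print(small)             # each value is one of 0, 1/3, 2/3, or 1
print(round(large, 3))   # close to 0.5
```

If the sample defined the probability, each three-flip run would define a different probability for the same coin, which is the frequentist's embarrassment in miniature.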
The Second Answer: Probability as Propensity
Dissatisfied with frequency, some philosophers and scientists turned to propensity. On this view, probability is a physical tendency inherent in the object or system itself. The symmetry and weight distribution of a fair coin give it a genuine, physical 50% push towards heads on any given toss. Probability is not a summary of what has happened; it is a disposition of what is there.
This has a certain elegance. It explains why we feel confident assigning a 50% probability to a coin we have never tossed, and it roots probability in the causal structure of the world rather than in our counting of outcomes. But it runs into a serious problem when we try to reason backwards. Suppose a fair coin has been tossed three times, and someone tells you that exactly two of the results were heads. What is the probability that the first toss was heads? The mathematics is clear: there are three equally likely ways to get two heads in three coin tosses — HHT, HTH, THH — and two of them begin with heads, so the answer is two in three, about 67%. But the physical propensity of that coin on the first toss was 50%, and nothing about the coin changed when you acquired new information. Propensity theory is essentially forward-looking. It describes causal push, and it becomes awkward or incoherent when applied to inference about the past. Yet that kind of backward reasoning is exactly what detectives, historians, doctors, and scientists do constantly.
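The backward inference above is pure enumeration, and a few lines of Python make that explicit: list all eight equally likely sequences, keep the ones consistent with the evidence, and count.

```python
from itertools import product
from fractions import Fraction

# All eight equally likely sequences of three fair-coin tosses.
sequences = list(product("HT", repeat=3))

# Condition on the evidence: exactly two of the results were heads.
two_heads = [s for s in sequences if s.count("H") == 2]

# Among the sequences consistent with the evidence, count those
# that begin with heads.
first_heads = [s for s in two_heads if s[0] == "H"]

p = Fraction(len(first_heads), len(two_heads))
print(p)  # 2/3 -- the three live options are HHT, HTH, THH
```

Nothing physical about the coin appears anywhere in this calculation; the 2/3 comes entirely from the relation between the evidence and the hypotheses it leaves open.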
The Third Answer: Probability as Belief
Frustrated with theories that root probability in the world, a third tradition located it instead in the mind. Probability, on the subjectivist view, is a degree of belief, a measure of how confident you are in a proposition. The great advantage of this view is its flexibility. You can assign probabilities to anything: past events, unique occurrences, mathematical conjectures. And the formal apparatus of Bayesian updating gives a rational procedure for revising those beliefs in light of new evidence. But there is a cost. If probability is just belief, what makes one belief right and another wrong? If someone sincerely feels 90% confident that a fair coin will land heads, the strict subjectivist cannot say they are mistaken, only that they are likely to lose money in the long run. But surely the problem is not merely pragmatic. The probability of a fair coin landing heads is not whatever anyone happens to believe; it is 50%, and that seems like an objective fact about the relationship between the coin and the world, not a psychological report. Subjectivism captures something essential about uncertainty, but it risks detaching probability from any objective standard.
A Different Idea: Probability as Logic
What all three theories are groping towards, but none quite reaches, is this: probability is fundamentally a relationship between evidence and conclusion. Looked at this way, probability is not a property of objects, nor a frequency in the world, nor a feeling in anyone’s head. It is instead an objective, logical measure of how strongly a given body of evidence supports a given proposition. This idea has a distinguished pedigree. It was Keynes who first developed it systematically, in his Treatise on Probability in 1921, and it has been refined and defended by a number of philosophers since. The most searching recent treatment comes from Nevin Climenhaga, whose work over the past decade has given the logical interpretation a more rigorous and complete foundation than it has previously enjoyed. Think of it this way. In classical deductive logic, if your premises fully entail your conclusion, you have certainty, you have complete support. Probability extends this idea into a more realistic setting. Most of the time, our evidence does not guarantee our conclusions — it only supports them, to a greater or lesser degree. Probability measures exactly how strong that support is. This is not a matter of opinion. Just as 2+2=4 is not what any particular person believes but what the logical structure of arithmetic compels, the degree to which a body of evidence supports a conclusion is not up for negotiation. It is an objective, mind-independent relation between propositions.
How the Numbers Are Determined
But saying that probability is a logical relation between evidence and conclusion still leaves a pressing question: how do we work out what the actual probabilities are? This is where Climenhaga’s structural contribution becomes particularly useful. He distinguishes between what he calls basic and non-basic probabilities. Some probabilities are foundational, in that they cannot be derived from other probabilities but must be read off directly from the explanatory relationships between propositions. These basic probabilities are the probabilities of outcomes given their direct explanations: the probability of a coin landing heads given the physical facts about the coin and the toss, or the probability of a radioactive atom decaying within a given period given the fundamental physics of that atom. All other probabilities — the complex, derived ones we care about in science, medicine, and law — are then calculated from these foundations using the standard mathematical rules of probability. To make this tractable, Climenhaga draws on the framework of Bayesian networks: directed graphs in which nodes represent variables and arrows represent explanatory relationships. The structure of such a network makes explicit which variables directly explain which others, and allows probabilities to flow systematically through the network, downward from cause to effect, and upward from effect to cause. This unifies two kinds of inference that had traditionally seemed quite different. The meteorologist predicting tomorrow’s storm and the detective reasoning back from a clue to a suspect are both traversing the same logical structure, just in opposite directions. As for the values of the basic probabilities themselves, the question of where the numbers actually come from, Climenhaga considers the principle of indifference.
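The meteorologist and the detective traversing the same network in opposite directions can be sketched with a two-node network. The numbers below are illustrative stand-ins, not figures from Climenhaga: a basic probability for the cause, a basic probability for the effect given each state of its direct explanation, and everything else derived by the standard rules.

```python
from fractions import Fraction

F = Fraction

# Two-node network: Storm -> WetGround. The arrow encodes explanation:
# whether a storm occurs directly explains the state of the ground.
# All numerical values here are illustrative assumptions.
p_storm = F(3, 10)                    # basic probability of the cause
p_wet_given = {True: F(9, 10),        # P(wet | storm)
               False: F(1, 10)}       # P(wet | no storm)

# Downward (predictive) inference: from cause to effect,
# by the law of total probability.
p_wet = sum(p_wet_given[s] * (p_storm if s else 1 - p_storm)
            for s in (True, False))

# Upward (explanatory) inference: from an observed effect back to its
# cause, via Bayes' theorem -- the same network traversed in reverse.
p_storm_given_wet = p_wet_given[True] * p_storm / p_wet

print(p_wet)              # 17/50
print(p_storm_given_wet)  # 27/34
```

Only the two conditional probabilities and the prior are basic; the prediction and the backward inference are both derived from them, which is the unification the network structure buys.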
The Principle of Indifference
This is the classical idea that, in the absence of any relevant evidence distinguishing between outcomes, we should assign them equal probability. A fair coin gets 50% on each side not because we have watched it land a million times, but because we have no reason grounded in its physical constitution to favour one outcome over the other. Now, the principle of indifference has a troubled history — it is notoriously sensitive to how you describe the outcomes — but Climenhaga argues that it can be applied coherently once we have correctly identified which probabilities are basic and which are derived. Asking the right structural question first makes the substantive answer more tractable. This framing dissolves several of the puzzles that plagued the older theories. The coin flipped only three times is no problem: given the evidence of its symmetry and the laws of physics, the basic probability of heads is 50%, regardless of what the small sample happened to show. Backward inference is equally natural: when you learn that two of three coin tosses were heads, your evidence changes, and so does the degree to which it supports different hypotheses about what happened earlier. Nothing in the physical world has changed, but your evidential relationship to the event has. And there is no subjectivism, because the logical relations between propositions are objective. Your gut feeling cannot alter them any more than it can alter the sum of two and two.
The Hard Edges
None of this makes probability philosophically tidy. There are genuine difficulties at the boundaries, and it is worth being honest about them. One concerns ultimate explanations. If probability is always calculated relative to some background evidence, what happens when we try to reason about the origin of the universe, or the laws of physics themselves? There is no deeper evidential base to appeal to. Questions like these push us beyond probability into metaphysics, and anyone who tells you they have been cleanly resolved is overstating things. Another concerns necessary truths. Mathematical theorems are either true or false — there is no randomness in the underlying reality. Yet we clearly do reason under uncertainty about mathematics, and it seems perfectly sensible to say that some unproved conjecture is probably true. Making sense of this within the logical framework requires care: the uncertainty lies not in the world, but in the relationship between our evidence and a truth that is already fixed. These are genuine difficulties, not embarrassments. Every foundational theory in philosophy lives alongside unsolved problems; the question is whether the framework is illuminating enough to be worth working within.
The Deeper Lesson
The deepest thing the logical view teaches is that uncertainty is not a defect in our thinking. It is a feature of our position in the world. We never stand outside the evidence: we reason from within it. And probability is the structure that connects what we know to what we are entitled to conclude. Not a guess, not a feeling, but a constraint. Ultimately, it is the logic of how far the evidence really goes.