When Should We Believe the Diagnosis?
Exploring the World of False Positives
A version of this article appears in Twisted Logic: Puzzles, Paradoxes, and Big Questions. By Leighton Vaughan Williams. Chapman & Hall/CRC Press. 2024.
THE FLU TEST SCENARIO: SETTING THE STAGE
Imagine this scenario: you twist your knee in a skateboarding mishap and decide to visit your doctor to have it looked at, just to be on the safe side. At the surgery, they run a routine test for a flu virus on all their patients, based on the estimate that about 1 out of every 100 patients visiting them will have the virus. This flu test is known to be pretty accurate—it gets the diagnosis right 99 out of 100 times. In other words, it correctly identifies 99% of people who are sick as sick, and equally importantly, it correctly clears 99% of those who don’t have the flu virus.
Now, you take the test, and to your surprise, it comes back positive. What does this mean for you, exactly? You dropped in to have your knee looked at, and now it seems you have the flu.
To summarise the situation: you’ve twisted your knee and, while at the doctor’s office, you’re given a routine flu test. The test is 99% accurate, and yours comes back positive. But what are the actual chances that you have the flu? This scenario is perfect for exploring Bayes’ theorem and understanding false positives.
BREAKING DOWN THE INVERSE FALLACY
Here, we step into the tricky territory of probabilities, a place where common sense can often mislead us. So, what is the chance that you do have the virus?
The intuitive answer is 99%, as the test is 99% accurate. But is that right?
The information we are given relates to the probability of testing positive given that you have the virus. What we want to know, however, is the probability of having the virus given that you test positive. This is a crucial difference.
Common intuition conflates these two probabilities, but they are very different. If the test is 99% accurate, this means that 99% of those with the virus test positive. But this is NOT the same thing as saying that 99% of patients who test positive have the virus. This is an example of the ‘Inverse Fallacy’ or ‘Prosecutor’s Fallacy’. In fact, those two probabilities can diverge markedly.
To summarise, common sense might suggest a 99% chance of having the flu, aligning with the test’s accuracy. However, this confuses the probability of testing positive when having the flu with the probability of having the flu when testing positive—a common mistake known as the ‘Inverse Fallacy’.
So what is the probability you have the virus if you test positive, given that the test is 99% accurate? To answer this, we can use Bayes’ theorem.
APPLYING BAYES’ THEOREM
Bayes’ theorem, as we have seen, uses three values:
Your initial chance of having the flu before taking the test, which in our scenario was estimated to be 1 out of 100 or 0.01.
The likelihood of the test showing a positive result if you have the flu, which we know to be 99% or 0.99 based on the accuracy of the test.
The likelihood of the test showing a positive result if you don’t have the flu, which is 1% or 0.01, again based on the accuracy of the test.
When we plug these into the Bayesian formula, we end up with a surprising result. If you test positive for the flu, despite the test being 99% accurate, there’s actually only a 50% chance that you really have it.
In other words, to find the real probability of having the flu, we consider:
Prior Probability: Your initial chance of having the flu is 1% (1 in 100).
True Positive Rate: The test correctly identifies the flu 99% of the time.
False Positive Rate: The test incorrectly indicates flu in healthy individuals 1% of the time.
The formula is expressed as follows:
ab/[ab + c(1 − a)]

where

a is the prior probability, i.e. 0.01,
b is the true positive rate, i.e. 0.99, and
c is the false positive rate, i.e. 0.01.

Plugging in these values gives (0.01 × 0.99)/[(0.01 × 0.99) + 0.01 × 0.99] = 0.0099/0.0198 = 0.5, or 50%.
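The calculation above can be checked with a few lines of code. This is a minimal sketch of the formula ab/[ab + c(1 − a)], using the chapter’s numbers; the function name `posterior` is just an illustrative label.

```python
def posterior(a, b, c):
    """P(flu | positive test) = ab / [ab + c(1 - a)].

    a: prior probability of having the flu
    b: true positive rate (P(positive | flu))
    c: false positive rate (P(positive | no flu))
    """
    return a * b / (a * b + c * (1 - a))

# The chapter's scenario: 1-in-100 prior, 99% accurate test.
print(posterior(0.01, 0.99, 0.01))  # 0.5
```

Note that the denominator is simply the total probability of testing positive, whether you have the flu or not.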
Using Bayes’ theorem, we find a surprising result: even with a 99% accurate test, there’s only a 50% chance you have the flu after a positive result.
GRAPPLING WITH PROBABILITIES
The result can seem counterintuitive, and it’s worth taking a moment to understand why. The key is to remember that the flu is relatively rare: only 1 in 100 patients have it. While the test may be 99% accurate, we have to take into account the rarity of the disease among those who are tested. The chance of having the flu before taking the test and the chance of the test making an error are both 1 in 100. Because these two probabilities are the same, when you test positive, the chance that you actually have the flu is just 1 in 2.
It is basically a competition between how rare the virus is and how rarely the test is wrong. In this case, there is a 1 in 100 chance that you have the virus before taking the test, and the test is wrong one time in 100. These two probabilities are equal, so the chance that you have the virus when testing positive is 1 in 2, despite the test being 99% accurate.
Put another way, the counterintuitive outcome arises because the flu is relatively rare (1 in 100), balancing against the test’s accuracy.
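The ‘competition’ between rarity and accuracy becomes vivid if we count patients directly. The sketch below imagines 10,000 patients (an illustrative figure, not from the text) and tallies who tests positive, assuming the chapter’s 1-in-100 prevalence and 99% accuracy.

```python
# Count outcomes for an illustrative cohort of 10,000 patients.
patients = 10_000
flu = patients // 100            # 100 patients actually have the flu
healthy = patients - flu         # 9,900 do not

true_positives = round(flu * 0.99)       # 99 sick patients test positive
false_positives = round(healthy * 0.01)  # 99 healthy patients also test positive

# Among all positive tests, what fraction really have the flu?
p = true_positives / (true_positives + false_positives)
print(true_positives, false_positives, p)  # 99 99 0.5
```

The 99 genuine cases and 99 false alarms are equally numerous, which is exactly why a positive result leaves you at 1 in 2.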
THE IMPLICATION OF SYMPTOMS AND PRIOR PROBABILITIES
This calculation changes if we add in some more information. Let’s say you were already feeling unwell with flu-like symptoms before the test. In this case, your doctor might think you’re more likely to have the flu than the average patient, and this would increase your ‘prior probability’. Consequently, a positive test in this context would be more indicative of actually having the flu, as it aligns with both the symptoms and the test result.
In this way, Bayes’ theorem incorporates both the statistical likelihood and real-world information. It’s a powerful tool to help us understand probabilities better and to make informed decisions. The bottom line, though, is that while a positive test result can be misinterpreted, it should still be taken seriously, especially in conjunction with symptoms.
The Role of Symptoms in Adjusting Probabilities
If you had flu-like symptoms before the test, this would increase your ‘prior probability’. Consequently, a positive test in this context would be more indicative of actually having the flu, as it aligns with both the symptoms and the test result.
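We can sketch how a higher prior changes the answer. The priors below other than 1% are illustrative numbers, not figures from the chapter; they simply show the posterior climbing as the doctor’s initial suspicion grows.

```python
def posterior(a, b=0.99, c=0.01):
    """P(flu | positive) = ab / [ab + c(1 - a)], with the test's 99% accuracy."""
    return a * b / (a * b + c * (1 - a))

# Same test, different priors (the 10%, 30%, 50% figures are hypothetical).
for prior in (0.01, 0.10, 0.30, 0.50):
    print(f"prior {prior:.2f} -> posterior {posterior(prior):.3f}")
```

With a 30% prior, for instance, a positive test pushes the probability of flu to roughly 98%: the same evidence carries far more weight once the symptoms make flu plausible in the first place.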
CONCLUSION: THE BROAD APPLICATION OF BAYESIAN THINKING
While we’ve used the example of a flu test, the principles of Bayes’ theorem apply beyond the doctor’s door. From the courtroom to the boardroom, from deciding if an email is spam to weighing up the reliability of a rumour, we often need to update our beliefs in the face of new evidence. Remember, a single piece of evidence should always be weighed against the broader context and initial probabilities.
