
Putting the numbers to the COVID test


A proper assessment of test results involves weighing the test’s reliability against the prevalence (risk) of COVID-19 in the population. This leads to some surprising insights.

Numbers
Humans are not great with numbers (author included). Most of us can count, but we struggle to interpret those counts in the right way. A well-documented example from a few years ago illustrates the point: a study made headlines reporting that eating fifty grams of red meat (i.e., beef, pork or lamb) per day led to an 18% increase in bowel cancer risk. The message conveyed in the media storm was clear: ‘Red meat will kill you’. But this turned out to rest on a common misinterpretation, in which a relative increase in risk is mistaken for an absolute risk. Eating red meat does not yield an 18% risk of developing bowel cancer. Rather, the increase should be interpreted in light of the overall risk, which for bowel cancer is around 5%. Statistician David Spiegelhalter, who carefully scrutinized the ‘sausage wars’, points out that the actual increase is about one percentage point (18% of 5%), giving an overall risk of roughly 6%. There may be other good reasons to cut down on meat consumption (e.g., environmental concerns or animal welfare), but in terms of bowel cancer risk, red meat consumption increases the risk not by eighteen in a hundred, but by one in a hundred.
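To make the distinction concrete, here is a minimal sketch in Python (not from the original study; it simply uses the rounded figures from the example above) of the difference between a relative increase and an absolute risk:

```python
# Relative vs absolute risk, using the rounded figures from the red-meat example.
baseline_risk = 0.05       # overall bowel cancer risk of roughly 5%
relative_increase = 0.18   # the reported 18% *relative* increase

# The common misreading: treating the relative increase as the risk itself.
misread_risk = relative_increase                         # "18 in a hundred" -- wrong

# The correct reading: the 18% applies to the 5% baseline.
absolute_increase = baseline_risk * relative_increase    # 0.009, about one percentage point
overall_risk = baseline_risk + absolute_increase         # roughly 0.06, i.e. 6%

print(f"Absolute increase: {absolute_increase:.1%}")     # ~0.9%
print(f"Overall risk:      {overall_risk:.1%}")          # ~5.9%
```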

Sensitivity and Specificity
What the above example illustrates is not just relevant for dietary guidelines, but also for the current COVID-19 pandemic, namely when considering whether, and to what extent, people should be tested for the virus. As with any test, COVID-19 test results can be more or less accurate, due to characteristics of the test itself, the testing environment, and, of course, human error. This means that occasionally a test will yield a negative outcome when the person tested actually does have the virus, or a positive outcome when in reality the person is not infected. These are called false negative and false positive outcomes, and they are captured by a test’s sensitivity (the proportion of infected people who test positive) and specificity (the proportion of uninfected people who test negative).
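To spell out those two terms, here is a small sketch with made-up counts (purely illustrative, chosen to match the 80% and 95% figures used below) showing how sensitivity and specificity follow from the four possible test outcomes:

```python
# Hypothetical counts for 1,000 tested people (invented for illustration only).
true_positives  = 80    # infected and test positive
false_negatives = 20    # infected but test negative
true_negatives  = 855   # not infected and test negative
false_positives = 45    # not infected but test positive

# Sensitivity: of the people who have the virus, how many does the test catch?
sensitivity = true_positives / (true_positives + false_negatives)   # 0.80

# Specificity: of the people who do not have the virus, how many are cleared?
specificity = true_negatives / (true_negatives + false_positives)   # 0.95

print(f"Sensitivity: {sensitivity:.0%}, specificity: {specificity:.0%}")
```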

Prevalence
So what does this have to do with the discussion about the health risks of eating red meat? As in the above example, a proper assessment of test results involves considering the test’s sensitivity and specificity in light of the prevalence (risk) of COVID-19 in the population. In reality one cannot know the exact prevalence, so the reported infection rate is more an approximation than an objective fact (although with more data available, scientists are closing in). Still, it is safe to assume the prevalence is higher among people who report symptoms (i.e., those who seek a confirmatory test) than in a random sample (for example, when screening the general population).

The real risk
The following illustration shows how the accuracy of the test varies with COVID-19 prevalence (also see the Table below). Suppose the risk of contracting COVID-19 prior to testing is five percent. Now, assume the sensitivity of a test is 80% (the proportion of people who actually have COVID-19 and test positive), and the specificity is 95% (the proportion of people who do not have COVID-19 and test negative). What is the chance that someone is infected if their test result is positive? Most of us would assume this to be close to 80%. But in reality, it is lower, because we should also take into account the base-rate prevalence of COVID-19 in the sample. If five out of a hundred people have COVID-19 (and 95 don’t), four tests are true positives (.80*5) and five are false positives (.05*95, rounded). This means the chance of being infected given a positive test is 4/(4+5) = 44%, against a false positive risk of 5/(4+5) = 56%. Note that the balance between false positives and false negatives shifts as the prevalence of COVID-19 changes: with the same sensitivity and specificity, if the prevalence prior to testing is 80%, the risk of a false positive decreases to 1/(64+1) = 2%, whereas the risk of a false negative increases to 16/(16+19) = 46%!
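The same arithmetic can be written out as a short calculation. The sketch below is not taken from the original post; it simply applies the definitions above, rounding fractional cases to whole people, and reproduces the figures used in the text and in the table at the end of this post:

```python
def outcomes_per_100(prevalence, sensitivity=0.80, specificity=0.95):
    """Expected test outcomes per 100 people tested, rounded to whole people."""
    infected = prevalence * 100
    healthy = 100 - infected
    tp = round(sensitivity * infected)          # infected and correctly flagged
    fn = round((1 - sensitivity) * infected)    # infected but missed
    tn = round(specificity * healthy)           # healthy and correctly cleared
    fp = round((1 - specificity) * healthy)     # healthy but flagged
    return tp, fp, fn, tn

# At 5% prevalence: ~56% of positive results are false, ~1% of negative results are false.
# At 80% prevalence: ~2% of positive results are false, ~46% of negative results are false.
for prev in (0.05, 0.80):
    tp, fp, fn, tn = outcomes_per_100(prev)
    print(f"prevalence {prev:.0%}: TP={tp}, FP={fp}, FN={fn}, TN={tn}, "
          f"chance a positive is false: {fp / (tp + fp):.0%}, "
          f"chance a negative is false: {fn / (fn + tn):.0%}")
```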

The implications
Does all this mean we shouldn’t be testing? Not necessarily. But both false negatives and false positives have important implications: they may lead to people feeling unjustly safe, or being subjected to unnecessary restrictions, so it is important to grasp what affects a test’s reliability. A sound approach, if time and money permit, is to seek convergence across tests and to repeat testing. If the chance that a positive result is a false positive is around 56%, as in the above example, a second positive test already shrinks that chance dramatically: updating the roughly 44% chance of infection with the same sensitivity and specificity leaves a false positive risk of only about 7%, and a third test reduces it further still. So, in sum, there’s good reason to put the numbers to the test.
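To see how a repeat test changes the picture, here is a minimal sketch applying Bayes’ rule twice, under the assumption that the second test errs independently of the first (with exact rather than rounded numbers, the single-test false positive risk comes out at about 54% instead of the rounded 56% above):

```python
def p_infected_given_positive(prior, sensitivity=0.80, specificity=0.95):
    """Bayes' rule: probability of infection after one positive test result."""
    true_pos = sensitivity * prior                 # positives among the infected
    false_pos = (1 - specificity) * (1 - prior)    # positives among the uninfected
    return true_pos / (true_pos + false_pos)

prior = 0.05                                       # 5% prevalence before any test
after_one = p_infected_given_positive(prior)       # ~0.46 chance of infection
after_two = p_infected_given_positive(after_one)   # updated belief fed back in as the new prior

print(f"False positive risk after one positive test:  {1 - after_one:.0%}")   # ~54%
print(f"False positive risk after two positive tests: {1 - after_two:.0%}")   # ~7%
```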

| Number of actual COVID-19 cases per 100 tested | True positives | False positives | False negatives | True negatives |
|---|---|---|---|---|
| 5 | 4 (44%) | 5 (56%) | 1 (1%) | 90 (99%) |
| 10 | 8 | 4 | 2 | 86 |
| 20 | 16 | 4 | 4 | 76 |
| 40 | 32 | 3 | 8 | 57 |
| 60 | 48 | 2 | 12 | 38 |
| 80 | 64 (98%) | 1 (2%) | 16 (46%) | 19 (54%) |

Number of positive and negative test outcomes for a test with a sensitivity of 80% and a specificity of 95%. Between brackets: chance of a true/false outcome given a positive/negative test.
