A tutorial on contingency tables

Many results in science and medicine can be compactly represented as a table containing the co-occurrence frequencies of two or more discrete random variables. This data structure is called a contingency table (a name suggested by Karl Pearson in 1904). This tutorial covers descriptive and inferential statistics for the simplest form of contingency table, in which both variables are binary (and thus the table is 2×2).

Let’s begin by taking a look at a real-world example: graduate school admissions to a single department at UC Berkeley in 1973. (This is part of a famous real-world data set which may be of interest to the reader.) Our dependent variable is admission status (“admitted” or “rejected”), and we’ll use applicant gender as the independent variable.

         Admitted   Rejected
Male          120        205
Female        202        391

I can scarcely look at this table without seeing the inevitable question: are admissions in this department gender biased?

Odds ratios

37% (= 120 / 325) of the male applicants were admitted, and 34% (= 202 / 593) of the female applicants were. Is that 3% difference a meaningful one? It is tempting to focus on the “3%”, but the researcher should resist this temptation. The interpretation of a difference between admission rates in the two groups (defined by the independent variable) is very sensitive to the base rate, in this case the overall admission rate. Intuitively, if 2% of males were admitted and only 1% of females, we would certainly consider the possibility that admissions are gender-biased: we would estimate that males are twice as likely to be admitted as females. But we would be much less likely to say there is an admissions bias if those percentages were 98% and 99%. Yet in both scenarios the admission rates differ by exactly 1%.

A better way to quantify the effect of gender on admissions, one that is insensitive to the overall admission rate, is the odds ratio. The name is practically the definition, if you are familiar with the notion of odds. The odds of an event that occurs with probability P are simply P / (1 – P). In our example above, the odds of admission for a male applicant are 0.5854 (= 120 / 205), and 0.5166 (= 202 / 391) for a female applicant. The ratio of these two is 1.133 (= 0.5854 / 0.5166). As this ratio is greater than one, we say that maleness was associated with admission. This is not enough to establish bias: it simply means that males were somewhat more likely to be admitted than females.
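If you would like to check these figures yourself, here is a minimal sketch in Python (the counts are hard-coded from the table above, and the variable names are my own):

    # Berkeley department table: counts hard-coded from above.
    male_admitted, male_rejected = 120, 205
    female_admitted, female_rejected = 202, 391

    # Admission rates (these depend on the overall admission rate).
    male_rate = male_admitted / (male_admitted + male_rejected)          # ~0.369
    female_rate = female_admitted / (female_admitted + female_rejected)  # ~0.341

    # Odds and the odds ratio (insensitive to the overall admission rate).
    male_odds = male_admitted / male_rejected          # ~0.5854
    female_odds = female_admitted / female_rejected    # ~0.5166
    odds_ratio = male_odds / female_odds               # ~1.133

    print(f"male rate = {male_rate:.4f}, female rate = {female_rate:.4f}")
    print(f"odds ratio = {odds_ratio:.4f}")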

Tests for association

We can now return to the original question: is this table likely to have arisen if there is in fact no gender bias in admissions? Pearson’s chi-squared test asks how likely a departure from independence at least as large as the one observed would be under the null hypothesis that there is no association between the two variables. We reject the null hypothesis of no association when this probability is sufficiently small (often at P < .05). For this table, χ2 = 0.6332, and the corresponding probability under the null hypothesis (no association between gender and admission status) is P = 0.4262. So, we’d probably say the observed difference in admission rates is not sufficient to establish that females were less likely to be admitted than males in this department.
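A minimal sketch of the test in Python, assuming SciPy is available (note that scipy.stats.chi2_contingency applies Yates’ continuity correction to 2×2 tables by default, which appears to reproduce the χ2 value quoted above):

    import numpy as np
    from scipy.stats import chi2_contingency

    # Berkeley department table: rows are gender, columns are admitted/rejected.
    table = np.array([[120, 205],
                      [202, 391]])

    # chi2_contingency applies Yates' continuity correction to 2x2 tables by
    # default; this matches the chi-squared value quoted above.
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.4f}, p = {p:.4f}, dof = {dof}")   # chi2 = 0.6332, p = 0.4262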

The chi-squared test for contingency tables depends on an approximation which is asymptotically valid, but inadequate for small samples; a popular (albeit arbitrary) rule of thumb is that a sample is “small” if any of the four cells has an expected count of fewer than 5 observations. The best solution is to use an alternative, Fisher’s exact test; as the name suggests, it provides an exact p-value. Rather than working with the χ2 statistic, Fisher’s test evaluates the null hypothesis that the true (population) odds ratio is equal to 1.
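A sketch of Fisher’s exact test on the same table, again assuming SciPy; fisher_exact returns the sample odds ratio together with the exact two-sided p-value:

    from scipy.stats import fisher_exact

    # Same table as before. fisher_exact returns the sample odds ratio together
    # with an exact two-sided p-value for the null hypothesis of odds ratio = 1.
    table = [[120, 205],
             [202, 391]]
    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.4f}, p = {p:.4f}")

For a sample this large, the exact and approximate tests should agree closely; the exact test really earns its keep when cell counts are small.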

Accuracy, precision, and recall

In the above example, we attempted to measure the association of two random variables which represented different constructs (i.e., admission status and gender). Contingency tables can also be used to look at random variables which are in some sense imperfect measures of the same underlying construct. In a machine learning context, one variable might represent the predictions of a binary classifier, and the other the labels taken from the “oracle”, the trusted data source the classifier is intended to approximate. Such tables are sometimes known as confusion matrices. Convention holds that one of the two outcomes should be labeled (arbitrarily if necessary) as “hit” and the other as “miss” (often the “hit” is the one which requires further attention whereas the “miss” can be ignored), and that the following labels be assigned to the four cells of the confusion matrix:

Prediction / Oracle   Hit                   Miss
Hit                   True positive (TP)    False positive (FP)
Miss                  False negative (FN)   True negative (TN)

The prediction labels are on the rows, corresponding to the independent variable (gender) in the Berkeley example, and the oracle labels are on the columns, corresponding to the dependent variable (admission status).

When both variables of the 2×2 table measure the same construct, we start with the assumption that the two random variables are associated, and instead measure agreement. The simplest measure of agreement is accuracy, which is the probability that an observation will be correctly classified.

accuracy = (TP + TN) / (TP + FP + FN + TN)

Accuracy is not always the most informative measure, for the same reason that differences in probabilities were not informative above: accuracy neglects the base rate (or prevalence). Consider, for example, the plight of the much-maligned Transportation Security Administration (TSA). Very, very few airline passengers attempt to commit a terrorist attack during their flight. Since 9/11, there have been only two documented attempted terrorist attacks by passengers on commercial airlines, one by Richard Reid (the “shoe bomber”) and one by Umar Farouk Abdulmutallab (the “underwear bomber”), and in both cases the attempts were thwarted by some combination of the attentive citizenry and the incompetence of the attacker, not by the TSA’s security theater. There are approximately 650,000,000 passenger-flights per year on US flights, so according to my back-of-envelope calculation, there have been around 7 billion passenger-flights since 9/11. In the meantime, the TSA could have achieved sky-high accuracy simply by making no arrests at all: with only two true hits in roughly 7 billion trials, a screener that always predicts “miss” is correct nearly 100% of the time. (I have, of course, ignored the possibility that security theater serves as a deterrent to terrorist activity.)

A corollary is that the TSA faces what is called the false positive paradox: a false positive (say, detaining a law-abiding citizen) is much more likely than a true positive (catching a terrorist). The TSA isn’t alone: a famous study found that few physicians used the base rate (“prevalence”) when estimating the likelihood that a patient has a particular disease, given a positive result on a diagnostic test.
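To make the arithmetic concrete, here is a toy calculation using the rough figures above (these are back-of-envelope numbers, not official statistics):

    # Back-of-envelope illustration of why accuracy misleads at extreme base
    # rates, using the rough figures from the paragraph above.
    passenger_flights = 7_000_000_000   # ~7 billion passenger-flights since 9/11
    attackers = 2                       # the two documented attempts

    # A "screener" that never flags anyone: TP = FP = 0, FN = 2, TN = the rest.
    TN = passenger_flights - attackers
    FN = attackers
    accuracy = TN / passenger_flights   # (TP + TN) / total, with TP = 0
    recall = 0 / (0 + FN)               # TP / (TP + FN)

    print(f"accuracy = {accuracy:.10f}")   # ~0.9999999997
    print(f"recall   = {recall}")          # 0.0: it catches no one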

To better account for the role of base rate, we can break accuracy down into its constituent parts. The best known of these measures is precision (or positive predictive value), which is defined as the probability that a predicted hit is correct.

precision = TP / (TP + FP)

Precision isn’t completely free of the base rate problem, however; it fails to penalize false negatives. For this, we turn to recall (or sensitivity, or true positive rate), which is the probability that a true hit is correctly discovered.

recall = TP / (TP + FN)

It is difficult to improve precision without sacrificing recall, or vice versa. Consider, for example, an information retrieval (IR) application, which takes natural language queries as input and attempts to return all documents relevant to the query. Internally, the IR system ranks all documents for relevance to the query, then returns the top n. A document which is relevant and returned by the system is a true positive, a document which is irrelevant but returned by the system is a false positive, and a document which is relevant but not returned by the system is a false negative (we’ll ignore true negatives for the time being). With this system, we can achieve perfect recall by returning all documents, no matter what the query is, though precision will be very poor. It is often helpful for n, the number of documents retrieved, to vary as a function of the query; in a sports news database, for instance, there are simply more documents about the New York Yankees than about the congenitally mediocre St. Louis Blues. We can increase precision by reducing the average n across queries, but this will also reduce recall, since there will be more false negatives.
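Here is a toy sketch of that tradeoff; the ranked list and the relevance judgments are invented for illustration:

    # Toy ranked retrieval run: 1 marks a relevant document, 0 an irrelevant
    # one, in the order the system ranked them. Assume the collection contains
    # exactly four relevant documents, all of which appear in this list.
    ranked_relevance = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
    total_relevant = sum(ranked_relevance)

    for n in (2, 5, 10):
        returned = ranked_relevance[:n]
        tp = sum(returned)                 # relevant documents actually returned
        precision_at_n = tp / n
        recall_at_n = tp / total_relevant
        print(f"n={n:2d}  precision={precision_at_n:.2f}  recall={recall_at_n:.2f}")
    # Returning more documents (larger n) raises recall but lowers precision.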

To quantify the tradeoff between precision and recall, it is conventional to use the harmonic mean of precision and recall.

F1 = (2 · precision · recall) / (precision + recall)

This measure is also known as the F-score (or F-measure), though it is properly called F1, since an F-score need not weigh precision and recall equally. In many applications, however, the real-world costs of false positives and false negatives are not equivalent. In the context of screening for serious illness, a false positive would simply lead to further testing, whereas a false negative could be fatal; consequently, recall is more important than precision. On the other hand, when the resources necessary to derive value from true positives are limited (such as in fraud detection), false negatives are considered more acceptable than false positives, and so precision is ranked above recall.
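The general Fβ score attaches β times as much importance to recall as to precision; a small sketch (with made-up precision and recall values) shows how β shifts the score:

    def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
        """General F-score: beta > 1 weights recall more heavily, beta < 1
        weights precision more heavily, and beta = 1 gives the familiar F1."""
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # Made-up precision and recall, just to show how beta shifts the score.
    print(f_beta(0.90, 0.50, beta=1.0))   # ~0.64 (plain F1)
    print(f_beta(0.90, 0.50, beta=2.0))   # ~0.55 (pulled toward recall)
    print(f_beta(0.90, 0.50, beta=0.5))   # ~0.78 (pulled toward precision)

If you are working from label vectors rather than a table, scikit-learn exposes the same computation as fbeta_score.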

Another thing to note about F1: the harmonic mean of two positive numbers is always closer to the smaller of the two. So, if you want to maximize F1, the best place to start is to increase whichever of the two terms (precision and recall) is smaller.

To make this all a bit more concrete, consider the following 2×2 table.

Prediction / Oracle   Hit   Miss
Hit                    10      2
Miss                    5     20

We can see that false negatives are somewhat more common than false positives, so we could have predicted that precision (0.8333) would be somewhat greater than recall (0.6667), and that F1 would be somewhat closer to the latter (0.7407).
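These figures are easy to verify with a short sketch (the counts are hard-coded from the table above):

    # Verify the figures for the table above (rows = prediction, columns = oracle).
    TP, FP = 10, 2
    FN, TN = 5, 20

    accuracy = (TP + TN) / (TP + FP + FN + TN)            # 30/37 ~= 0.8108
    precision = TP / (TP + FP)                            # 10/12 ~= 0.8333
    recall = TP / (TP + FN)                               # 10/15 ~= 0.6667
    f1 = 2 * precision * recall / (precision + recall)    # ~0.7407

    print(f"precision = {precision:.4f}, recall = {recall:.4f}, F1 = {f1:.4f}")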

This of course does not exhaust the space of possible summary statistics of a 2×2 confusion matrix: see Wikipedia for more.

Response bias

It is sometimes useful to directly quantify predictor bias, which can be thought of as a signed measure of the degree to which the prediction system’s base rate differs from the true base rate. A positive bias indicates that the system predicts “hit” more often than it would if it hewed to the true base rate, and a negative bias indicates that “hit” is predicted less often than the true base rate would suggest. One conventional measure of bias is Bd″, which is a function of recall and the false positive rate (FAR, also known as the false alarm rate), defined as follows.

FAR = FP / (TN + FP)

Bd″ has a rather unwieldy formula; it is

Bd″ = [(recall · (1 – recall)) – (FAR · (1 – FAR))] / [(recall · (1 – recall)) + (FAR · (1 – FAR))]

when recall ≥ FAR, and

Bd″ = [(FAR · (1 – FAR)) – (recall · (1 – recall))] / [(recall · (1 – recall)) + (FAR · (1 – FAR))]

otherwise (i.e., when FAR > recall).
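Here is a sketch implementing FAR and Bd″ exactly as written above, applied to the 2×2 table from the previous section (so recall = 10/15 and FAR = 2/22):

    # FAR and Bd'' as defined above, applied to the earlier 2x2 table
    # (TP = 10, FP = 2, FN = 5, TN = 20).
    TP, FP, FN, TN = 10, 2, 5, 20

    recall = TP / (TP + FN)        # 10/15 ~= 0.6667
    far = FP / (TN + FP)           # 2/22  ~= 0.0909

    r_term = recall * (1 - recall)
    f_term = far * (1 - far)
    if recall >= far:
        bias = (r_term - f_term) / (r_term + f_term)
    else:
        bias = (f_term - r_term) / (r_term + f_term)

    print(f"FAR = {far:.4f}, Bd'' = {bias:.4f}")   # ~0.4578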

You may also be familiar with β, a parametric measure of bias, but there does not seem to be anything to recommend it over Bd″, which makes fewer assumptions (see Donaldson 1992 and citations therein).

Cohen’s κ

Cohen’s κ (“kappa”) is a statistical measure of interannotator agreement which works on 2×2 tables. Unlike the other measures we have reviewed so far, it is adjusted for the amount of agreement that would occur by chance. κ is computed from two terms. The first, P(a), is the observed probability of agreement, computed with the same formula as accuracy. The second, P(e), is the probability of agreement due to chance. Let Px and Py be the probability of a “yes” or “hit” answer from annotators x and y, respectively. Then, P(e) is

Px · Py + (1 – Px) · (1 – Py)

and κ is then given by

[P(a) – P(e)] / [1 – P(e)].

For the previous 2×2 table, κ = .5947; but what does this mean? κ is usually interpreted with reference to conventional, but entirely arbitrary, guidelines. One of the best known of these is due to Landis and Koch (1977), who propose 0–0.20 as “slight”, 0.21–0.40 as “fair”, 0.41–0.60 as “moderate”, 0.61–0.80 as “substantial”, and 0.81–1 as “almost perfect” agreement. κ has a known statistical distribution, so it is also possible to test the null hypothesis that the observed agreement is entirely due to chance. This test is rarely performed or reported, however, as the null hypothesis is exceptionally unlikely to be true in real-world annotation scenarios.
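For the curious, here is a short sketch reproducing that value from the 2×2 table above (the variable names are my own):

    # Cohen's kappa for the 2x2 table used above (TP = 10, FP = 2, FN = 5, TN = 20).
    TP, FP, FN, TN = 10, 2, 5, 20
    n = TP + FP + FN + TN

    p_a = (TP + TN) / n                        # observed agreement (accuracy), ~0.8108
    p_x = (TP + FP) / n                        # annotator x's "hit" rate, 12/37
    p_y = (TP + FN) / n                        # annotator y's "hit" rate, 15/37
    p_e = p_x * p_y + (1 - p_x) * (1 - p_y)    # chance agreement, ~0.5332
    kappa = (p_a - p_e) / (1 - p_e)

    print(f"kappa = {kappa:.4f}")              # 0.5947: "moderate" per Landis & Koch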

(h/t: Steven Bedrick.)

References

A. Agresti. 2002. Categorical data analysis. Hoboken, NJ: Wiley.
W. Donaldson. 1992. Measuring recognition memory. Journal of Experimental Psychology: General 121(3): 275-277.
J.R. Landis & G.G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics 33(1): 159-174.
