Evaluations from the past

In a literature review, speech and language processing specialists often feel tempted to report evaluation metrics like accuracy, F-score, or word error rate for the systems they survey. In my opinion, this is only informative if the prior and present work use the exact same data set(s) for evaluation. (Such results should probably be presented in a table alongside results from the present work, not in the body of the literature review.) If the prior systems were instead tested on some proprietary data set, an obsolete corpus, or a data set the authors of the present work have declined to evaluate on, this information is inactionable. Authors should omit it, and reviewers and editors should insist that it be omitted.

It is also clear to me that these numbers are rarely meaningful as measures of how difficult a task is “in general”. To take an example from an unnamed 2019 NAACL paper (one guilty of the sin described above), reported word error rates on a single task in a single language range from 9.1% to 23.61% (note also the mixed precision). What could we possibly conclude from such an enormous spread of results across different data sets?

A minimalist project design for NLP

Let’s say you want to build a new tagger, a new named entity recognizer, a new dependency parser, or whatever. Or perhaps you just want to see how your coreference resolution engine performs on your new database of anime reviews. So how should you structure your project? Here’s my minimalist solution.

There are two principles that guide my design. The first one is modularity. Some of these components will get run many times, some won’t. If you’re doing model comparison—and you should be doing model comparison—some components will get swapped out with someone else’s code. This sort of thing is a major lift unless you opt for modularity. The second principle is filesystem state. The filesystem is your friend. If your embedding table eats up all your RAM and you have to restart, the filesystem will be in roughly the same state as when you left. The filesystem allows you to organize things into directories and subdirectories, and give the pieces informative names; I like to record information about datasets and hyperparameter values in my file and directory names. So without further ado, here are the recommended scripts or applications to create when you’re starting off on a new project.

  1. split takes the full dataset and a random seed (which you should store for later) as input. The script reads the data in, randomly shuffles it, and then splits it into an 80% training set, a 10% development set, and a 10% test (i.e., evaluation) set, which it then outputs; a sketch of such a script appears below. If you’re comparing to prior work that used a “standard split” you may want to have a separate script that generates that too, but I strongly recommend using randomly generated splits.
  2. train takes the training set as input and outputs a model file or directory. If you’re automating hyperparameter tuning, you will also want to provide the development set as input; if not, you will probably want to either add a bunch of flags to control the hyperparameters or allow the user to pass some kind of model configuration file (I like YAML for this).
  3. apply takes as input the model file(s) produced in (2) and the test set, and applies the model to the data, outputting a new hypothesized test data set (i.e., the model’s predictions). One open question is whether this ought to take only unlabeled data or should overwrite the existing labels: it depends.
  4. evaluate takes as input the gold test set and the hypothesized test data set generated in (3) and outputs the evaluation results (as text or in some structured data format—sometimes YAML is a good choice, other times TSV files will do). I recommend you test this with a small amount of data first.

That’s all there is to it. When you begin doing model comparison you may find yourself swapping out (2-3) for somebody else’s code, but make sure to still stick to the same evaluation script.
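
As an illustration of (1), here is a minimal sketch of what a split script might look like in R. It assumes a tab-separated data file with one example per line; the file names and the command-line interface are simply my choices.

# split.R: read a TSV, shuffle it with a fixed seed, and write out
# 80% training / 10% development / 10% test splits.
# Usage: Rscript split.R data.tsv 1985
args <- commandArgs(trailingOnly = TRUE)
data_path <- args[1]
seed <- as.integer(args[2])   # store this seed so the split can be reproduced later

data <- read.delim(data_path, header = FALSE, stringsAsFactors = FALSE)
set.seed(seed)
data <- data[sample(nrow(data)), , drop = FALSE]   # random shuffle

n <- nrow(data)
train_end <- floor(.8 * n)
dev_end <- floor(.9 * n)
write_split <- function(rows, path) {
  write.table(data[rows, , drop = FALSE], path, sep = "\t", quote = FALSE,
              row.names = FALSE, col.names = FALSE)
}
write_split(1:train_end, "train.tsv")
write_split((train_end + 1):dev_end, "dev.tsv")
write_split((dev_end + 1):n, "test.tsv")

The other three components can follow the same pattern, reading from and writing to the filesystem, so that each stage can be rerun or swapped out without touching the rest.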

A tutorial on contingency tables

Many results in science and medicine can be compactly represented as a table containing the co-occurrence frequencies of two or more discrete random variables. This data structure is called a contingency table (a name suggested by Karl Pearson in 1904). This tutorial will cover descriptive and inferential statistics that can be used on the simplest form of contingency table, in which both variables are binary (and thus the table is 2×2).

Let’s begin by taking a look at a real-world example: graduate school admissions at a single department at UC Berkeley in 1973. (This is part of a famous data set which may be of interest to the reader.) Our dependent variable is “admitted” or “rejected”, and we’ll use applicant gender as an independent variable.

         Admitted   Rejected
Male          120        205
Female        202        391

I can scarcely look at this table without seeing the inevitable question: are admissions in this department gender biased?

Odds ratios

37% (= 120 / 325) of the male applicants were admitted, and 34% (= 202 / 593) of the female applicants were. Is that 3% difference a meaningful one? It is tempting to focus on the “3%”, but the researcher should absolutely avoid this temptation. The magnitude of the difference between admission rates in the two groups (defined by the independent variable) is very sensitive to the base rate, in this case the overall admission rate. Intuitively, if 2% of males were admitted and only 1% of females, we would definitely consider the possibility that admissions are gender-biased: we would estimate that males are twice as likely to be admitted as females. But we would be much less likely to say there is an admissions bias if those percentages were 98% and 99%. Yet in both scenarios the admission rates differ by exactly one percentage point.

A better way to quantify the effect of gender on admissions—a method that is insensitive to the overall admission rate—is the odds ratio. This name is practically the definition if you are familiar with the notion of odds. The odds of some event occurring with probability P are simply P / (1 – P). In our example above, the odds of admission are 0.5854 (= 120 / 205) for a male applicant and 0.5166 (= 202 / 391) for a female applicant. The ratio of these two is 1.1332 (= 0.5854 / 0.5166). As this ratio is greater than one, we say that maleness was associated with admission. This is not enough to establish bias: it simply means that males were somewhat more likely to be admitted than females.
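
The arithmetic is easy to check in R; the matrix layout below mirrors the table above, and the name adm is simply my label for it.

# The admissions table: genders as rows, outcomes as columns.
adm <- matrix(c(120, 205,
                202, 391),
              nrow = 2, byrow = TRUE,
              dimnames = list(c("Male", "Female"), c("Admitted", "Rejected")))
odds <- adm[, "Admitted"] / adm[, "Rejected"]   # 0.5854 (male), 0.5166 (female)
odds["Male"] / odds["Female"]                   # odds ratio: roughly 1.13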

Tests for association

We can now return to the original question: is this table likely to have arisen if there is in fact no gender bias in admissions? Pearson’s chi-squared test estimates the probability of the observed contingency table under the null hypothesis that there is no association between x and y. We reject the null hypothesis that there is no association when this probability is sufficiently small (often at P < .05). For this table, χ2 = 0.6332 (computed with Yates’ continuity correction), and the probability of the data under the null hypothesis (no association between gender and admission rate) is P = 0.4262. So we’d probably say the observed difference in admission rates is not sufficient to establish that females were less likely to be admitted than males in this department.

The chi-squared test for contingency tables depends on an approximation which is asymptotically valid but inadequate for small samples; a popular (albeit arbitrary) rule of thumb is that a sample is “small” if any of the four cells has an expected count of less than 5. The best solution is to use an alternative, Fisher’s exact test; as the name suggests, it provides an exact p-value. Rather than working with the χ2 statistic, Fisher’s test takes as its null hypothesis that the true (population) odds ratio is equal to 1.
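
Both tests are available in base R. Here is a quick sketch, reusing the adm matrix from above; note that chisq.test applies Yates’ continuity correction to 2×2 tables by default, which reproduces the figures quoted above.

chisq.test(adm)    # X-squared is roughly 0.633, df = 1, p is roughly 0.426
fisher.test(adm)   # exact p-value, plus a point estimate of the odds ratio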

Accuracy, precision, and recall

In the above example, we attempted to measure the association of two random variables which represented different constructs (i.e., admission status and gender). Contingency tables can also be used to look at random variables which are in some sense imperfect measures of the same underlying construct. In a machine learning context, one variable might represent the predictions of a binary classifier, and the other the labels taken from the “oracle”, the trusted data source the classifier is intended to approximate. Such tables are sometimes known as confusion matrices. Convention holds that one of the two outcomes should be labeled (arbitrarily if necessary) as “hit” and the other as “miss”—often the “hit” is the one which requires further attention whereas the “miss” can be ignored—and that the following labels be assigned to the four cells of the confusion matrix:

Prediction / Oracle   Hit                    Miss
Hit                   True positive (TP)     False positive (FP)
Miss                  False negative (FN)    True negative (TN)

The prediction labels are on the rows, corresponding to the independent variable—gender—in the Berkeley example, and the oracle labels are on the columns, corresponding to the dependent variable, admission status.

When both variables of the 2×2 table measure the same construct, we start with the assumption that the two random variables are associated, and instead measure agreement. The simplest measure of agreement is accuracy, which is the probability that an observation will be correctly classified.

accuracy = (TP + TN) / (TP + FP + FN + TN)

Accuracy is not always the most informative measure, for the same reason that differences in probabilities were not informative above: accuracy neglects the base rate (or prevalence). Consider, for example, the plight of the much-maligned Transportation Security Administration (TSA). Very, very few airline passengers attempt to commit a terrorist attack during their flight. Since 9/11, there have only been two documented attempted terrorist attacks by passengers on commercial airlines, one by Richard Reid (the “shoe bomber”) and one by Umar Farouk Abdulmutallab (the “underwear bomber”), and in both cases these attempts were thwarted by some combination of attentive fellow passengers and incompetent attackers, not by the TSA’s security theater. There are approximately 650,000,000 passenger-flights per year in the US, so according to my back-of-envelope calculation, there have been around 7 billion passenger-flights since 9/11. In the meantime, the TSA could have achieved sky-high accuracy simply by making no arrests at all. (I have, of course, ignored the possibility that security theater serves as a deterrent to terrorist activity.) A corollary is that the TSA faces what is called the false positive paradox: a false positive (say, detaining a law-abiding citizen) is much more likely than a true positive (catching a terrorist). The TSA isn’t alone: a famous paper found that few physicians used the base rate (“prevalence”) when estimating the likelihood that a patient has a particular disease given a positive test result.
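
To put rough numbers on the false positive paradox, here is a back-of-envelope sketch in R; the prevalence, sensitivity, and specificity below are invented for illustration and are not actual TSA figures.

# Hypothetical screening scenario: even a very accurate screen yields mostly
# false positives when the condition is extremely rare.
prevalence <- 1 / 1e6     # one true "hit" per million passenger-flights (invented)
sensitivity <- 0.99       # P(flagged | hit)        (invented)
specificity <- 0.999      # P(not flagged | miss)   (invented)
p_flagged <- sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
sensitivity * prevalence / p_flagged   # P(hit | flagged): about 0.001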

To better account for the role of base rate, we can break accuracy down into its constituent parts. The best known of these measures is precision (or positive predictive value), which is defined as the probability that a predicted hit is correct.

precision = TP / (TP + FP)

Precision isn’t completely free of the base rate problem, however; it fails to penalize false negatives. For this, we turn to recall (or sensitivity, or true positive rate), which is the probability that a true hit is correctly discovered.

recall = TP / (TP + FN)

It is difficult to improve precision without sacrificing recall, or vice versa. Consider, for example, an information retrieval (IR) application, which takes natural language queries as input and attempts to return all documents relevant to the query. Internally, the IR system ranks all documents for relevance to the query, then returns the top n. A document which is relevant and returned by the system is a true positive, a document which is irrelevant but returned by the system is a false positive, and a document which is relevant but not returned by the system is a false negative (we’ll ignore true negatives for the time being). With this system, we can achieve perfect recall by returning all documents, no matter what the query is, though the precision will be very poor. It is often helpful for n, the number of documents retrieved, to vary as a function of the query; in a sports news database, for instance, there are simply more documents about the New York Yankees than about the congenitally mediocre St. Louis Blues. We can maximize precision by reducing the average n across queries, but this will also reduce recall, since there will be more false negatives.
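
The tradeoff is easy to see with a toy ranked list; the relevance judgments below are invented, and I assume the collection contains six relevant documents in total.

# Precision and recall as a function of n, the number of documents returned.
relevant <- c(TRUE, TRUE, FALSE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE)
n_relevant <- 6                                    # relevant documents in the collection
precision_at_n <- cumsum(relevant) / seq_along(relevant)
recall_at_n <- cumsum(relevant) / n_relevant
data.frame(n = seq_along(relevant), precision = precision_at_n, recall = recall_at_n)

As n grows, recall can only rise (or stay flat), while precision tends to fall once irrelevant documents start to dominate.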

To quantify the tradeoff between precision and recall, it is conventional to use the harmonic mean of precision and recall.

F1 = (2 · precision · recall) / (precision + recall)

This measure is also known as the F-score (or F-measure), though it is properly called F1, since an F-score need not weigh precision and recall equally. In many applications, however, the real-world costs of false positives and false negatives are not equivalent. In the context of screening for serious illness, a false positive would simply lead to further testing, whereas a false negative could be fatal; consequently, recall is more important than precision. On the other hand, when the resources necessary to derive value from true positives are limited (such as in fraud detection), false negatives are considered more acceptable than false positives, and so precision is ranked above recall.

Another thing to note about F1: the harmonic mean of two positive numbers is always closer to the smaller of the two. So, if you want to maximize F1, the best place to start is to increase whichever of the two terms (precision and recall) is smaller.

To make this all a bit more concrete, consider the following 2×2 table.

Prediction / Oracle   Hit   Miss
Hit                    10      2
Miss                    5     20

We can see that false negatives (5) are somewhat more common than false positives (2), so we could have predicted that precision (0.8333) would be somewhat greater than recall (0.6667), and that F1 (0.7407) would be somewhat closer to the latter.
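
These figures are easy to reproduce; here is a quick sketch in R using the cell counts above.

tp <- 10; fp <- 2; fn <- 5; tn <- 20
accuracy <- (tp + tn) / (tp + fp + fn + tn)          # 0.8108
precision <- tp / (tp + fp)                          # 0.8333
recall <- tp / (tp + fn)                             # 0.6667
f1 <- 2 * precision * recall / (precision + recall)  # 0.7407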

This of course does not exhaust the space of possible summary statistics of a 2×2 confusion matrix: see Wikipedia for more.

Response bias

It is sometimes useful to directly quantify predictor bias, which can be thought of as a signed measure of the degree to which the prediction system’s base rate differs from the true base rate. One conventional measure of bias is B″D, which is a function of recall and the false positive rate (also known as the false alarm rate, FAR), defined as follows.

FAR = FP / (TN + FP)

B″D has a rather unwieldy formula; it is

B″D = [(recall · (1 – recall)) – (FAR · (1 – FAR))] / [(recall · (1 – recall)) + (FAR · (1 – FAR))]

when recall ≥ FAR, and

B″D = [(FAR · (1 – FAR)) – (recall · (1 – recall))] / [(recall · (1 – recall)) + (FAR · (1 – FAR))]

otherwise (i.e., when FAR > recall). Positive values indicate a conservative bias (the system predicts “hit” less often than the true base rate would suggest) and negative values a liberal bias (“hit” is predicted more often).
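
Applied to the confusion matrix from the earlier example, here is a sketch of this computation in R (the variable names are my own).

tp <- 10; fp <- 2; fn <- 5; tn <- 20
recall <- tp / (tp + fn)           # 0.6667
far <- fp / (tn + fp)              # 0.0909
r_term <- recall * (1 - recall)
f_term <- far * (1 - far)
num <- if (recall >= far) r_term - f_term else f_term - r_term
num / (r_term + f_term)            # B''_D: about 0.46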

You may also be familiar with β, a parametric measure of bias, but there does not seem to be anything to recommend it over B″D, which makes fewer assumptions (see Donaldson 1992 and citations therein).

Cohen’s κ

Cohen’s κ (“kappa”) is a statistical measure of interannotator agreement for 2×2 tables. Unlike the other measures we have reviewed so far, it is adjusted for the amount of agreement that would occur by chance. κ is computed from two terms. The first, P(a), is the observed probability of agreement, which has the same formula as accuracy. The second, P(e), is the probability of agreement due to chance. Let Px and Py be the probability of a “yes” or “hit” answer from annotators x and y, respectively. Then P(e) is

Px Py + (1 – Px) (1 – Py)

and κ is then given by

[P(a) – P(e)] / [1 – P(e)] .

For the previous 2×2 table, κ = .5947; but what does this mean? κ is usually interpreted with reference to conventional—but entirely arbitrary—guidelines. One of the best known of these is due to Landis and Koch (1977), who propose 0–0.20 as “slight”, 0.21–0.40 as “fair”, 0.41–0.60 as “moderate”, 0.61–0.80 as “substantial”, and 0.81–1 as “almost perfect” agreement. κ has a known sampling distribution, so it is also possible to test the null hypothesis that the observed agreement is entirely due to chance. This test is rarely performed or reported, however, as the null hypothesis is exceptionally unlikely to be true in real-world annotation scenarios.
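
Here is a sketch of that computation in R for the same table, following the formulas above (again, the variable names are mine).

tab <- matrix(c(10, 2,
                5, 20), nrow = 2, byrow = TRUE)
n <- sum(tab)
p_a <- sum(diag(tab)) / n                  # observed agreement: 0.8108
p_x <- sum(tab[1, ]) / n                   # P(hit) for annotator x (rows)
p_y <- sum(tab[, 1]) / n                   # P(hit) for annotator y (columns)
p_e <- p_x * p_y + (1 - p_x) * (1 - p_y)   # chance agreement: 0.5332
(p_a - p_e) / (1 - p_e)                    # kappa: 0.5947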

(h/t: Steven Bedrick.)

References

A. Agresti. 2002. Categorical data analysis. Hoboken, NJ: Wiley.
W. Donaldson. 1992. Measuring recognition memory. Journal of Experimental Psychology: General 121(3): 275–277.
J. R. Landis & G. G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics 33(1): 159–174.

LOESS hyperparameters without tears

LOESS is a classic non-parametric regression technique. One potential issue that arises is that LOESS fits depend on several hyperparameters (i.e., parameters set by the experimenter a priori). In this post I’ll take a quick look at how to set these.

At each point in a LOESS curve, the y-value is derived from a local, low-degree weighted polynomial regression. The first hyperparameter is the degree of those local fits. Most users set the degree to 2 (i.e., use local quadratic curves), and with good reason: at degree 0 you’re just computing a local weighted average, at degree 1 you’re fitting local straight lines, and degrees higher than 2 (e.g., cubic) tend not to have much of an effect.

The other hyperparameter is the “span”, which controls the degree of smoothing: roughly, it is the fraction of the sample used in each local fit. A value near 0 uses almost no context and a value of 1 uses the entire sample (so the result will be similar to fitting a single quadratic function to the data). The choice of this value has a major effect on the quality of the fit obtained:

[Figure: the same randomly generated data fit with a well-chosen span (“good”), a span that is too large (“bad”), and one that is too small (“ugly”).]

For the randomly generated data here, large values of the span parameter (“bad”) produce a LOESS which fails to follow the larger trend, whereas small values (“ugly”) primarily model noise. For this reason alone, the experimenter should probably not be permitted to select the span hyperparameter herself.

Fortunately, there are several objectives that can be used to determine an “optimal” setting for the span parameter. Hurvich et al. (1998) propose a particularly principled objective, namely minimizing AICc, a small-sample correction of the Akaike information criterion. This is what was used to generate the “good” curve above. Here’s how I did it (adapted from a post to R-help).
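
In outline, the approach is to compute AICc for a candidate loess fit and minimize it over the span with optimize. In the sketch below, the helper names aicc.loess and autoloess are mine, and the last line shows usage on R’s built-in cars data.

# A sketch of AICc-based span selection for loess (helper names are mine).
aicc.loess <- function(fit) {
  # AICc for a loess fit, after Hurvich, Simonoff & Tsai (1998).
  stopifnot(inherits(fit, "loess"))
  n <- fit$n
  trace <- fit$trace.hat
  sigma2 <- sum(residuals(fit)^2) / n
  log(sigma2) + 1 + 2 * (trace + 1) / (n - trace - 2)
}

autoloess <- function(fit, span = c(.1, .9)) {
  # Refit, using the span within the given interval that minimizes AICc.
  stopifnot(inherits(fit, "loess"), length(span) == 2)
  f <- function(s) aicc.loess(update(fit, span = s))
  update(fit, span = optimize(f, span)$minimum)
}

# Usage: fit once with the defaults, then let autoloess pick the span.
fit <- autoloess(loess(dist ~ speed, cars))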

There is also an R package, fANCOVA, which apparently includes a function loess.as that automatically determines the span parameter, presumably in a fashion similar to what I’ve done here. I haven’t tried it.

PS to those inclined to care: the origin of the memetic, snarky, academic “X without tears” title is, to my knowledge, J. Eric S. Thompson’s 1972 book Maya Hieroglyphs Without Tears. While I have every reason to believe Thompson was poking fun at his detractors, it’s interesting to note that he turned out to be fabulously wrong about the nature of the hieroglyphs.

Making high-quality graphics in R

There are a lot of different ways to make an R graph for TeX; this is my workflow.

In R

I use cairo_pdf to write a graph to disk. This command takes arguments for image size and for font size and face. If you’re on a Mac, you will need to install X11.

Image size

I always specify graph size by hand, in inches. For manuscripts and handouts, I usually set the width to be the printable width. If you’re using 1″ margins, that’s 6.5″. Then, I adjust height until a pleasing form emerges.

Fonts

I match the font face of the manuscript (whatever I’m using) and the graph labels by passing the font name as the argument to family. This matters most if you’re writing a handout, and matters less if you’re sending it to, say, Oxford University Press, who will redo your text anyway. I found out the hard way that the family keyword argument is absent in older versions of R, so you may need to upgrade. By default, image fonts are 12 pt; this is generally fine, but can be adjusted with the pointsize argument.

Graphing

This is a no-brainer: use ggplot2.

All together now

library(ggplot2)
cairo_pdf('mygraph.pdf', width=6.5, height=4, family='Times New Roman')
print(qplot(X, Y, data=dat))  # print() is needed when running non-interactively
dev.off()

In TeX

Add \usepackage{graphicx} to your preamble, if it’s not already there. In the body, use \includegraphics{mygraph.pdf}.