Major projects at the Computational Linguistics lab

[The following is geared towards our incoming students. I’m just using the blog as an easy publishing mechanism.]

The following are some major projects ongoing in the GC Computational Linguistics Lab.

Many phonologists believe that phonotactic knowledge is independent of knowledge of phonological alternations. In my dissertation I evaluated computational models of autonomous phonotactic knowledge as predictors of speakers’ judgments of wordlikeness, and I found that these fail to consistently outperform simple baselines. In part, these models fail because they predict gradience that is poorly correlated with human judgments. However, these conclusions were tentative because of the poor quality of the available data, which were collected with little attention paid to experimental design or the choice of stimuli. With funding from the National Science Foundation, and in collaboration with professors Karthik Durvasula at Michigan State University and Jimin Kahng at the University of Mississippi, we are building an open-source “megastudy” of human wordlikeness judgments and performing computational modeling of the resulting data.

Speech recognizers and synthesizers are, essentially, engines for recognizing or synthesizing sequences of phonemes. It is therefore necessary to transform text into phoneme sequences. Such transformations are challenging insofar as they require linguistic expertise—and language-specific knowledge—and are not always amenable to generic machine learning techniques. We are engaged in several projects involving these mappings. The lab maintains WikiPron (Lee et al. 2020), software and databases for building multilingual pronunciation dictionaries, and has organized two SIGMORPHON shared tasks on multilingual grapheme-to-phoneme conversion (Gorman et al. 2020, Ashby et al. 2021). And with funding from the CUNY Professional Staff Congress, PhD student Amal Aissaoui is building diacritization engines for Arabic and Latin, engines which supply missing pronunciation information for these scripts.
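
To give a flavor of what these mappings look like in practice, here is a minimal sketch that loads a WikiPron-style pronunciation dictionary (a two-column TSV file of orthographic forms and space-separated phone sequences) into a Python dictionary. The filename and the lookup word are hypothetical placeholders, not references to any particular data file.

import csv

def load_lexicon(path):
    """Loads a WikiPron-style TSV (word <tab> space-separated phones)."""
    lexicon = {}
    with open(path, "r", encoding="utf-8") as source:
        for word, pron in csv.reader(source, delimiter="\t"):
            lexicon[word] = pron.split()
    return lexicon

# Hypothetical filename; WikiPron data files follow this general format.
lexicon = load_lexicon("spa_latn_broad.tsv")
print(lexicon.get("perro"))  # e.g., ['p', 'e', 'r', 'o']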

Morphological generation systems use machine learning to predict the inflected forms of words. In 2019 I led a team of researchers in an error analysis of the top two systems in the CoNLL-SIGMORPHON 2017 shared task on morphological generation (Gorman et al. 2019). We found that the top models struggled with inflectional patterns which are sensitive to lexeme-inherent morphosyntactic features like gender, animacy, and aspect, features which are not provided in the task data. For instance, the top models often inflect Russian perfective verbs as if they were imperfective, or Polish inanimate nouns as if they were animate. Finally, we found that the models struggle with abstract morphophonological patterns which cannot be inferred from the citation form alone. For instance, the top models struggle to predict whether or not a Spanish verb will undergo diphthongization under stress (e.g., negar ~ niego ‘to deny ~ I deny’ vs. pegar ~ pego ‘to stick ~ I stick’). In collaboration with professor Katharina Kann and PhD student Adam Wiemerslage at the University of Colorado, Boulder, we are developing an open-source “challenge set” for morphological generation, one that targets complex inflectional patterns in a diverse sample of 10-20 languages. This challenge set will act as a benchmark for neural network models of inflection, and will allow us to further study inherent features and abstract morphophonological patterns. In designing these challenge sets we have targeted a wide variety of morphological processes, including reduplication and templatic formation in addition to affixation and stem change. MA students Kristysha Chan, Mariana Graterol, and M. Elizabeth Garza, and PhD student Selin Alkan have all contributed to the development of this challenge set thus far.

Inflectional defectivity is the poorly-understood dark twin of productivity. With funding from the CUNY Professional Staff Congress, Emily Charde (MA 2020) is engaged in a computational study of defectivity in Greek nouns and Russian verbs.

Linguistics’ contribution to speech & language processing

How does linguistics contribute to speech & language processing? While there exist some “linguist eliminationists”, who wish to process speech audio or text “from scratch” without intermediate linguistic representations, it is generally recognized that linguistic representations are the end goal of many processing “tasks”. Of course, some tasks involve poorly-defined, or ill-posed, end-state representations—the detection of hate speech and named entities, neither of which is particularly well-defined, linguistically or otherwise, come to mind—but these are driven by the apparent business value to be extracted rather than by any serious goal of understanding speech or text.

The standard example for this kind of argument is syntax. It may be that syntactic representations are not as useful for textual understanding as was once anticipated, and useful features for downstream machine learning can apparently be induced using far simpler approaches, like the masked language modeling task used for pre-training in many neural models. But it’s not as if a terrorist cell of rogue linguists locked NLP researchers in their offices until they developed the field of natural language parsing. NLP researchers decided, of their own volition, to spend the last thirty years building models which could recover natural language syntax, and ultimately got pretty good at it, probably, I suspect, to the point where unresolved ambiguities mostly hinge on world knowledge that is rarely if ever made explicit.

Let us consider another example, less widely discussed: the phoneme. The phoneme was discovered in the late 19th century by Baudouin de Courtenay and Kruszewski. It has been around a very long time. In the century and a half since it emerged from the Polish academy, Poland itself has been a congress, a kingdom, a military dictatorship, and a republic (three times), and annexed by the Russian empire, the German Reich, and the Soviet Union. The phoneme is probably here to stay. The phoneme is, by any reasonable account, one of the most successful scientific abstractions in the history of science.

It is no surprise, then, that the phoneme plays a major role in speech technologies. Not only did the first speech recognizers and synthesizers make explicit use of phonemic representations (as well as notions like allophones); so did the next five decades’ worth of recognizers and synthesizers. Conventional recognizers and synthesizers require large pronunciation lexicons mapping between orthographic and phonemic form, and, as they get closer to speech, convert these “context-independent” representations of phonemic sequences into “context-dependent” representations which can account for allophony and local coarticulation, exactly as any linguist would expect. It is only in the last few years that it has even become possible to build a reasonably effective recognizer or synthesizer which lacks an explicit phonemic level of representation. Such models instead use clever tricks and enormous amounts of data to induce implicit phonemic representations. We have every reason to suspect that these implicit representations are quite similar to the explicit ones linguists would posit. For one, these implicit representations are keyed to orthographic characters, and as I wrote a month ago, “the linguistic analysis underlying a writing system may be quite naïve but may also encode sophisticated phonemic and/or morphemic insights.” If anything, that’s too weak: most writing systems I’m aware of are either a precise phonemic analysis (possibly omitting a few details of low functional load, or using digraphs to get around limitations of the alphabet of choice) or a precise morphophonemic analysis (ditto). For Sapir (1925 et seq.) this was key evidence for the existence of phonemes! So whether or not implicit “phonemes” are better than explicit ones, speech technologists have converged on the same rational, mentalistic notions discovered by Polish linguists a century and a half ago.
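
To make the “context-dependent” expansion mentioned above concrete, here is a toy sketch (mine, not drawn from any particular toolkit) which rewrites a context-independent phoneme sequence as the triphone units used by conventional HMM recognizers.

def triphones(phones):
    """Expands a context-independent phone sequence into triphones."""
    padded = ["sil"] + list(phones) + ["sil"]
    return [f"{l}-{c}+{r}" for l, c, r in zip(padded, padded[1:], padded[2:])]

print(triphones(["k", "ae", "t"]))  # 'cat'
# ['sil-k+ae', 'k-ae+t', 'ae-t+sil']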

So it is surprising to me that even those schooled in the art of speech processing view the contribution of linguistics to the field in a somewhat negative light. For instance, Paul Taylor, the founder of the TTS firm Phonetic Arts, published a Cambridge University Press textbook on TTS methods in 2009; while it is by now quite out of date, there is no more recent work of comparable breadth. Taylor spends the first five hundred (!) pages or so talking about linguistic phenomena like phonemes, allophones, prosodic phrases, and pitch accents—at the time, the state of the art in synthesis made use of explicit phonological representations—so it is genuinely a shock to me that Taylor chose to close the book with a chapter (Taylor 2009: ch. 18) about the irrelevance of linguistics. Here are a few choice quotes, with my commentary.

It is widely acknowledged that researchers in the field of speech technology and linguistics do not in general work together. (p. 533)

It may be “acknowledged”, but I don’t think it has ever been true. The number of linguists and linguistically-trained engineers working on FAANG speech products every day is huge. (Modern corporate “AI” is to a great degree just other people, mostly contractors in the Global South.) Taylor continues:

The first stated reason for this gap is the “aeroplanes don’t flap their wings” argument. The implication of this statement is that, even if we had a complete knowledge of how human language worked, it would not help us greatly because we are trying to develop these processes in machines, which have a fundamentally different architecture. (p. 533)

I do not expect linguistics to provide deep insights about how to build TTS systems, but it clearly identified the relevant representational units for building such systems many decades ahead of time, just as mechanics provided the basis for mechanical engineering. This was true of Kempelen’s speaking machine (which predates phonemic theory, and so had to discover something like it) and Dudley’s Voder, as well as speech synthesizers in the digital age. So I guess I kind of think that speech synthesizers do flap their wings: parametric, unit selection, hybrid, and neural synthesizers are all big fat phoneme-realization machines. As is standard practice in the physical sciences, the simple elementary particles of phonological theory—phonemes, and perhaps features—were discovered quite early on, but the study of their ontology has taken up the intervening decades. And unlike the physical sciences, we cognitive scientists must some day also understand their epistemology (what Chomsky calls “Plato’s problem”) and ultimately their evolutionary history (“Darwin’s problem”) too. Taylor, as an engineer, need not worry himself about these further studies, but I think he is being wildly uncharitable about the nature of what he’s studying, or about the business value of having a well-defined hypothesis space of representations for his team to engineer within.

Taylor’s argument wouldn’t be complete without a caricature of the generative enterprise:

The most-famous camp of all is the Chomskian [sic] camp, started of course by Noam Chomsky, which advocates a very particular approach. Here data are not used in any explicit sense, quantitative experiments are not performed and little stress is put on explicit description of the theories advocated. (p. 534)

This is nonsense. Linguistic examples are data, in some cases better data than results from corpora or behavioral studies, as the work of Sprouse and colleagues has shown. No era of generativism was actively hostile to behavioral results: as early as the ’60s, generativist-aligned psycholinguists were experimentally testing the derivational theory of complexity and studying morphological decomposition in the lab. And I simply have never found that generativist theorizing lacks for formal explicitness. In phonology, for instance, the major alternatives to generativist thinking are exemplar theory—which isn’t even explicit enough to be wrong—and a sort of neo-connectionism—which ought not to work at all, given extensive proof-theoretic studies of formal learnability and the formal properties of stochastic gradient descent and backpropagation. Taylor goes on to suggest that the “curse of dimensionality” and issues of generalizability prevent the application of linguistic theory. Once again, though, the things we’re trying to represent are linguistic notions: machine learning using “features” or “phonemes”, explicit or implicit, is still linguistics.

Taylor concludes with some future predictions about how he hopes TTS research will evolve. His first is that textual analysis techniques from NLP will become increasingly important. Here the future has been kind to him: they are, but as the work of Sproat and colleagues has shown, we remain quite dependent on linguistic expertise—of a rather different and less abstract sort than the notion of the phoneme—to develop these systems.

References

Sapir, E. 1925. Sound patterns in language. Language 1:37-51.
Taylor, P. 2009. Text-to-Speech Synthesis. Cambridge University Press.

Magic and productivity: Spanish metaphony

In Gorman & Yang 2019 (henceforth GY), we provide an analysis of metaphonic patterns in Spanish. This is just one of four or five case studies in that paper, and it is a bit too brief to go into some interesting representational issues. In this post I’ll try to fill in some of the missing details as I understand them, with the caveat that Charles does not necessarily endorse any of my proposals here.

The tolerance principle approach to productivity is somewhat unusual in that it is not tied to any particular theory of rules or representations, so long as such theories provide a way to encode competing rules applying in order of decreasing specificity (Pāṇini’s principle, or the elsewhere principle). Yet any particular tolerance analysis requires us to commit to a specific formal analysis of the phenomenon⁠—the relevant rules and the representations over which they operate—so that we know what to count. The way in which I apply the tolerance principle also presumes that productivity (e.g., as witnessed by child overregularization errors) or its absence (as witnessed by inflectional gaps) is a first-class empirical observation, and that any explanatorily adequate tolerance analysis ought to account for it. What this means to me is that the facts of productivity can adjudicate between different formal analyses, as the following example shows.
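
For concreteness, the tolerance principle itself is simple to state: a rule applying to N lexical items, of which e are exceptions, is predicted to be productive just in case e ≤ θ_N = N / ln N. A few lines of Python suffice to check any proposed analysis; the counts below are made up for illustration.

from math import log

def is_productive(n, e):
    """Tolerance principle: productive iff e <= n / ln(n)."""
    return e <= n / log(n)

# Made-up counts for illustration.
print(is_productive(100, 20))  # True: theta_100 is about 21.7
print(is_productive(100, 30))  # False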

The facts are these. A large percentage of Spanish verbs, all of which have a surface mid vowel (e or o) in the infinitive, exhibit alternations targeting the nucleus of the final syllable of the stem. In all three conjugations, one can find verbs in which this surface mid vowel diphthongizes to ie [je] or ue [we], respectively.1 Furthermore, in the third conjugation, there is a class of verbs in which the e in the final syllable of certain forms alternates with an i.2

The issue, of course, is that there are verbs which are almost identical to the diphthongizing or e ~ i stems but which do not undergo these alternations (GY:178f.). One can of course deny that magic is operating here, but this does not seem workable.3 We therefore need to identify the type of magic: the rules and representations involved.

There is some reason to think that conjugation class is relevant to these verb stem alternations. First, Mayol (2007) analyzes verb stem errors in a sample of six children acquiring Spanish, a corpus of roughly 2,000 verb tokens. Nearly all errors in this corpus involve underapplication of diphthongization to diphthongizing verbs of the first and second conjugations; errors in the third conjugation are extremely rare. Second, the e ~ i alternations are limited to the third conjugation. As Harris (1969:111) points out, the e form surfaces only when the stem is followed by an i in the first syllable of the desinence. This suggests that the alternation is a lowering rather than a raising one, and explains why this pattern is confined to the third (-i-) conjugation. Finally, there are about a dozen Spanish verbs, all of the third conjugation, which are defective in exactly those inflectional forms—those in which there is stress on the stem, or in which the stem is followed by a desinential /i/ in the following syllable—which would reveal to us whether the stem is diphthongizing or lowering. These three facts seem to be telling us that these alternations are sensitive to conjugation class.

Jim Harris has long argued for an abstract phoneme analysis of Spanish diphthongization. In Harris 1969, diphthongization reflects abstract phonemes, present underlyingly, denoted /E, O/; no featural decomposition is provided, but one could imagine that they are underspecified for some features related to height. Harris (1985) instead supposes that the vowels which undergo diphthongization under stress bear two skeletal “x” slots, one linked and one unlinked, as follows.

o
|
x x

This distinguishes them from ordinary non-alternating mid vowels (which have only one “x”) and from non-alternating diphthongs (which are prelinked to two “x”s). Harris argues that this also provides an explanation for why stress conditions this alternation.

One interesting property of Harris’ account, one which I do not believe has been remarked on before, is that it seems to rule out the idea that diphthongization vs. non-diphthongization is “governed by the grammar”: it is purely a fact of lexical representation, and surface forms follow directly from applying the rules to the abstract phonemic forms. To put it more fancifully, there is no “daemon” inside the phonemic storage unit of the lexicon deciding where the diphthongs or lowering vowels go; such facts are of interest for “evolutionary” theorizing, but are accidents of diachrony.

However, I believe the facts of productivity and the conditioning effects of conjugation support an alternative—and arguably more traditional—analysis, in which diphthongization and lowering are governed by abstract diacritics at the root level, in the form of rule features of the sort proposed by Kisseberth (1970) and Lakoff (1970).

I propose that verbs with a mid vowel in the final syllable of their stem which do not undergo diphthongization, like pegar ‘to stick to’ (e.g., pego ‘I stick to’), are marked [−diph], and those which do undergo diphthongization, like negar ‘to deny’ (niego ‘I deny’), are marked [+diph]; both are assumed to have an /e/ in underlying form. Similarly, I propose that verbs which undergo lowering, like pedir ‘to ask for’ (e.g., pido ‘I ask for’), are specified [+lowering], and non-lowering verbs, like vivir ‘to live’ (vivo ‘I live’), are specified [−lowering]; both have an underlying /i/. Then, the rule of lowering is

Lowering: i -> e / __ C_0 i

or, in prose, an /i/ lowers to /e/ when followed by zero or more consonants and an /i/. I assume a convention of rule application such that rule R can apply only to those /i/s which are part of a root marked [+R]; it is as if there is an implicit [+R] specification on the rule’s target. Therefore, the rule of lowering does not apply to vivir. This rule feature convention is assumed to apply to all phonological rules, including diphthongization.
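
As a toy illustration of this convention (my own sketch, not part of GY’s formalism), the following applies Lowering only to roots lexically specified [+lowering]:

import re

# Hypothetical [lowering] specifications for two roots.
LOWERING = {"pid": True, "viv": False}

def apply_lowering(root, desinence):
    """Lowering (i -> e / __ C0 i) applies only to [+lowering] roots."""
    stem = root + desinence
    if LOWERING.get(root):
        stem = re.sub(r"i(?=[^aeiou]*i)", "e", stem)
    return stem

print(apply_lowering("pid", "imos"))  # 'pedimos' (cf. pido 'I ask for')
print(apply_lowering("viv", "imos"))  # 'vivimos' (no lowering)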

I furthermore propose that [diph] and [lowering] rule features are inserted during the derivation according to GY’s tolerance analysis. For first (-a-) and second (-e-) conjugation verbs, [−diph] is the default and [+diph] is lexically conditioned.

[] -> [+diph] / __ {√neg-, ...}
   -> [-diph] / __

For third (-i-) conjugation verbs, I assume that there is no default specification for either rule feature.

[] -> [+lowering] / __ {√ped-, ...}
[] -> [-lowering] / __ {√viv-, ...}

I have not yet provided formal machinery to limit these generalizations to particular conjugations, but I wish to stay agnostic about morphological theory, and so I assume that any adequate model of the morphophonological interface ought to be able to encode conjugation class-specific generalizations like those above.

I leave open the question as to how roots which fail to satisfy the phonological conditions for lowering (like those which do not contain a final-syllable /i/) or diphthongization (like those which do not contain a final-syllable mid vowel) are specified with respect to the [diph] and [lowering] features. I am inclined to say that they remain underspecified for these features throughout the derivation. However, all that is essential here is that such roots are not in scope for the tolerance computation.

Let us suppose that we wish to encode, synchronically, phonological “trends” in the lexicon with respect to the distribution of diphthongizing and/or lowering verbs, such as Bybee & Pardo’s (1981) claim that e ~ ie diphthongization is facilitated when the vowel is followed by the trill rr. Such observations could be encoded at the point at which rule features are inserted, if desired. It is unclear how a similar effect might be achieved under the abstract phoneme analysis. I remain agnostic on this question, which may ultimately bear on the past tense debate.

In future work (if blogging can be called “work”), it would be interesting to expand the proposal to other cases of morpholexical behavior studied by Kisseberth (1970), Lakoff (1970), and Zonneveld (1978), among others. Yet my proposal does not entail that we draw similar conclusions for all superficially similar case studies. For instance, I am unaware at present of evidence contradicting Rubach’s (2016) arguments that the Polish yers are abstract phonemes.

Endnotes

  1. Let us assume, as does Harris, that the appearance of the [e] in both diphthongs is the result of a default insertion rule applying after diphthongization converts the nucleus to the corresponding glide.
  2. This of course does not exhaust the set of verbal alternations, as there are highly-irregular consonantal and vocalic alternations in a handful of other verbs.
  3. Albright et al. (2001) and Bybee & Pardo (1981) are sometimes understood to have found solid evidence for a “non-magical” analysis, in which the local context in which a stem mid vowel is found is the sole determinant. This is a massive overinterpretation. Bybee & Pardo identify some local contexts which seem to favor diphthongization, and the results of a small nonce word cloze task are consistent with these findings. Albright et al. use a simple computational model to discover some contexts which seem to favor diphthongization, and find that subjects’ ratings of possible nonce words (on a seven-point Likert scale) are correlated with the model’s predictions for diphthongization. Schütze (2005) gives a withering critique of the general nonce word rating approach. Even ignoring this, neither study links nonce word task performance to adult knowledge of, or child acquisition of, actual Spanish words.

References

Albright, A., Andrade, A., and Hayes, B. 2001. Segmental environments of Spanish diphthongization. UCLA Working Papers in Linguistics 7:117-151.
Baković, E., Heinz, J., and Rawski, J. In press. Phonological abstractness in the mental lexicon. In The Oxford Handbook of the Mental Lexicon. Oxford University Press.
Bale, A., and Reiss, C. 2018. Phonology: A Formal Introduction. MIT Press.
Bybee, J., and Pardo, E. 1981. Morphological and lexical conditioning of rules: experimental evidence from Spanish. Linguistics 19:937-968.
Gorman, K., and Yang, C. 2019. When nobody wins. In F. Rainer, F. Gardani, H. C. Luschützky, and W. U. Dressler (ed.), Competition in Inflection and Word Formation, pages 169-193. Springer.
Harris, J. 1969. Spanish Phonology. MIT Press.
Harris, J. 1985. Spanish diphthongisation and stress: a paradox resolved. Phonology Yearbook 2:31-45.
Kisseberth, C. W. 1970. The treatment of exceptions. Papers in Linguistics 2:44-58.
Lakoff, G. 1970. Irregularity in Syntax. Holt, Rinehart and Winston.
Mayol, L. 2007. Acquisition of irregular patterns in Spanish verbal morphology. In Proceedings of the Twelfth ESSLLI Student Session, pages 1-11.
Schütze, C. 2005. Thinking about what we are asking speakers to do. In S. Kepser and M. Reis (ed.), Linguistic Evidence: Empirical, Theoretical, and Computational Perspectives, pages 457-485. Mouton de Gruyter.
Zonneveld, W. 1978. A Formal Theory of Exceptions in Generative Phonology. Peter de Ridder.

Noam on neural networks

I just crashed a Zoom conference in which Noam Chomsky was the discussant. (What I have to say will be heavily paraphrased: I wasn’t taking notes.) One back-and-forth stuck with me. Someone asked Noam what people interested in language and cognition ought to study, other than linguistics itself. He mentioned various biological systems, but said that they probably shouldn’t bother to study neural networks, since these have very little in common with intelligent biological systems (despite their branding as “neural” and “brain-inspired”). He stated that he is grateful for Zoom closed captions (he has some hearing loss), but that one should not conflate them with language understanding. He said, similarly, that he is grateful for snow plows, but one shouldn’t confuse such a useful technology with a theory of the physical world.

For myself, I think they’re not uninteresting devices, and that linguists are uniquely situated to evaluate them—adversarially, I hope—as models of language. I also think they can be viewed as powerful black boxes for studying the limits of domain-general pattern learning. Sometimes we want to ask whether certain linguistic information is actually present in the input, and some of my work (e.g., Gorman et al. 2019) looks at that in some detail. But I do share the intuition that they are not likely to greatly expand our understanding of human language overall.

References

Gorman, K., McCarthy, A. D., Cotterell, R., Vylomova, E., Silfverberg, M., and Markowska, M. 2019. Weird inflects but OK: making sense of morphological generation errors. In Proceedings of the 23rd Conference on Computational Natural Language Learning, pages 140-151.

On “from scratch”

For a variety of historical and sociocultural reasons, nearly all natural language processing (NLP) research involves processing of text, i.e., written documents (Gorman & Sproat 2022). Furthermore, most speech processing research uses written text either as input or output.

A great deal of speech and language processing treats words (however they are understood) as atomic, indivisible units rather than as the “intricately structured objects linguists have long recognized them to be” (Gorman in press). But there has been a recent trend to instead work with individual Unicode codepoints, or even the individual bytes of a Unicode string encoded in UTF-8. When such systems are part of an “end-to-end” neural network, they are sometimes said to work “from scratch”; see, e.g., Gillick et al. 2016 and Li et al. 2019, who both use this exact phrase to describe their contributions. There is an implication that such systems, by bypassing the fraught notion of word, have somehow eliminated the need for linguistic insight altogether.
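
The difference between these two “from scratch” input representations is easy to see in Python; a minimal illustration:

word = "naïve"
print([hex(ord(ch)) for ch in word])  # 5 Unicode codepoints
print(list(word.encode("utf-8")))     # 6 bytes: the 'ï' takes two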

The expression “from scratch” makes an analogy to baking: it is as if we are making angel food cake by sifting flour, superfine sugar, and cream of tartar, rather than using the “just add water and egg whites” mixes from Betty Crocker. But this analogy understates just how much linguistic knowledge can be baked in (or perhaps “sifted in”) to writing systems. Writing systems are essentially a type of linguistic analysis (Sproat 2010), and like any language technology, they necessarily reify the analysis that underlies them.1 The linguistic analysis underlying a writing system may be quite naïve but may also encode sophisticated phonemic and/or morphemic insights. Thus written text, whether expressed as Unicode codepoints or UTF-8 bytes, may have quite a bit of linguistic knowledge sifted and folded in.

A familiar and well-known example of this kind of knowledge comes from English (Gorman in press). In this language, changes in vowel quality triggered by the addition of “level 1” suffixes like -ity are generally not indicated in written form. Thus sane [seɪn] and sanity [sæ.nɪ.ti], for instance, are spelled more similarly than they are pronounced (Chomsky and Halle 1968: 44f.), meaning that this vowel change need not be modeled when working with written text.

Endnotes

  1. The Sumerian and Egyptian scribes were thus history’s first linguists, and history’s first language technologists.

References

Chomsky, N., and Halle, M. 1968. The Sound Pattern of English. Harper & Row.
Gillick, D., Brunk, C., Vinyals, O., and Subramanya, A. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296-1306.
Gorman, K. In press. Computational morphology. In Aronoff, M. and Fudeman, K., What is Morphology? 3rd edition. Blackwell.
Gorman, K., and Sproat, R. 2022. The persistent conflation of writing and language. Paper presented at Grapholinguistics in the 21st Century.
Li, B., Zhang, Y., Sainath, T., Wu, Y., and Chan, W. 2019. Bytes are all you need: end-to-end multilingual speech recognition and synthesis with bytes. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5621-5625.
Sproat, R. 2010. Language, Technology, and Society. Oxford University Press.

The computational revolution in linguistics

(Throughout this post, I have taken pains not to name any names. The beauty of subtweeting and other forms of subposting is that nobody knows for sure you’re the person being discussed unless you volunteer yourself. So, don’t.)

One of the more salient developments in linguistics as a discipline over the last two decades is the way in which computational knowledge has diffused into the field.1 Twenty years ago, there were but a handful of linguistics professors in North America who could perform elaborate corpus analyses, apply machine learning and statistical analysis, or extract acoustic measurements from an audio file. And, while it was in some ways quite robust, speech and language processing at the turn of the century simply did not hold the same importance it does nowadays.

While some professors—including, to their credit, many of my mentors and colleagues—can be commended for having “skilled up” in the intervening years, this knowledge has, I am sad to say, mostly advanced one death (and subsequent tenure line renewal) at a time. This has negative consequences for linguistics students who want to train for, or pivot to, a career in the tech sector, since there are professors who were, in their time, computationally sophisticated, but who lack the skills a rising computational linguist is expected to have mastered. In an era of contracting tenure rolls and other forms of casualization in the academy, this risks pushing out legitimate, albeit staid, lines of linguistic inquiry in favor of areas favored by capitalists.2

Yet I believe that this upskilling has a lot to contribute to linguistics as a discipline. There are many core questions about language use, acquisition, variation, and change which are best answered with a computational simulation that forces us to be explicit about our assumptions, or a corpus study that tells us what people really said, or a statistical analysis that tells us whether our correlations are likely to be meaningful, or even a machine learning system that helps us rapidly label linguistic data.3 It is a boon to our field that linguists of any age can employ these tools when appropriate.

This is not to say that the transition has not been occasionally ugly. First, there are the occasional nasty turf wars over who exactly is a linguist.4 Second, the standards of quality for work in this area must be negotiated and imposed. While a syntax paper in NL&LT from even 30 years ago is easily readable today, the computational methods of even widely-praised papers from 15 or 20 years ago are, frankly, often quite sloppy. I have found it necessary to explain this to students who want to interact with this older work, lest they lower their own methodological standards.

I discern at least a few common sloppy habits in this older computational work, focusing for the moment on computational cognitive models of linguistic behavior.

  1. If a proposed computational model is compared to some “baseline” or older model, this older model is usually an ancient associationist model from psychology. This older model naturally lacks much of the rich linguistic specifications of the proposed model, and naturally it fails to model the data. Deliberately picking a bad baseline is putting one’s finger on the scale.
  2. Comparison of different computational models is usually informal. One should instead use statistical model comparison methods; see the sketch after this list.
  3. The dependent variable for modeling is often derived from poorly-designed human subjects experiments. The subjects in these experiments may be instructed to perform a task they are unlikely to be able to do consciously (i.e., the tasks are cognitively impenetrable). Unjustified assumptions about appropriate scales of measurement may have been made. Finally, the n’s are often needlessly small. Computational cognitive models demand high-quality measures of the behaviors they’re meant to model.
  4. Once the proposed model has been shown better than the baseline, it is reified far beyond what the evidence suggests. Computational cognitive modeling can at most show that certain explicit assumptions are consistent with the observed data: they cannot establish much beyond that.
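
By way of illustration for point 2, here is a minimal paired bootstrap sketch for comparing two models’ per-item correctness on the same test set. This is one reasonable choice among several (McNemar’s test is another), and the data here are fabricated.

import random

def paired_bootstrap(correct_a, correct_b, reps=10_000, seed=0):
    """Returns the proportion of resamples in which model A beats model B."""
    rng = random.Random(seed)
    n = len(correct_a)
    wins = 0
    for _ in range(reps):
        sample = [rng.randrange(n) for _ in range(n)]
        if sum(correct_a[i] - correct_b[i] for i in sample) > 0:
            wins += 1
    return wins / reps

# Fabricated per-item correctness (1 = correct) for two models.
a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
b = [1, 0, 1, 0, 1, 0, 0, 1, 1, 0]
print(paired_bootstrap(a, b))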

The statistician Andrew Gelman writes that scientific discourse sometimes proceeds as if earlier published work has a greater claim to truth than later research that is critical of the original findings (which may or may not itself be published yet).5 Critical interpretation of this older computational work is increasingly called for as our methodological standards continue to mature. I find reviewers (and literature reviewers) overly deferential to prior work of dubious quality simply because of its priority.

Endnotes

  1. An under-appreciated element of this process is that it is simply easier to do linguistically-relevant things with computers than it was 20 years ago. For this, one should thank Python and R, NumPy and Scikit-learn, and of course tools like Praat and Parselmouth.
  2. I happen to think college education should not be merely vocational training.
  3. I happen to think most of these questions can be answered with a cheap laptop, and only a few require a CUDA-enabled GPU.
  4. I suspect this is mostly a response to the rapidly casualizing academy. Unfortunately, any question about whether we should be doing X in linguistics is misinterpreted as a question about whether people who do X deserve to have a job. This is a presupposition failure for me: I believe everyone deserves meaningful work, and that academic tenure is a model of labor relations that should be expanded beyond the academy.
  5. To free ourselves of this bias, Gelman proposes what he calls the time-reversal heuristic, in which one imagines the temporal order reversed (e.g., that the later failed replication is now the first published result on the matter) and then re-evaluates the evidence. When interacting with older computational work, similar thinking is called for.

A* shortest string decoding for non-idempotent semirings

I recently completed some work, in collaboration with Google’s Cyril Allauzen, on a new algorithm for computing the shortest string through weighted finite-state automaton. For so-called path semirings, the shortest string is given by the shortest path, but up until now, there was no general-purpose algorithm for computing the shortest string over non-idempotent semirings (like the log or probability semiring). Such an algorithm would make it much easier to decode with interpolated language models or elaborate channel models in a noisy-channel formalism. In this preprint, we propose such an algorithm using A* search and lazy (“on-the-fly”) determinization, and prove that it is correct. The algorithm in question is implemented in my OpenGrm-BaumWelch library by the baumwelchdecode command-line tool.
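
To see why idempotency is the crux, compare the additive operation in the two semirings (my gloss, not a quotation from the paper). In the tropical semiring the weight of a string can be read off its single best path, because combining a path weight with itself changes nothing; in the log semiring it cannot, because every additional path for the same string contributes probability mass:

Tropical: w ⊕ w = min(w, w) = w
Log:      w ⊕ w = −log(e^(−w) + e^(−w)) = w − log 2 ≠ w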

WFST talk

I have posted a lightly-revised slide deck from a talk I gave at Johns Hopkins University here. In it, I give my most detailed description yet of the weighted finite-state transducer formalism and describe two reasonably interesting algorithms: the optimization algorithm underlying Pynini’s optimize method and Thrax’s Optimize function, and a new A*-based single shortest string algorithm for non-idempotent semirings underlying BaumWelch’s baumwelchdecode CLI tool.

Evaluations from the past

In a literature review, speech and language processing specialists often feel tempted to report evaluation metrics like accuracy, F-score, or word error rate for the systems under discussion. In my opinion, this is only informative if the prior and present work use the exact same data set(s) for evaluation. (Such results should probably be presented in a table alongside results from the present work, not in the body of the literature review.) If instead the prior systems were tested on a proprietary data set, an obsolete corpus, or a data set the authors of the present work have declined to evaluate on, this information is inactionable. Authors should omit it, and reviewers and editors should insist that it be omitted.

It is also clear to me that these numbers are rarely meaningful as measures of how difficult a task is in general. To take an example from an unnamed 2019 NAACL paper (one guilty of the sin described above), reported word error rates on a single task in a single language range between 9.1% and 23.61% (note also the mixed precision). What could we possibly conclude from this enormous spread of results across different data sets?