Codon math

It is well known that there are twenty “proteinogenic” amino acids—those used to build proteins—in eukaryotes (i.e., lifeforms with nucleated cells). When biologists first began to realize that DNA synthesizes RNA, which in turn synthesizes amino acids, it was not yet known how many DNA bases (the vocabulary being A, T, C, and G) were required to code an amino acid. It turns out the answer is three: each codon is a triplet of bases corresponding to a single amino acid. However, one might have deduced that answer ahead of time using some basic algebra, as did Soviet-American polymath George Gamow. Given that one needs at least 20 codes (and admitting that some redundancy is possible), it should be clear that pairs of bases will not suffice to uniquely identify the different amino acids: 4² = 16, which is less than 20. However, triples will more than suffice: 4³ = 64. This holds assuming that codons are interpreted consistently, independent of their context (as Gamow correctly deduced), and regardless of whether the triplets overlap (Gamow incorrectly guessed that they did, so that a six-base sequence would contain four triplet codons; in fact it contains no more than two).
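To make the counting argument concrete, here is a quick Python sketch (my own illustration, not anything from Gamow) that finds the smallest codon length whose combinations cover the twenty amino acids:

```python
from itertools import product

BASES = "ATCG"
NUM_AMINO_ACIDS = 20  # the proteinogenic amino acids

# Smallest n such that 4 ** n >= 20.
n = 1
while len(BASES) ** n < NUM_AMINO_ACIDS:
    n += 1
print(n)  # 3, since 4 ** 2 = 16 < 20 <= 4 ** 3 = 64

# Enumerating non-overlapping codons directly gives the same count.
codons = ["".join(triple) for triple in product(BASES, repeat=n)]
print(len(codons))  # 64
```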

All of this is a long way to link back to the idea of counting entities in phonology.  It seems to me we can ask just how many features might be necessary to mark all the distinctions needed. At the same time, Matamoros & Reiss (2016), for instance, following some broader work by Gallistel & King (2009), take it as desirable that a cognitive theory involve a small number of initial entities that give rise to a combinatoric explosion that, at the etic level, is “essentially infinite”. Surely similar thinking can be applied throughout linguistics.
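As a back-of-the-envelope illustration (my own, assuming purely binary features and ignoring redundancy and markedness), the same arithmetic tells us both how few features are needed for a given inventory and how quickly their combinations explode:

```python
import math

def min_binary_features(num_segments: int) -> int:
    """Smallest n such that 2 ** n >= num_segments."""
    return math.ceil(math.log2(num_segments))

# A handful of binary features already covers a sizable segment inventory...
print(min_binary_features(40))  # 6

# ...while a modest feature set yields an "essentially infinite" etic space.
print(2 ** 20)  # 1048576 combinations from just 20 binary features
```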

References

Gallistel, C. R. and King, A. P. 2009. Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. Wiley-Blackwell.
Matamoros, C. and Reiss, C. 2016. Symbol taxonomy in biophonology. In A. M. Di Sciullo (ed.), Biolinguistic Investigations on the Language Faculty, pages 41-54. John Benjamins Publishing Company.

Foundation models

It is widely admitted that the use of language in terms like formal language and language model tends to mislead neophytes, since such terms suggest the common-sense notion (roughly, e-language) rather than the narrow technical sense referring to a set of strings. Scholars at Stanford have been trying to push foundation model as an alternative to what were previously called large language models. But I don’t really like the implication—which I take to be quite salient—that such models ought to serve as the foundation for NLP, AI, whatever. I use large language models in my research, but not that often, and I actually don’t think they have to be part of every practitioner’s toolkit. I can’t help thinking that Stanford is trying to “make fetch happen”.

Is NLP stuck?

I can’t help but feel that NLP is once again stuck.

From about 2011 to 2019, I can identify a huge step forward just about every year. But the last thing that truly excited me was BERT, which came out in 2018 and was published in 2019. For those not in the know, the idea of BERT is to pre-train a gigantic language model, with either monolingual or multilingual data. The major pre-training task is masked language model prediction: we pretend some small percentage (usually 15%) of the words in a sentence are obscured by noise and try to predict what they were. Ancillary tasks, like predicting whether two sentences are adjacent or not (or, if they were, what their order was), are also used, but appear to be non-essential. Pre-training (done a single time, at some expense, at BigCo HQ) produces a contextual encoder, a model which can embed words and sentences in ways that are useful for many downstream tasks. One can also take this encoder and fine-tune it to some other downstream task (an instance of transfer learning). It turns out that the combination of task-general pre-training using free-to-cheap ordinary text data and a small amount of task-specific fine-tuning using labeled data results in substantial performance gains over what came before. The BERT creators gave away both software and the pre-trained parameters (which would be expensive for an individual or a small academic lab to reproduce on their own), and an entire ecosystem for sharing pre-trained model parameters has emerged. I see this toolkit-development ecosystem as a sign of successful science.
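To give a flavor of the masked-prediction task, here is a minimal sketch using the Hugging Face transformers library and the freely shared bert-base-uncased checkpoint; the library, checkpoint, and example sentence are my choices for illustration, not anything prescribed by the BERT authors.

```python
# Minimal masked-word prediction with a pre-trained BERT checkpoint.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Pretend one word is obscured and ask the model to fill it back in.
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Take the most probable token at the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = int(logits[0, mask_index].argmax())
print(tokenizer.decode([predicted_id]))  # "paris", with any luck
```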

From my limited perspective, very little has happened since then that is not just more BERTology—that is, exploiting BERT and similar models. The only alternative on the horizon, in the last four years now, is pre-trained large language models without the encoder component, of which the best known are the GPT family (now up to GPT-3). These models do one thing well: they take a text prompt and produce more text that seemingly responds to the prompt. However, whereas BERT and family are free to reuse, GPT-3’s parameters and software are both closed source and can only be accessed at scale by paying a licensing fee to Microsoft. That itself is a substantial regression compared to BERT. More importantly, though, the GPT family are far less expressive tools than BERT, since they don’t really support fine-tuning. (More precisely, I don’t see any difficult technical barriers to fine-tuning GPT-style models; it’s just not supported.) Thus they can really only be used for one thing: zero-shot text generation tasks, in which the task is “explained” to the model in the input prompt, and the output is also textual. Were it possible to simply write out, in plain English, what you want, and then get the output in a sensible text format, that of course would be revolutionary, but it’s not the case. Rather, GPT has spawned a cottage industry of prompt engineering. A prompt engineer, roughly, is someone who specializes in crafting prompts that coax the desired behavior out of the model. It is of course impressive that this can be done at all, but just because an orangutan can be taught to make an adequate omelette doesn’t mean I am going to pay one to make breakfast. I simply don’t see how any of this represents an improvement over the BERT ecosystem, which is at least easy to use, free, and open source. And as you might expect, GPT’s zero-shot approach is quite often much worse than what one would obtain using the light supervision of the BERT-style fine-tuning approach.
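For contrast with the masked-prediction sketch above, here is roughly what zero-shot prompting looks like. Since GPT-3 itself sits behind a paid API, this sketch uses the freely available GPT-2 checkpoint via the same transformers library; the prompt and settings are purely illustrative, and the output will be correspondingly underwhelming.

```python
# Zero-shot prompting: the "task description" lives entirely in the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Translate English to French.\n"
    "English: cheese\n"
    "French:"
)
result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])
```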

Phonological nihilism

One might argue that phonology is in something of a crisis period. Phonology seems to be going through the early stages of grief for what I see as the failure of teleological, substance-rich, constraint-based, parallel-evaluation approaches to make headway, but the next paradigm shift has yet to become clear to us. I personally think that logical, substance-free, serialist approaches ought to represent our next i-phonology paradigm, with “evolutionary”-historical thinking providing the e-language context, but I may be wrong, and an altogether different paradigm may be waiting in the wings. The thing that troubles me is that phonologists from these still-dominant constraint-based traditions seem to have less and less faith in the tenets of their theories, and in the worst case this expresses itself as a sort of nihilism. I discern two forms of this nihilism. The first is the phonologist who thinks we’re doing “word sudoku”, playing games of minimal description that produce generalizations without a shred of cognitive support. The second is the phonologist who thinks that everything is memorized, so that the actual domain of phonological generalization is just Psych 101 subject-pool nonce-word experiments. My pitch to both types of nihilists is the same: if you truly believe this, you ought to spend more time at the beach and less in the classroom, and save some space in the discourse for those of us who believe in something.