The alternation phonotactic hypothesis

The hypothesis

In a recent handout, I discuss the following hypothesis, implicit in my dissertation (Gorman 2013):

(1) Alternation Phonotactic Hypothesis: Let A, B, C, and D be (possibly-null) string sets. Then, if a grammar G contains a surface-true rule of alternation A → B / C __ D, nonce words containing the subsequence CAD are ill-formed for speakers of G.

Before I continue, note that this definition is “phenomenological” in the sense that it refers to two notions—alternations and surface-true-ness—which are not generally considered to be encoded directly in the grammar. Regarding the notion of alternations, it is not difficult to formalize whether or not a rule is alternating.

(2) Let a rule be defined by possibly-null string sets A, B, C, and D as in (1). Then, if any element of B is a phoneme, the rule is a rule of alternation.

(3) [ditto] If no elements of B are phonemes, then the rule is a rule of (pure) allophony.

But from the argument against bi-uniqueness in The Sound Pattern of Russian (Halle 1959), it follows that we should reject a grammar-internal distinction between rules of alternation and allophony, and subsequent theory provides no way to encode this distinction in the grammar. Similarly, it is not hard to define what it means for a rule to be surface-true.

(4) [ditto] If no instances of CAD are generated by the grammar G, then the rule is surface-true.

But, there does not seem to be much reason for that notion to be encoded in the grammar and the theory does not provide any way to encode it.1 Note further that I am also deliberately stating in (1) that a constraint against CAD has been “projected” from the alternation, rather than treating such constraints as autonomous entities of the theory as is done in Optimality Theory (OT) and friends. Finally, I have phrased this in terms of grammaticality (“are ill-formed”) rather than acceptability.
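To make the definitions concrete, here is a toy sketch of (1) and (4). The representation of a rule as four strings, and the schematic rule, lexicon, and nonce forms below, are my own illustrative assumptions, not material from the handout.

```python
# A toy sketch of definitions (1) and (4). A rule A -> B / C __ D is
# represented as four strings; all forms below are schematic placeholders.

def surface_true(rule, surface_forms):
    """A rule is surface-true, per (4), iff the grammar generates no
    surface form containing the subsequence CAD."""
    a, _, c, d = rule
    return not any(c + a + d in form for form in surface_forms)

def aph_ill_formed(nonce, rule, surface_forms):
    """Per the APH in (1): a nonce word is ill-formed iff the rule
    (here assumed to be an alternation) is surface-true and the nonce
    word contains the subsequence CAD."""
    a, _, c, d = rule
    return surface_true(rule, surface_forms) and c + a + d in nonce

# Schematic devoicing alternation: d -> t / __ s (A="d", B="t", C="", D="s").
rule = ("d", "t", "", "s")
lexicon = ["tats", "bets", "dats"]  # no surface "ds": the rule is surface-true
assert surface_true(rule, lexicon)
assert aph_ill_formed("bads", rule, lexicon)       # contains CAD = "ds"
assert not aph_ill_formed("bats", rule, lexicon)
```

Note that in this sketch the constraint against CAD is not an autonomous object: it is computed on demand from the rule itself, mirroring the “projection” metaphor used above.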

Why might the Alternation Phonotactic Hypothesis (henceforth, APH) be true? First, I take it as obvious that alternations are more entrenched facts about grammars than pure allophony. For instance, in English, stop aspiration could be governed by a rule of allophony, but it is also plausible that English speakers simply represent aspirated stops as such in their lexical entries since there are no aspiration alternations. This point was made separately by Dell (1973) and Stampe (1973), and motivates the notion of lexicon optimization in OT. In contrast, though, rules of alternation (or something like them) are actually necessary to obtain the proper surface forms. An English speaker who does not have a rule of obstruent voice assimilation will simply not produce the right allomorphs of various affixes. In contrast, the same speaker need not encode a process of nasalization—which in English is clearly allophonic (see, e.g., Kager 1999: 31f.)—to obtain the correct outputs. Given that alternations are entrenched in the relevant sense, it is not impossible to imagine that speakers might “project” constraints out of alternation generalizations in the manner described above. Such constraints could be used during online processing, assuming a strong isomorphism between grammatical representations used during production and perception.2 Second, since not all alternations are surface-true, it seems reasonable to limit this process of projection to those which are. Were one to project non-surface-true constraints in this fashion, the speaker would find themselves in an awkward position in which actual words are ill-formed.3,4

The APH contrasts interestingly with the following:

(5) Lexicostatistic Phonotactic Hypothesis: Let A, C, and D be (possibly-null) string sets. Then, if CAD is statistically underrepresented (in a sense to be determined) in the lexicon L of a grammar G, nonce words containing the subsequence CAD are ill-formed for speakers of G.

According to the LSPH (as we’ll call it), phonotactic knowledge is projected not from alternations but from statistical analysis of the lexicon. The LSPH is at least implicit in the robust cottage industry which uses statistical and/or computational modeling of the lexicon to infer the existence of phonotactic generalizations. It is notable that virtually none of this “cottage industry” of LSPH work discusses anything like the APH. Finally, one should note that the APH and the LSPH do not exhaust the set of possibilities. For instance, Berent et al. (2007) and Daland et al. (2011) test for effects of the Sonority Sequencing Principle, a putative linguistic universal, on wordlikeness judgments. And some have denied the very existence of phonotactic constraints.
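One common way to cash out “statistically underrepresented (in a sense to be determined)” is an observed/expected (O/E) ratio over the lexicon. The following toy sketch computes such a ratio for a bigram, with expected counts derived from unigram frequencies under independence; the lexicon and the choice of statistic are my own illustrative assumptions, not a claim about how LSPH work actually operationalizes the notion.

```python
# A minimal sketch of one possible reading of "statistically
# underrepresented" in (5): the observed/expected ratio of a bigram,
# where the expected count assumes segments combine independently.

from collections import Counter

def o_e_ratio(bigram, lexicon):
    unigrams = Counter(seg for word in lexicon for seg in word)
    bigrams = Counter(word[i:i + 2] for word in lexicon
                      for i in range(len(word) - 1))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())
    # Expected count under independence of the two segments.
    p_expected = (unigrams[bigram[0]] / n_uni) * (unigrams[bigram[1]] / n_uni)
    expected = p_expected * n_bi
    return bigrams[bigram] / expected if expected else float("inf")

lexicon = ["tap", "pat", "tip", "pit", "apt"]
# "tp" never occurs even though t and p are both frequent: O/E = 0.
assert o_e_ratio("tp", lexicon) == 0.0
assert o_e_ratio("ap", lexicon) > 1.0   # "ap" is overrepresented here
```

Under the LSPH, a sufficiently low O/E ratio (by some threshold, itself to be determined) would predict that nonce words containing the sequence are judged ill-formed.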

Gorman 2013 reviews some prior results that argue in favor of the APH; I describe these below.

Consider the putative English phonotactic constraint *V̄ʃ#, a constraint against word-final sequences of tense vowels followed by [ʃ] proposed by Iverson & Salmons (2005). Exceptions to this generalization tend to be markedly foreign (e.g., cartouche), to be proper names (e.g., LaRouche), or to convey an “affective, onomatopoeic quality” (e.g., sheesh, woosh). As Gorman (2013:43f.) notes, this constraint is statistically robust, but Hayes & White (2013) report that it has no measurable effect on English speakers’ wordlikeness judgments. In contrast, three English alternation rules (nasal place assimilation, obstruent voice assimilation, and degemination) have a substantial impact on wordlikeness judgments (Gorman 2013, ch. 4).

A second, more elaborate example comes from Turkish. Lees (1966a,b) proposes three phonotactic constraints in this language: backness harmony, roundness harmony, and labial attraction. All three of these constraints have exceptions, but Gorman (2013:57-60) shows that they are statistically robust generalizations. Thus, under the LSPH, speakers ought to be sensitive to all three.

Endnotes

  1. I note that the CONTROL module proposed by Orgun & Sprouse (1999) might be a mechanism by which this information could be encoded.
  2. Some evidence that phonotactic knowledge is deployed in production comes from the study of Finnish and Turkish, both of which have robust vowel harmony. Suomi et al. (1997) and Vroomen et al. (1998) find that disharmony seemingly acts as a cue for word boundaries in Finnish, and Kabak et al. (2010) find something similar for Turkish, but not for French, which lacks harmony.
  3. Durvasula & Kahng (2019) find that speakers do not necessarily judge a nonce word to be ill-formed just because it fails to follow certain subtle allophonic generalizations, which suggests that the distinction between allophony and alternation may be important here.
  4. I note that it has sometimes been proposed that actual words of G may in fact be gradiently marked or otherwise degraded with respect to the grammar G if they violate phonotactic constraints projected from G (e.g., Coetzee 2008). However, the null hypothesis, it seems to me, is that all actual words are also possible words and so it does not make sense to speak of actual words as marked or ill-formed, gradiently or otherwise.

References

Berent, I., Steriade, D., Lennertz, T., and Vaknin, V. 2007. What we know about what we have never heard: evidence from perceptual illusions. Cognition 104: 591-630.
Coetzee, A. W. 2008. Grammaticality and ungrammaticality in phonology. Language 84(2): 218-257. [I critique this briefly in Gorman 2013, p. 4f.]
Daland, R., Hayes, B., White, J., Garellek, M., Davis, A., and Norrmann, I. 2011. Explaining sonority projection effects. Phonology 28: 197-234.
Dell, F. 1973. Les règles et les sons. Hermann.
Durvasula, K. and Kahng, J. 2019. Phonological acceptability is not isomorphic with phonological grammaticality of stimulus. Talk presented at the Annual Meeting on Phonology.
Gorman, K. 2013. Generative phonotactics. Doctoral dissertation, University of Pennsylvania.
Halle, M. 1959. The Sound Pattern of Russian. Mouton.
Hayes, B. and White, J. 2013. Phonological naturalness and phonotactic learning. Linguistic Inquiry 44: 45-75.
Iverson, G. K. and Salmons, J. C. 2005. Filling the gap: English tense vowel plus final /š/. Journal of English Linguistics 33: 1-15.
Kager, R. 1999. Optimality Theory. Cambridge University Press.
Kabak, B., Maniwa, K., and Kazanina, N. 2010. Listeners use vowel harmony and word-final stress to spot nonsense words: a study of Turkish and French. Laboratory Phonology 1: 207-224.
Orgun, C. O. and Sprouse, R. 1999. From MPARSE to CONTROL: deriving ungrammaticality. Phonology 16: 191-224.
Lees, R. B. 1966a. On the interpretation of a Turkish vowel alternation. Anthropological Linguistics 8: 32-39.
Lees, R. B. 1966b. Turkish harmony and the description of assimilation. Türk Dili Araştırmaları Yıllığı Belleten 1966: 279-297.
Stampe, D. 1973. A Dissertation on Natural Phonology. Garland. [I don’t have this in front of me but if I remember correctly, Stampe argues non-surface true phonological rules are essentially second-class citizens.]
Suomi, K., McQueen, J. M., and Cutler, A. 1997. Vowel harmony and speech segmentation in Finnish. Journal of Memory and Language 36: 422-444.
Vroomen, J., Tuomainen, J. and de Gelder, B. 1998. The roles of word stress and vowel harmony in speech segmentation. Journal of Memory and Language 38: 133-149.

Anatomy of an analogy

I have posted a lightly-revised version of the handout of a talk I gave at Stony Brook University last November here on LingBuzz. In it, I argue that analogical leveling phenomena in Latin previously attributed to pressures against interparadigmatic analogy or towards phonological process overapplication are better understood as the result of Neogrammarian sound change, loss of productivity, and finally covert reanalysis.

What phonotactics-free phonology is not

In my previous post, I showed how many phonological arguments are implicitly phonotactic in nature, using the analysis of the Latin labiovelars as an example. If we instead adopt a restricted view of phonotactics as derived from phonological processes, as I argue for in Gorman 2013, what specific forms of argumentation must we reject? I discern two such types:

  1. Arguments from the distribution of phonemes in URs. Early generative phonologists posited sequence structure constraints, constraints on sequences found in URs (e.g., Stanley 1967 et seq.). This seems to reflect the then-contemporary mania for information theory and lexical compression, ideas which appear to have led nowhere and which were abandoned not long after. Modern forms of this argument may use probabilistic constraints instead of categorical ones, but the same critiques remain. It has never been articulated why these constraints, whether categorical or probabilistic, are considered key acquirenda; i.e., why would speakers bother to track these constraints, given that they simply recapitulate information already present in the lexicon? Furthermore, as I noted in the previous post, it is clear that some of these generalizations are apparent even to non-speakers of the language; for example, monolingual New Zealand English speakers have a surprisingly good handle on Māori phonotactics despite knowing few if any Māori words. Finally, as discussed elsewhere (Gorman 2013: ch. 3, Gorman 2014), some statistically robust sequence structure constraints appear to have little if any effect on speakers’ judgments of nonce word well-formedness, loanword adaptation, or the direction of language change.
  2. Arguments based on the distribution of SRs not derived from neutralizing alternations. Some early generative phonologists also posited surface-based constraints (e.g., Shibatani 1973). These were posited to account for supposed knowledge of “wordlikeness” that could not be explained on the basis of constraints on URs. One example is that of German, which has across-the-board word-final devoicing of obstruents, but which clearly permits underlying root-final voiced obstruents in free stems (e.g., [gʀaːt]-[gʀaːdɘ] ‘degree(s)’ from /grad/). In such a language, Shibatani claims, a nonce word with a word-final voiced obstruent would be judged un-wordlike. Two points should be made here. First, the surface constraint in question derives directly from a neutralizing phonological process. Constraint-based theories which separate the “disease” and the “cure” posit a constraint against word-final voiced obstruents, but in procedural/rule-based theories there is no reason to reify this generalization, which after all is a mere recapitulation of the facts of alternation, arguably a more entrenched source of evidence for grammar construction. Second, Shibatani did not in fact validate his claim about German speakers’ judgments in any systematic fashion. Some recent work by Durvasula & Kahng (2019) reports that speakers do not necessarily judge a nonce word to be ill-formed just because it fails to follow certain subtle allophonic principles.
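The German neutralization pattern in point 2 can be sketched as follows. The rule table and the ASCII stand-in forms are my own illustrative placeholders (with "-e" standing in for the plural suffix [-ɘ]), not an analysis of German per se.

```python
# A minimal sketch of word-final obstruent devoicing: both /grat/ 'ridge'
# and /grad/ 'degree' surface identically in the bare singular, while the
# suffixed plurals keep the underlying contrast audible.

DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def final_devoicing(form):
    """Devoice a word-final voiced obstruent, if there is one."""
    if form and form[-1] in DEVOICE:
        return form[:-1] + DEVOICE[form[-1]]
    return form

assert final_devoicing("grat") == "grat"        # 'ridge': vacuous
assert final_devoicing("grad") == "grat"        # 'degree': neutralized
assert final_devoicing("grad" + "e") == "grade" # plural: contrast surfaces
```

On the view argued for here, this neutralizing process itself is all the learner needs; the corresponding surface constraint against word-final voiced obstruents is a recapitulation of it, not an independent object.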

References

Durvasula, K. and Kahng, J. 2019. Phonological acceptability is not isomorphic with phonological grammaticality of stimulus. Talk presented at the Annual Meeting on Phonology.
Gorman, K. 2013. Generative phonotactics. Doctoral dissertation, University of Pennsylvania.
Gorman, K. 2014.  A program for phonotactic theory. In Proceedings of the Forty-Seventh Annual Meeting of the Chicago Linguistic Society: The Main Session, pages 79-93.
Shibatani, M. 1973. The role of surface phonetic constraints in generative phonology. Language 49(1): 87-106.
Stanley, R. 1967. Redundancy rules in phonology. Language 43(2): 393-436.

Towards a phonotactics-free phonology

Early generative phonology had surprisingly little to say about the theory of phonotactics. Chomsky and Halle (1965) claim that English speakers can easily distinguish between real words like brick, well-formed or “possible” nonce words like blick, and ill-formed or “impossible” nonce words like bnick. Such knowledge must be in part language-specific, since, for instance, [bn] onsets are in some languages—Hebrew for instance—totally unobjectionable. But few attempts were made at the time to figure out how to encode this knowledge.

Chomsky and Halle, and later Stanley (1967), propose sequence structure constraints (SSCs), generalizations which encode sequential redundancies in underlying representations.1 Chomsky and Halle (p. 100) hypothesize that such generalizations might account for the ill-formedness of bnick: perhaps English consonants preceded by a word-initial obstruent must be liquids: thus blick but not bnick. Shibatani (1973) claims that not all language-specific generalizations about (im)possible words can derive from restrictions on underlying representations and must (instead or also) be expressed in terms of restrictions on surface forms. For instance, in German, obstruent voicing is contrastive but neutralized word-finally; e.g., [gʀaːt]-[gʀaːtɘ] ‘ridge(s)’ vs. [gʀaːt]-[gʀaːdɘ] ‘degree(s)’. Yet Shibatani claims that German speakers judge word-final voiced obstruents, as in the hypothetical but unattested [gʀaːd], to be ill-formed. Similar claims were made by Clayton (1976). And that roughly exhausts the debate at the time. Many years later, Hale and Reiss, for instance, deny that this kind of knowledge is part of the narrow faculty of language.

Even if we, as linguists, find some generalizations in our description of the lexicon, there is no reason to posit these generalizations as part of the speaker’s knowledge of their language, since they are computationally inert and thus irrelevant to the input-output mappings that the grammar is responsible for. (Hale and Reiss 2008:17f.)

Many years later, Charles Reiss (p.c.) proposed to me a brief thought experiment. Imagine that you were to ask a naïve non-linguist monolingual English speaker to discern whether a short snippet of spoken language was either, say, Māori or Czech. Would you not expect that such a speaker would do far better than chance, even if they themselves do not know a single word in either language? Clearly then, (at least some form of) phonotactic knowledge can be acquired extremely indirectly, effortlessly, without any substantial exposure to the language, and does not imply any deep knowledge of the grammar(s) in question.2

In a broader historical context, though, early generativists’ relative disinterest in phonotactic theory is something of an anomaly. Structuralist phonologists, in developing phonemicizations, were at least sometimes concerned with positing phonemes that have a restricted distribution. And for phonologists working in strains of thinking that ultimately spawned Harmonic Grammar and Optimality Theory, phonotactic generalizations are to a considerable degree what phonological grammars are made of.

A phonological theory which rejects phonotactics as part of the narrow language faculty—as do Hale and Reiss—is one which makes different predictions than theories which do include it, if only because such an assumption necessarily excludes certain sources of evidence. Such a grammar cannot make reference to generalizations about distributions of phonemes that are not tied to allophonic principles or to alternations. Nor can it make reference to the distribution of contrast except in the presence of neutralizing phonological processes.

I illustrated this point very briefly in Gorman 2014 with a famous case from Sanskrit (the so-called diaspirate roots); here I’d like to provide a more detailed example using a language I know much better, namely Latin. Anticipating the conclusions drawn below, it seems that nearly all the arguments mustered in this well-known case are phonotactic in nature and are irrelevant in a phonotactics-free theory of phonology.

In Classical Latin, the orthographic sequence qu (or more specifically <QV>) denotes the sound [kw].3 Similarly, gu is ambiguously either [gu], as in exiguus [ek.si.gu.us] ‘scanty’, or [gw], as in anguis [aŋ.gwis] ‘snake’. For whatever reason, it seems that gu was pronounced as [gw] if and only if it was preceded by an n. It is not at all clear whether this should be regarded as an orthographic generalization, a phonological principle, or a mere accident of history.

How should the labiovelars qu and (post-nasal) gu be phonologized? This topic has been the subject of much speculation. Devine and Stephens (1977) devoted half a lengthy book to the topic, for instance. More recently, Cser’s (2020: 22f.) phonology of Latin reconsiders the evidence, revising an earlier presentation (Cser 2013) of these facts. In fact, three possibilities are imaginable: qu, for instance, could be unisegmental /kʷ/,4 bisegmental /kw/, or even /ku/ (Watbled 2005), though as Cser correctly observes, the latter does not seem to be workable. Cser reluctantly concludes that the question is not yet decidable. Let us consider this question briefly, departing from Cser’s theorizing only in the assumption of a phonotactics-free phonology.

  1. Frequency. Following Devine and Stephens, Cser notes that the lexical frequency of qu greatly exceeds that of k and glide [w] (written u) in general. They take this as evidence for unisegmental /kʷ, gʷ/. However, it is not at all clear to me why this ought to matter to the child acquiring Latin. In a phonotactics-free phonology, there is simply no reason for the learner to attend to this statistical discrepancy.
  2. Phonetic issues. Cser reviews testimonia from ancient grammarians suggesting that the “[w] element in <qu> was less consonant-like than other [w]s” (p. 23). However, as he points out, this is trivially handled in the unisegmental analysis and is a trivial example of allophony in the bisegmental analysis.
  3. Geminates. Cser points out that the labiovelars, unlike all consonants but [w], fail to form intervocalic geminates. However, phonotactics-free phonology has no need to explain which underlying geminates are and are not allowed in the lexicon.
  4. Positional restrictions. Under a bisegmental interpretation, the labiovelars are “marked” in that obstruent-glide sequences are rare in Latin. On the other hand, under a unisegmental interpretation, the absence of word-final labiovelars is unexpected. However, neither of these observations has any status in phonotactics-free phonology.
  5. The question of [sw]. The sequence [sw] is attested initially in a few words (e.g., suāuis ‘sweet’). Is [sw] uni- or bisegmental? Cser notes that, were one to adopt a unisegmental analysis for the labiovelars qu and gu, [sw] would be the only complex onset in which [w] may occur. However, an apparently restricted distribution for [w] has no evidentiary status in phonotactics-free phonology; it can only be a historical accident encoded implicitly in the lexicon.
  6. Verb root structure. Devine and Stephens claim that verb roots ending in a three-consonant sequence are unattested except for roots ending in a sonorant-labiovelar sequence (e.g., torquere ‘to turn’, tinguere ‘to dip’). While this is unexplained under a bisegmental analysis, this is an argument based on distributional restrictions that have no status in phonotactics-free phonology. 
  7. Voicing contrast in clusters. Voicing is contrastive in Latin nasal-labiovelar clusters, thus linquam ‘I will/would leave’ (1sg. fut./subj. act.) vs. linguam ‘tongue’ (acc.sg.). According to Cser, under the biphonemic analysis this would be the only context in which a CCC cluster has contrastive voicing, and “[t]his is certainly a fact that points towards the greater plausibility of the unisegmental interpretation of labiovelars” (p. 27). It is not clear that the distribution of voicing contrasts ought to be taken into account in a phonotactics-free theory, since there is no evidence for a process neutralizing voicing contrasts in word-internal trisegmental clusters.
  8. Alternations. In two verbs, qu alternates with cū [kuː] in the perfect participle (ppl.): loquī ‘to speak’ vs. its ppl. locūtus and sequī ‘to follow’ vs. its ppl. secūtus. Superficially this resembles alternations in which [lv, bv, gv] alternate with [luː, buː, guː] in the perfect participle. This suggests a bisegmental analysis, and since this is based on patterns of alternation, is consistent with a phonotactics-free theory. On the other hand, qu also alternates with plain c [k]. For example, consider the verb coquere ‘to cook’, which has a perfect participle coctus. Similarly, the verb relinquere ‘to leave’ has a perfect participle relictus, but the loss of the Indo-European “nasal insert” (as it is known) found in the infinitive may suggest an alternative—possibly suppletive—analysis. Cser concludes, and I agree, that this evidence is ambiguous.
  9. ad-assimilation. The prefix ad- variably assimilates in place and manner to the following stem-initial consonant. Cser claims that this is rare with qu-initial stems (e.g., unassimilated adquirere ‘to acquire’ is far more frequent than assimilated acquirere in the corpus). This is suggestive of a bisegmental analysis insofar as ad-assimilation is extremely common with [k]-initial stems. This seems to weakly support the bisegmental analysis.5
  10. Diachronic considerations. Latin qu is a descendent of the Indo-European *kʷ, one member of a larger labiovelar series. All members of this series appear to be unisegmental in the proto-language. However, as Cser notes, this is simply not relevant for the synchronic status of qu and gu.
  11. Poetic licence. Rarely, the poets used a device known as diaeresis, the reading of [w] as [u] to fill out the meter. Cser claims this does not obtain for qu. This is weak evidence for the unisegmental analysis, because the labial-glide portion of /kʷ/ would not obviously be within the scope of diaeresis.
  12. The distribution of gu. As noted above the voiced labiovelar gu is lexically quite rare, and always preceded by n. In a phonological theory which attends to phonotactic constraints, this is an explanandum crying out for an explanans. Cser argues that it is particularly odd under the unisegmental analysis because there is no other segment so restricted. But in phonotactics-free phonology, there is no need to explain this accident of history.

Cser concludes that this series of arguments is largely inconclusive. He takes (7, 11) to be evidence for the unisegmental analysis, (3, 5, 8, 9) to be evidence for the bisegmental analysis, and all other points to be largely inconclusive. Reassessing the evidence in a phonotactics-free theory, only (9) and (11), both based on rather rare evidence, remain as possible arguments for the status of the labiovelars. I too have to regard the evidence as inconclusive, though I am now on the lookout for diaeresis of qu and gu, and hope to obtain a better understanding of prefix-final consonant assimilation.

Clearly, working phonologists are heavily dependent on phonotactic arguments, and rejecting them as explanations would substantially limit the evidence base used in phonological inquiry.

Endnotes

  1. In part this must reflect the obsession with information theory in linguistics at the time. Of this obsession Halle (1975) would later write that this general approach was “of absolutely no use to anyone working on problems in linguistics” (532).
  2. As it happens, monolingual English-speaking New Zealanders are roughly as good at discriminating between “possible” and “impossible” Māori nonce words as are Māori speakers (Oh et al. 2020).
  3. I write this phonetically as [kw] rather than [kʷ] because it is unclear to me how the latter might differ phonetically from the former. These objections do not apply to the phonological transcription /kʷ/, however.
  4. Recently Gouskova and Stanton (2021) have revived this theory and applied it to a number of case studies in other languages. 
  5. It is at least possible that unassimilated spellings are “conservative” spelling conventions and do not reflect speech. If so, one may still wish to explain the substantial discrepancy in rates of (orthographic) assimilation to different stem-initial consonants and consonant clusters.

References

Chomsky, N. and Halle, M. 1965. Some controversial questions in phonological theory. Journal of Linguistics 1(2): 97-138.
Clayton, M. L. 1976. The redundance of underlying morpheme-structure conditions. Language 52(2): 295-313.
Cser, A. 2013. Segmental identity and the issue of complex segments. Acta Linguistica Hungarica 60(3): 247-264.
Cser, A. 2020. The Phonology of Classical Latin. John Wiley & Sons.
Devine, A. M. and Stephens, L. D. 1977. Two Studies in Latin Phonology. Anma Libri.
Gorman, K. 2013. Generative phonotactics. Doctoral dissertation, University of Pennsylvania.
Gorman, K. 2014. A program for phonotactic theory. In Proceedings of the Forty-Seventh Annual Meeting of the Chicago Linguistic Society: The Main Session, pages 79-93.
Gouskova, M. and Stanton, J. 2021. Learning complex segments. Language 97(1): 151-193.
Hale, M. and Reiss, C. 2008. The Phonological Enterprise. Oxford University Press.
Halle, M. 1975. Confessio grammatici. Language 51(3): 525-535.
Oh, Y., Simon, T., Beckner, C., Hay, J., King, J., and Needle, J. 2020. Non-Māori-speaking New Zealanders have a Māori proto-lexicon. Scientific Reports 10: 22318.
Shibatani, M. 1973. The role of surface phonetic constraints in generative phonology. Language 49(1): 87-106.
Stanley, R. 1967. Redundancy rules in phonology. Language 43(2): 393-436.
Watbled, J.-P. 2005. Théories phonologiques et questions de phonologie latine. In C. Touratier (ed.), Essais de phonologie latine, pages 25-57. Publications de l’Université de Provence.

Thought experiment #2

In an earlier post, I argued for the logical necessity of admitting some kind of “magic” to account for lexically arbitrary behaviors like Romance metaphony or Slavic yers. In this post I’d like to briefly consider the consequences for the theory of language acquisition.

If mature adult representations have magic, infants’ hypothesis space must also include the possibility of positing magical URs (as Jim Harris argues for Spanish or Jerzy Rubach argues for Polish). What might happen if the hypothesis space were not so specified? Consider the following thought experiment:

The Rigelians from Thought Experiment #1 did not do a good job sterilizing their space ships. (They normally just lick the flying saucer real good.) Specks of Rigelian dust carry a retrovirus that infects human infants and modifies their faculty of language so that they no longer entertain magical analyses.

What then do we suppose might happen to Spanish and Polish patterns we previously identified as instances of magic? Initially, the primary linguistic data will not have changed, just the acquisitional hypothesis space. What kind of grammar will infected Spanish-acquiring babies acquire?

For Harris (and Rubach), the answer must be that infected babies cannot acquire the metaphonic patterns present in the PLD. Since there is reason to think (see, e.g., Gorman & Yang 2019:§3) that the diphthongization is the minority pattern in Spanish, it seems most likely that the children will acquire a novel grammar in which negar ‘to deny’ has an innovative non-alternating first person singular indicative *nego rather than niego ‘I deny’.

Not all linguists agree. For instance, Bybee & Pardo (1981; henceforth BP) claim that there is some local segmental conditioning on diphthongization, in the sense that Spanish speakers may be able to partially predict whether or not a stem diphthongizes on the basis of nearby segments.1 Similarly, Albright, Andrade, & Hayes (2001; henceforth AAH) develop a computational model which can extract generalizations of this sort.2 For instance, BP claim that an e followed by __r, __nt, or __rt is more likely to diphthongize, and AAH claim that a following stem-final __rr (the alveolar trill [r], not the alveolar tap [ɾ]) and a following __mb also favor diphthongization. BP are somewhat fuzzy about the representational status of these generalizations, but for AAH, who reject the magical segment analysis, they are expressed by a series of competing rules.

I am not yet convinced by this proposal. Neither BP nor AAH give the reader any general sense of the coverage of the segmental generalizations they propose (or, in the case of AAH, that their computational model discovers): I’d like to know basic statistics like precision and recall for existing words. Furthermore, AAH note that their computational model sometimes needs to fall back on “word-specific rules” (their term), rules in which the segmental conditioning is an entire stem, and I’d like to know how often this is necessary.3 Rather than reporting coverage, BP and AAH instead correlate their generalizations with the results of wug-tasks (i.e., nonce word production tasks) performed by Spanish-speaking adults. The obvious objection here is that no evidence (or even an explicit linking hypothesis) links adults’ generalizations about nonce words in a lab to children’s generalizations about novel words in more naturalistic settings.
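The kind of coverage statistic I have in mind is easy to state. The sketch below computes precision and recall for a hypothetical segmental predictor ("diphthongize e before __nt") over a handful of labeled stems; the predictor, the stems, and the labels are all illustrative placeholders, not BP's or AAH's actual generalizations or data.

```python
# Toy precision/recall for a hypothetical diphthongization predictor
# against labeled (stem, diphthongizes?) pairs. All data is made up.

def precision_recall(predict, labeled):
    tp = sum(1 for stem, diph in labeled if predict(stem) and diph)
    fp = sum(1 for stem, diph in labeled if predict(stem) and not diph)
    fn = sum(1 for stem, diph in labeled if not predict(stem) and diph)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

predict = lambda stem: "ent" in stem  # "e before __nt" predictor
labeled = [("sent", True), ("tent", True), ("dent", False), ("neg", True)]
p, r = precision_recall(predict, labeled)
assert (p, r) == (2 / 3, 2 / 3)
```

Reporting numbers of this sort for the actual generalizations, over the actual Spanish lexicon, is what I mean by giving a sense of coverage.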

However, I want to extend an olive branch to linguists who are otherwise inclined to agree with BP and AAH. It is entirely possible that children do use local segmental conditioning to learn the patterns linguists analyzed with magical segments and/or morphs, even if we continue to posit magic segments or morphs. It is even possible that sensitivity to this segmental conditioning persists into adulthood, as reflected in the aforementioned wug-tasks. Local segmental conditioning might be an example of domain-general pattern learning, and might be likened to sound symbolism, such as the well-known statistical tendency for English words beginning in gl- to relate to “light, vision, or brightness” (Charles Yang, p.c.), insofar as both types of patterns reduce the apparent arbitrariness of the lexicon. I am also tempted to identify both local segmental conditioning and sound symbolism as examples of third factor effects (in the sense of Chomsky 2005). Chomsky identifies three factors in the design of language: the genetic endowment, “experience” (the primary linguistic data), and finally “[p]rinciples not specific to the faculty of language”. Some examples of third factors (as these principles not specific to the faculty of language are called) given in the paper include domain-general principles of “data processing” or “data analysis” and biological constraints, whether “architectural”, “computational”, or “developmental”. I submit that general-purpose pattern learning might be an example of domain-general “data analysis”.

As it happens, we do have one way to probe the coverage of local segmental conditioning. Modern sequence-to-sequence neural networks, arguably the most powerful domain-general string pattern learning tools known to us, have been used for morphological generation tasks. For instance, in the CoNLL-SIGMORPHON 2017 shared task, neural networks were used to predict the inflected form of various words given a citation form and a morphological specification. For example, given the pair (dentar, V;IND;PRS;1;SG), the models have to predict diento ‘I am teething’. Very briefly, these models, as currently designed, are much like babies infected with the Rigelian retrovirus: their hypothesis space does not include “magic” segments or lexical diacritics, and they must rely solely on local segmental conditioning. It is perhaps not surprising, then, that they misapply diphthongization in Spanish (e.g., *recolan for recuelan ‘they re-strain’; Gorman et al. 2019) or yer deletion in Polish when presented with previously unseen lemmata. But it is an open question how closely these errors pattern with those made by children, or with adults’ behaviors in wug™-tasks.
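To make the setup concrete, here is a minimal sketch of how inflection generation is framed as a string-to-string task in the shared-task setting described above. The function name and the exact serialization are my own illustrative choices (systems differ in how they interleave characters and feature symbols), but the key point is faithful to the text: the model sees only characters and tag symbols, with no “magic” segments or lexical diacritics.

```python
def encode_example(lemma: str, tag: str) -> list[str]:
    """Serialize a (lemma, tag) pair as a single source sequence.

    The lemma is split into characters; the semicolon-delimited
    feature bundle is kept as atomic tokens. A sequence-to-sequence
    model maps this source sequence to the character sequence of
    the inflected form.
    """
    return list(lemma) + tag.split(";")


source = encode_example("dentar", "V;IND;PRS;1;SG")
target = list("diento")  # the form the model must predict

print(source)  # ['d', 'e', 'n', 't', 'a', 'r', 'V', 'IND', 'PRS', '1', 'SG']
print(target)  # ['d', 'i', 'e', 'n', 't', 'o']
```

Nothing in the source sequence distinguishes dentar, which diphthongizes, from a look-alike verb that does not; the model can only generalize from local segmental context.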

Acknowledgments

I thank Charles Yang for drawing my attention to some of the issues discussed above.

Endnotes

  1. Similarly, Rysling (2016) argues that Polish yers are epenthesized to avoid certain branching codas, though she admits that their appearance is governed in part by magic (according to her analysis, exceptional morphs of the Gouskova/Pater variety).
  2. Later versions of this model developed by Albright and colleagues are better known for popularizing the notion of “islands of reliability”.
  3. Bill Idsardi (p.c.) raises the question of whether magical URs and morpholexical rules are extensionally equivalent. Good question.

References

Albright, A., Andrade, A., and Hayes, B. 2001. Segmental environments of Spanish diphthongization. UCLA Working Papers in Linguistics 7: 117-151.
Bybee, J., and Pardo, E. 1981. Morphological and lexical conditioning of rules: experimental evidence from Spanish. Linguistics 19: 937-968.
Chomsky, N. 2005. Three factors in language design. Linguistic Inquiry 36(1): 1-22.
Gorman, K. and Yang, C. 2019. When nobody wins. In Franz Rainer, Francesco Gardani, Hans Christian Luschützky and Wolfgang U. Dressler (ed.), Competition in inflection and word formation, pages 169-193. Springer.
Gorman, K., McCarthy, A.D., Cotterell, R., Vylomova, E., Silfverberg, M., Markowska, M. 2019. Weird inflects but okay: making sense of morphological generation errors. In Proceedings of the 23rd Conference on Computational Natural Language Learning, pages 140-151.
Rysling, A. 2016. Polish yers revisited. Catalan Journal of Linguistics 15: 121-143.

Thought experiment #1

A non-trivial portion of what we know about the languages we speak includes information about lexically-arbitrary behaviors, behaviors that are specific to certain roots and/or segments and absent in other superficially-similar roots and/or segments. One of the earliest examples is the failure of English words like obesity to undergo Chomsky & Halle’s (1968: 181) rule of trisyllabic shortening: compare serene–serenity to obese–obesity (Halle 1973: 4f.). Such phenomena are very common in the world’s languages. Some of the well-known examples include Romance mid-vowel metaphony and the Slavic fleeting vowels, which delete in certain phonological contexts.1

Linguists have long claimed (e.g., Harris 1969) that one cannot predict whether a Spanish e or o in the final syllable of a verb stem will or will not undergo diphthongization (to ie or ue, respectively) when stress falls on the stem rather than the desinence. For instance, negar ‘to deny’ diphthongizes (niego ‘I deny’, *nego) whereas the superficially similar pegar ‘to stick to s.t.’ does not (pego ‘I stick to s.t.’, *piego). There is no reason to suspect that the preceding segment (n vs. p) has anything to do with it; the Spanish speaker simply needs to memorize which mid vowels diphthongize.2 The same is arguably true of the Polish fleeting vowels known as yers, which delete in, among other contexts, the genitive singular (gen.sg.) of masculine nouns. Thus sen ‘dream’ has a gen.sg. snu, with deletion of the internal e, whereas the superficially similar basen ‘pool’ has a gen.sg. basenu, retaining the internal e (Rubach 2016: 421). Once again, the Polish speaker needs to memorize whether or not each e deletes.
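The logic of the preceding paragraph can be illustrated with a toy model (not a serious analysis): because negar and pegar are segmentally identical in the relevant respects, any procedure that derives niego but pego must consult a memorized, lexically-specific flag. All names and the simplistic morphology below are invented for illustration.

```python
DIPHTHONG = {"e": "ie", "o": "ue"}

# infinitive -> (stem, infinitive ending, does the stem vowel diphthongize?)
# The boolean is the "magic": it cannot be predicted from the segments.
LEXICON = {
    "negar": ("neg", "ar", True),   # niego 'I deny'
    "pegar": ("peg", "ar", False),  # pego 'I stick to s.t.'
}


def first_singular(infinitive: str) -> str:
    """Return the 1sg. present indicative, in which stress falls on the stem."""
    stem, _, magic = LEXICON[infinitive]
    if magic:
        # Replace the last mid vowel of the stem with its diphthong.
        for i in range(len(stem) - 1, -1, -1):
            if stem[i] in DIPHTHONG:
                stem = stem[:i] + DIPHTHONG[stem[i]] + stem[i + 1:]
                break
    return stem + "o"


print(first_singular("negar"))  # niego
print(first_singular("pegar"))  # pego
```

Delete the boolean from the lexical entries and the two verbs become indistinguishable; this is the sense in which the magic must be encoded somehow.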

So as to not presuppose a particular analysis, I will refer to segments with these unpredictable alternations—diphthongization in Spanish, deletion in Polish—as magical. Exactly how this magic ought to be encoded is unclear.3 One early approach was to exploit the feature system so that they were underlyingly distinct from non-magical segments. These “exploits” might include mapping magical segments onto gaps in the surface segmental inventory, underspecification, or simply introducing new features. Nowadays, phonologists are more likely to use prosodic prespecification. For instance, Rubach (1986) proposes that the Polish yers are prosodically defective compared to non-alternating e.4 Others have claimed that magic resides in the morph, not the segment.

Regardless of how the magic is encoded, it is a deductive necessity that it be encoded somehow. Clearly something is representationally different in negar and pegar, and sen and basen. Any account which discounts this will be descriptively inadequate. To make this a bit clearer, consider the following thought experiment:

We are contacted by a benign, intelligent alien race, carbon-based lifeforms from the Rigel system with feliform physical morphology and a fondness for catnip. Our scientists observe that they exhibit a strange behavior: when they imbibe fountain soda, their normally-green eyes turn yellow, and when they imbibe soda from a can, their eyes turn red. Scientists have not yet been able to determine the mechanisms underlying these behaviors.

What might we reason about the aliens’ seemingly magical soda sense? If we adopt a sort of vulgar uniformitarianism, one which rejects outlandish explanations like time travel or mind-reading, then the only possible explanation remaining to us is that there really is something chemically distinct between the two classes of soda, and the Rigelian sensory system is sensitive to this difference.

Really, this deduction isn’t so different from the one made by linguists like Harris and Rubach: both observe different behaviors and posit distinct entities to explain them. Of course, there is something ontologically different between the two types of soda and the two types of Polish e. The former is a purely chemical difference; the latter arises because the human language faculty turns primary linguistic data, through the epistemic process we call first language acquisition, into one type of meat (brain tissue), and that type of meat makes another type of meat (the articulatory apparatus) behave in a way that, all else held equal, will recapitulate the primary linguistic data. But both of these deductions are equally valid.

Endnotes

  1. Broadly-similar phenomena previously studied include fleeting vowels in Finnish, Hungarian, Turkish, and Yine, ternary voice contrasts in Turkish, possessive formation in Huichol, and passive formation in Māori.
  2. For simplicity I put aside the arguments by Pater (2009) and Gouskova (2012) that morphs, not segments, are magical. While I am not yet convinced by their arguments, everything I have to say here is broadly consistent with their proposal.
  3. This is yet another feature of language that is difficult to falsify. But as Ollie Sayeed once quipped, the language faculty did not evolve to satisfy a vulgar Popperian falsificationism.
  4. Specifically, Rubach assumes that the non-alternating e’s have a prespecified mora, whereas the alternating e’s do not.

References

Chomsky, N. and Halle, M. 1968. The Sound Pattern of English. Harper & Row.
Gouskova, M. 2012. Unexceptional segments. Natural Language & Linguistic Theory 30: 79-133.
Halle, M. 1973. Prolegomena to a theory of word formation. Linguistic Inquiry 4: 3-16.
Harris, J. 1969. Spanish Phonology. MIT Press.
Pater, J. 2009. Morpheme-specific phonology: constraint indexation and inconsistency resolution. In S. Parker (ed.), Phonological Argumentation: Essays on Evidence and Motivation, pages 123-154. Equinox.
Rubach, J. 1986. Abstract vowels in three-dimensional phonology: the yers. The Linguistic Review 5: 247-280.
Rubach, J. 2016. Polish yers: Representation and analysis. Journal of Linguistics 52: 421-466.

Asymmetries in Latin glide formation

Let us assume, as I have in the past, that the Classical Latin glides [j, w] are allophones of the short high monophthongs /i, u/. Then, any analysis of this allophony must address the following four asymmetries between [j] and [w]:

  1. Intervocalic /i/ is realized as [j.j], as in peior [pej.jor] ‘worse’; intervocalic /u/ is realized as a simple [w].
  2. Intervocalically, /iu/ is realized as [jw], as in laeua [laj.wa] ‘left, leftwards’ (fem. nom.sg.), but /ui/ is realized as [wi], as in pauiō [pa.wi.oː] ‘I beat’.
  3. /u/ preceded by a liquid and followed by a vowel is also realized as [w], as in ceruus [ker.wus] ‘stag’ and silua [sil.wa] ‘forest’, but /i/ is never realized as a glide in this position.
  4. There are two cases in which [u] alternates with [w] (the deadjectival suffix /-u-/ is realized as /-w-/ when preceded by a liquid, as in caluus [kal.wus] ‘bald’, and the perfect suffix /-u-/ is realized as /-w-/ in “thematic” stems like cupīuī [ku.piː.wiː] ‘I desired’); there are no alternations between [i] and [j].

What rules give rise to these asymmetries?
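Whatever the analysis, it must at least cover the data above. The following sketch simply encodes the cited forms as underlying–surface pairs (assuming the /i, u/ analysis in the text, with broad ASCII-ish transcription) and checks an arbitrary candidate realization function against them; the harness and its names are my own, not a proposed analysis.

```python
# Observed mappings from (broad) underlying forms to surface forms,
# keyed to the four asymmetries in the text.
OBSERVATIONS = {
    "peior":  "pej.jor",   # (1) intervocalic /i/ doubles: [j.j]
    "laeua":  "laj.wa",    # (2) /iu/ -> [jw] ...
    "pauio":  "pa.wi.oː",  # ... but /ui/ -> [wi]
    "ceruus": "ker.wus",   # (3) postliquid prevocalic /u/ glides
    "silua":  "sil.wa",    #     (but /i/ never does here)
    "caluus": "kal.wus",   # (4) [u] ~ [w] alternation; no [i] ~ [j]
}


def mismatches(realize):
    """Return the observations a candidate realization function gets wrong."""
    return {ur: (got, sr)
            for ur, sr in OBSERVATIONS.items()
            if (got := realize(ur)) != sr}


# A deliberately naive candidate (the identity map) fails on all six forms:
print(len(mismatches(lambda ur: ur)))  # 6
```

A proposed rule system passes exactly when `mismatches` returns an empty dict, so the harness makes the question in the text operational without presupposing an answer.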

A theory of error analysis

Manual error analyses can help to identify the strengths and weaknesses of computational systems, ultimately suggesting future improvements and guiding development. However, they are often treated as an afterthought or neglected altogether. In three recent papers, my collaborators and I have slowly been developing what might be called a theory of error analysis. The systems evaluated include:

  • number normalization (Gorman & Sproat 2016); e.g., mapping 97000 onto quatre-vingt-dix-sept mille,
  • inflection generation (Gorman et al. 2019); e.g., mapping pairs of citation form and inflectional specification like (aufbauen, V;IND;PRS;2) onto inflected forms like baust auf, and
  • grapheme-to-phoneme conversion (Lee et al. 2020); e.g., mapping orthographic forms like almohadilla onto phonemic or phonetic forms like /almoaˈdiʎa/ and [almoaˈðiʎa].

While these are rather different types of problems, the systems all have one thing in common: they generate linguistic representations. I discern three major classes of error such systems might make.

  • Target errors are only apparent errors; they arise when the gold data, the data to be predicted, are linguistically incorrect. This is particularly likely to arise with crowd-sourced data, though such errors are also present in professionally annotated resources.
  • Linguistic errors are caused by misapplication of independently attested linguistic behaviors to the wrong input representations.
    • In the case of number normalization, these include using the wrong agreement affixes in Russian number names; e.g., nom.sg. *семьдесят миллион for gen.pl. семьдесят миллионов ‘seventy million’ (Gorman & Sproat 2016:516).
    • In inflection generation, these are what Gorman et al. 2019 call allomorphy errors; e.g., overapplying ablaut to the Dutch weak verb printen ‘to print’ to produce a preterite *pront instead of printte (Gorman et al. 2019:144).
    • In grapheme-to-phoneme conversion, these include failures to apply allophonic rules; e.g., in Korean, 익명 ‘anonymity’ is incorrectly transcribed as [ikmjʌ̹ŋ] instead of [iŋmjʌ̹ŋ], reflecting a failure to apply a rule of obstruent nasalization not indicated in the highly abstract hangul orthography (Lee et al. under review).
  • Silly errors are those errors which cannot be analyzed as either target errors or linguistic errors. These have long been noted as a feature of neural network models (e.g., Pinker & Prince 1988, Sproat 1992:216f. for discussion of *membled) and occur even with modern neural network models.

I propose that this tripartite distinction is a natural starting point when building an error taxonomy for many other language technology tasks, namely those that can be understood as generating linguistic sequences.
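The tripartite taxonomy can be made concrete as a small decision procedure. The two boolean inputs (whether the gold form is itself wrong, and whether the error instantiates an independently attested process misapplied to the wrong input) must come from manual annotation; the names below are a sketch of mine, not an interface from the cited papers.

```python
from enum import Enum
from typing import Optional


class ErrorClass(Enum):
    TARGET = "target error"          # the gold data are wrong
    LINGUISTIC = "linguistic error"  # an attested process, misapplied
    SILLY = "silly error"            # neither of the above


def classify(hypothesis: str, gold: str,
             gold_is_wrong: bool,
             attested_process: bool) -> Optional[ErrorClass]:
    """Classify one system output; None means the output is not an error."""
    if hypothesis == gold:
        return None
    if gold_is_wrong:
        return ErrorClass.TARGET
    if attested_process:
        return ErrorClass.LINGUISTIC
    return ErrorClass.SILLY


# Dutch weak verb: ablaut is a real process of Dutch, applied to the wrong verb.
print(classify("pront", "printte",
               gold_is_wrong=False, attested_process=True))
# ErrorClass.LINGUISTIC
```

The ordering of the checks reflects the definitions above: target errors are diagnosed before linguistic ones, and silly errors are the residue once both are ruled out.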

References

K. Gorman, A. D. McCarthy, R. Cotterell, E. Vylomova, M. Silfverberg, and M. Markowska (2019). Weird inflects but OK: making sense of morphological generation errors. In CoNLL, 140-151.
K. Gorman and R. Sproat (2016). Minimally supervised number normalization. Transactions of the Association for Computational Linguistics 4: 507-519.
J. L. Lee, L. F.E. Ashby, M. E. Garza, Y. Lee-Sikka, S. Miller, A. Wong, A. D. McCarthy, and K. Gorman (under review). Massively multilingual pronunciation mining with WikiPron.
S. Pinker and A. Prince (1988). On language and connectionism: analysis of a parallel distributed processing model of language acquisition. Cognition 28(1–2):73–193.
R. Sproat (1992). Morphology and computation. Cambridge: MIT Press.

Is formal phonology in trouble?

I recently attended the 50th meeting of the North East Linguistics Society (NELS), which is not so much a society as a prestigious generative linguistics conference. In recognition of the golden jubilee, Paul Kiparsky gave a keynote in which he managed to reconstruct nearly all of the NELS 1 schedule, complete with at least one handout, from a talk by Anthony Kroch and Howard Lasnik. Back then, apparently, handouts were just examples: no prose.

In his talk, Paul showed a graph indicating that phonology accounts for an increasingly small number of papers at NELS, and that this decline has actually worsened over the last few decades. Paul proposed something of an explanation: that the introduction of Optimality Theory (OT), and its rejection of “derivational” explanations, has introduced a lasting schism between phonology and the other subareas, and that syntacticians and semanticists are simply uncomfortable with the non-derivational nature of modern phonological theorizing.

With all due respect, I do not find this explanation probable. As Paul himself admits, most OT theorizing (including his own) now rejects the earlier rejection of derivational explanations. Moreover, modern syntactic theories are a heady brew of derivational (phases, copy theory, etc.) and non-derivational (move α, uninterpretable feature matching, etc.) thinking. Finally, it’s not really clear why the aesthetic preferences of syntacticians (if that’s all they are) should produce the observed data, i.e., fewer phonology papers at NELS.

But I do agree that OT is the elephant in the room, responsible for an enormous amount of fragmentation in phonological theorizing.

I would liken Prince & Smolensky’s “founding document” (1993) to Martin Luther’s Ninety-five Theses. Scholars believe that Luther wished to start a scholarly theological debate rather than a popular revolution, and I suspect the founders of OT were similarly surprised by the enormous impact their proposal had on the field. Luther’s magnificent heresy may have failed to move the Church in the directions he wished, but he is the father of hundreds if not thousands of Protestant sects, each with their own new and vibrant “heresies”. The founders of OT, I think, are similarly unable to put the cat back in the bag (if they wish to at all).

In my opinion, OT’s early rejection of derivationalism has been an enormous empirical failure, and full-blown functionalist-externalist thinking, one of the first post-OT heresies (let us liken it to Calvinism), is ontologically incoherent. That said, I would encourage OT believers to try more theory-comparison. The article on “Christian denominations” in Diderot and d’Alembert’s Encyclopédie begins with the obviously insincere suggestion that someone ought to study which of the various Protestant sects is most likely to lead to salvation. But I would sincerely love to find out which variant of OT is in fact most optimal.

[Thanks to Charles Reiss for discussion.]