Why we armchair

In the last two years or so I have gradually transitioned away from experimental-behavioral and computational work towards a larger proportion of what used to be called “pencil-and-paper” research: the development of theories, formalisms, and analyses. (“Description” also is pencil-and-paper in the relevant sense, but I am not really trained as a descriptive linguist.) While there are several reasons for this, one is the rather poor state of science funding in the US, which suggests that we may be entering a moneyball era for linguistics.

When describing or presenting my pencil-and-paper work on phonology and its interfaces with morphology (which, to be fair, is mostly done on my trusty desktop computer, and which is sometimes quantitative), a few colleagues have suggested that I ought to be doing fine-grained acoustic or articulatory phonetics instead. I find this suggestion vexing. Consider something like my analysis of Spanish “raising verbs” in Gorman & Reiss (in press), which in turn is used to illustrate a series of formal-theoretical proposals under the umbrella of the theory of Logical Phonology. What could phonetic analysis contribute to this discussion? It’s obvious that, e.g., p[i]do ‘I ask’ has the same surface vowel as v[i]vo ‘I live’, whereas p[e]dir ‘to ask’ has a different surface vowel, one that is the same as the surface vowel in sum[e]rgir ‘to submerge’. There are of course subtle differences between renditions and speakers, but there’s no reason to think those differences are relevant to the analysis of raising verbs. Anyone reading this is welcome to show that I’m wrong, but I for one think it’d be a waste of time to so much as check.

Similarly, a few colleagues have suggested that I ought to be doing human subjects experiments to figure out how such things (i.e., Spanish raising verb alternations) work. I too find this vexing. Now one can do a wug-test, and people have, but it’s not really clear what one gains from this, since we don’t have an agreed-upon linking hypothesis. Indeed, the most likely hypothesis is that adults are using a mix of task models, some of which might be relevant to our account of raising verbs, but some of which surely aren’t. How any of this might link up to the relevant linguistic notions like underspecification, morphophonological rules, suppletion, etc.—whatever you think the relevant notions might be—is unknown. A colleague thought the task ought to be some kind of online processing experiment, exploiting an unspoken form of what we might call a “derivational theory of complexity”, a totally discredited idea from before I was born. Similar issues plague neuroimaging work. The sorts of things that we can instrument in the brain at present—single-neuron firing rates as measured by single-cell recordings, magnetic fields produced by boxes of 50,000 or so neurons as measured by MEG, blood oxygen levels in boxes of a million or so neurons as measured by fMRI, the smeared electrical currents measured by EEG, and so on—simply do not match the “grain size” of the linguistic constructs we are interested in (Poeppel & Embick 2005): a single neuron is almost surely too small to store a “raising rule” (whatever sort of thing that is), and an fMRI voxel is far, far too large.

I am happy to have phonetician, experimental linguist, and neurolinguist colleagues; I just think that it’s sort of their j-o-b to figure out how to translate interesting linguistic ideas into something their tools can test, and I somewhat resent the implication that I am leaving phonetic or experimental low-hanging fruit unpicked. In those rare cases where I myself have ideas that I think can be tested using phonetic analysis or human-subjects experiments with clear linking hypotheses, I do phonetic analysis or human-subjects experiments. Indeed, the august National Science Foundation has even funded some of these experiments. But most of the time, I don’t—we don’t—and so I theorize, formalize, or analyze instead.

References

Gorman, K. and Reiss, C. In press. Metaphony in Substance-Free Logical Phonology. Phonology.

Poeppel, D. and Embick, D. 2005. Defining the relation between linguistics and neuroscience. In Cutler, A. (ed.), Twenty-First Century Psycholinguistics: Four Cornerstones, pages 103-118. Routledge.

Italian palatalization

In his Phonology of Italian, Krämer (2009:§4.2.1) is interested in the productivity of velar palatalization before /i/-initial suffixes, such as the masculine noun plural /-i/. Palatalization obtains, for example, in amico-amici [aˈmiːko, aˈmiːtʃi] ‘friend(s)’, but not in cuoco-cuochi [ˈkwɔːko, ˈkwɔːki] ‘cook(s)’. Krämer (henceforth K) further claims that non-palatalization has a much higher type frequency.

K performs a small experiment in which ten adult native speakers are presented with nonce words in the singular and asked to complete a sentence that requires them to form the /-i/ plural. Four subjects never palatalized; one palatalized all plurals; and the other five produced a mix of the two strategies. Summarizing this result, Krämer (2012:125) concludes: “Thus, in Italian it is a personal decision whether velar palatalization is productive or not.”

I am not sure I agree. The most straightforward interpretation of this data, I think, is that the subjects used a mix of different task models. Some subjects may have been reasoning about whether palatalization is actually productive (a true “grammatical task model”), which for me means that the generalization is encoded (or not encoded, as seems more likely here) so as to apply to arbitrary words. Others may have been guessing based on form similarity to existing words (a “dictionary task model”), and still others may have used a mix of the two strategies. It is perhaps not surprising that adults can make use of the dictionary task model, because one can, with some conscious effort, think of phonemically or semantically related real words, and it’s easy to imagine deciding whether or not to palatalize a nonce word based on the behavior of similar real words.

I think, unfortunately, that this is an unavoidable problem when wug-testing adults. I submit that the conscious analogizing abilities of adults are probably not relevant to questions of productivity, simply because I don’t think that’s what productivity is. But I don’t know of any way to prevent adult participants from using a dictionary task model. Thus linguists and reviewers should be more skeptical about the utility of adult wug-tasks.

Schütze (2005) makes a similar point with what we might call wug-rating tasks. In such tasks, speakers are asked to assign a well-formedness rating (e.g., on a Likert scale) to candidate inflected forms of nonce words. Arguably, this setting encourages speakers to adopt a highly permissive variant of the dictionary task model, which might be framed as asking “could such a word ever have such a plural?” Answers to such questions are often interesting to the linguist, but I think they are quite distinct from the question of productivity that K and others wish to study.

References

Krämer, M. 2009. The Phonology of Italian. Oxford University Press.
Krämer, M. 2012. Underlying Representations. Cambridge University Press.
Schütze, Carson. 2005. Thinking about what we are asking speakers to do. In Kepser, Stephan and Reis, Marga (eds.), Linguistic Evidence: Empirical, Theoretical, and Computational Perspectives, pages 457-485. De Gruyter Mouton.

Two conjectures about exceptionality

Kisseberth’s (1970) theory of exceptionality is arguably one of the most expressive yet proposed. Roughly, Kisseberth proposes that for every rule R, every morpheme bears two equipollent features, one indicating whether the morpheme is a potential target (±R Target) and another indicating whether it is a potential trigger (±R Trigger). R then applies if and only if its structural description is met, the target morpheme is +R Target, and the trigger morpheme is +R Trigger.1
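As a toy illustration of just how expressive this is (the dict encoding and names below are mine, not Kisseberth’s notation), the application condition can be stated as a single predicate:

```python
# Toy model of Kisseberth-style exceptionality. Each morpheme carries,
# for each rule R, two equipollent features: (R, "Target") and (R, "Trigger").
# R applies iff its structural description (SD) is met, the target morpheme
# is [+R Target], and the trigger morpheme is [+R Trigger].

def rule_applies(rule, target_morpheme, trigger_morpheme, sd_met):
    return (sd_met
            and target_morpheme.get((rule, "Target"), False)
            and trigger_morpheme.get((rule, "Trigger"), False))

regular = {("R", "Target"): True, ("R", "Trigger"): True}
exceptional = {("R", "Target"): False, ("R", "Trigger"): True}  # a non-undergoer

print(rule_applies("R", regular, regular, sd_met=True))      # True
print(rule_applies("R", exceptional, regular, sd_met=True))  # False
```

The expressivity is the point: every morpheme can independently opt in or out of both roles for every rule in the grammar.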

I conjecture that Inkelas and colleagues’ notion of inalterability as prespecification (IAP), as implemented in Logical Phonology (LP), completely eliminates the need for morphemic ±R Target. Rather, particular morphemes’ ability to undergo R can be encoded via underspecification of individual target segments in those morphemes, rendering those segments mutable to feature-filling processes in contrast to fully-specified inalterable segments.2 There are at least a few cases—e.g., Turkish ternary voice alternations (Inkelas & Orgun 1995) and k-deletion (Gorman & Reiss in press a), Polish yer deletion (Rubach 2013)—where it seems that target exceptionality cannot be expressed as a morpheme-level property, so we have good reason to prefer the “exceptional segments” of IAP/LP to “exceptional morphemes” with respect to targeting.3

LP generalizes the IAP notion from targets to triggers, using underspecification to render possible triggers quiescent in contrast to fully-specified catalytic segments (see, e.g., Gorman & Reiss in press b). However, I conjecture that a complete theory will still need rules which are triggered in the context of specific morphemes or morphosyntactic contexts.

For example, consider umlaut in Standard German. Umlaut targeting is implemented by leaving o and u (which mutate to ö [ø] and ü [y], respectively) underspecified for Back; some additional complexities are raised by umlauting au and a (which mutate to äu [ɔʏ] and ä [ɛ], respectively). The primary umlaut rule is thus a unification rule which specifies these segments as -Back; separate rules fill in additional details for au and a.

Umlaut triggering is more complex. The triggers are particular suffixes: noun plurals in -er (e.g., Würmer ‘worms’), -e (Nüsse ‘nuts’), and zero (Mütter ‘mothers’), the diminutive -chen (Häuschen ‘little house’), comparatives and superlatives of adjectives (größerer ‘bigger’, am größten ‘biggest’), the 2nd/3rd singular present indicative (du fängst ‘you catch’, er fängt ‘s/he catches’), and a few others. These suffixes have nothing in common morphosyntactically, and they exclude related suffixes like noun plural -(e)n or diminutive -lein. And crucially, the triggering suffixes have no common segments on the surface. It is true that many of these suffixes once contained an *i, but many others never did, and Janda (1998) argues that umlaut triggering had a morphemic characteristic even in the earliest written German. LP could of course posit that these suffixes contain /i/-triggers which never surface—such a grammar is computable, and Bach & King (1970) try to make a proposal of this form work—but Gorman & Reiss (2025) suggest that such analyses are not considered by the language acquisition device (LAD).4 Thus we must admit the possibility that umlaut is triggered by specific morphemes, in line with Kisseberth’s ±R Trigger.

A counterexample to the first conjecture would involve some case where targeting must be a morphemic property—what such an example would look like, I don’t know—and a counterexample to the second conjecture would involve an argument that all apparent morphemic triggering is in fact computed within the narrow phonology.

Endnotes

  1.  One might imagine that some of these specifications are filled in by redundancy rules. For example, if R is productive (however that’s encoded…), maybe +R Target and +R Trigger are defaults, but the opposite is true if a morpheme lacks the phonological or morphosyntactic properties needed to target and/or trigger R, respectively. But Kisseberth doesn’t discuss this matter.
  2. In contrast, when R is a segment deletion rule, a segment targeted by R is fully specified, for reasons we discuss in Gorman & Reiss (in press a).
  3. Of course, LP also assumes that children are epistemically bound to provide a narrow phonological analysis (like the IAP pattern), so this does not require further motivation.
  4. Gorman & Reiss (2025) specifically propose a LAD principle no wandering targets; to rule out the /i/-deletion analysis, one would want to generalize that principle from targets to triggers. I see no obstacles to doing so.

References

Bach, E. and King, R. D. 1970. Umlaut in Modern German. Glossa 4:3-21.
Gorman, Kyle and Reiss, Charles. 2025. How not to acquire exchange rules in Logical Phonology. In Proceedings of the 2025 annual conference of the Canadian Linguistic Association.
Gorman, Kyle and Reiss, Charles. In press a. Natural class reasoning in segment deletion rules. Paper presented at the 56th annual meeting of the North East Linguistic Society, to appear in the proceedings.
Gorman, Kyle and Reiss, Charles. In press b. Metaphony in Substance-Free Logical Phonology. Phonology.
Inkelas, Sharon and Orgun, Cemil Orhan. 1995. Level ordering and economy in the lexical phonology of Turkish. Language 71: 763-793.
Janda, Richard D. 1998. German umlaut: Morpholexical all the way down from OHG to NHG (Two Stützepunkte for Romance metaphony). Rivista di Linguistica 10: 1563-232.
Kisseberth, Charles W. 1970. The treatment of exceptions. Papers in Linguistics 2: 44-58.
Rubach, Jerzy. 2013. Exceptional segments in Polish. Natural Language & Linguistic Theory 31: 1139-1163.

Get rich quick

Almost four years ago, I started working on an abstract with Charles Reiss arguing that the implicit “feature minimization” approach to specifying phonological rules was unworkable. At last, this work is accepted to appear in a special issue of Glossa under the title “Get rich quick: Why kids don’t need Occam’s Razor” and we have posted a draft to LingBuzz.

A callout post

A few years ago I wrote to an eminent phonologist noting a small omission in a publicly circulated manuscript of theirs marked “to appear in [major journal]”. They took the suggestion politely. In a follow-up, I asked when the manuscript would appear in [major journal]. They—the eminent phonologist, who I am keeping gender-neutral purely for anonymity—said they had no idea: the paper had been accepted to said major journal with minimal revisions years and years earlier, but they’d never bothered to send in the final version of the manuscript to the editors.

I realize it’s easy to let work, even good work, gather a bit of dust. I’m guilty of this myself at times, and I imagine it happens even more when one is eminent. But what this essay supposes is that it is bad for work, particularly by eminent linguists, to sit in public view without peer review, for years or even indefinitely.

Let me provide an example. I am working on a revision of a squib in which I provide a simple analysis of [phenomenon]. My motivation for interest in [phenomenon] is more or less that I saw a talk, a few years ago, presenting an alternative analysis for [phenomenon] using [bizarre theory]. While I am not ready to say that [bizarre theory] is “completely mad” (as one of my less-eminent colleagues has it), it was promulgated in a manuscript by an eminent phonologist (a different one this time) and that eminent phonologist’s eminent former student. That manuscript has, by now, circulated for a decade without any sort of peer review, but it has racked up hundreds of citations, and while certainly interesting, it raises many, many more questions about [bizarre theory] than it answers. This is a roundabout way to say, then, that [bizarre theory] has at its foundation a paper that would not make it through double-blind peer review in its current sketchy form. That’s not to say that a full explication of [bizarre theory] wouldn’t make it into print, but I suspect a peer-reviewed version would be much more valuable for the field than the decade-old manuscript we actually have.

So to be clear: I think these eminent linguists should just polish up these manuscripts and send them off for peer review. I for one have never had a paper which wasn’t substantially improved by peer review. And in particular, I think it’s borderline unethical for these eminent linguists to treat unpublished manuscripts as good enough for their graduate students to base a dissertation on, for instance, if they’re unwilling to even debate the work with their peers.

The lexical/postlexical distinction in Logical Phonology

According to an old idea developed most carefully by Kiparsky (1982) in the framework of Lexical Phonology, there is a fundamental distinction between lexical and postlexical phonological computation, with the former necessarily applying before the latter. The following distinctions are proposed:

(1) Lexical processes must (or may) be cyclic, reapplying after every word-formation process; postlexical processes cannot be.
(2) Lexical processes are usually feature-filling (or structure-building), and must be so when applying in non-derived environments (i.e., on the first cycle); postlexical processes may be either feature-filling or feature-changing (or structure-changing).
(3) Lexical processes may have morphemic/lexical exceptions; postlexical processes never do.

I suggest the empirical effects of (2-3) follow more or less directly from the assumptions of Logical Phonology (see especially Gorman & Reiss in press a). This is perhaps not surprising—Logical Phonology is influenced by certain strands of Lexical Phonology. In Lexical Phonology, the lexical/postlexical distinction, its connection to cyclicity as in (1), and its connection to exceptionality as in (3) are all axiomatic (i.e., stipulated). Logical Phonology, in contrast, does not recognize the lexical/postlexical distinction, and it similarly treats the feature-filling/feature-changing distinction (and the related non-derived environment blocking; see Gorman and Reiss in press b and Reiss 2025), as in (2), as derived rather than axiomatic. Yet, Logical Phonology is largely capable of deriving the empirical effects of (2-3). The relevant assumptions, justified in various places in the Logical Phonology canon, are as follows:

(4) Underspecification: Underspecification is permitted.
(5) Epistemic boundedness: Elements which appear to be identical on the surface but behave differently with respect to the morphophonology (i.e., which constitute putative exceptions) are underlyingly distinct. Where appropriate, underspecification is deployed to encode underlying distinctions so posited.1
(6) Specificity: Suppose that segment /G/ is more richly specified than underspecified /F/, but they agree on all features for which they are both specified (i.e., /F/ ⊂ /G/). Then, it is impossible for a phonological rule to intensionally target (or be triggered by) /F/ without also targeting (or being triggered by, resp.) /G/.
(7) Theory of possible rules: Intrasegmental phonological processes derive from either unification or subtraction rules.2
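To see what (6) buys us computationally, here is a minimal sketch in my own toy encoding (not anything from the Logical Phonology literature): segments are dicts from features to values, underspecification is a missing key, and a structural description matches a segment iff its specifications are a subset of the segment’s.

```python
# Toy model of (6), Specificity. A description matches a segment iff every
# feature specification in the description is also in the segment; a missing
# key models underspecification.

def matches(description, segment):
    return set(description.items()) <= set(segment.items())

F = {"high": "+"}               # underspecified segment
G = {"high": "+", "back": "-"}  # more richly specified: /F/ ⊂ /G/

# Any description that matches /F/ is built from /F/'s specifications,
# all of which /G/ shares, so it matches /G/ too:
for description in ({}, {"high": "+"}):
    assert matches(description, F) and matches(description, G)

# No description singles out /F/ to the exclusion of /G/:
assert not any(matches(d, F) and not matches(d, G)
               for d in ({}, {"high": "+"}, {"back": "-"}))
```

The subset check is the whole content of the condition: under intensional rule application, greater specification can only narrow the set of matching rules, never widen it.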

Let’s get (1) out of the way first. It is not clear whether this claim can be maintained: various linguists, starting with Booij and Rubach (1987), claim there are non-cyclic rules which show other symptoms of being lexical rules, including properties (2-3). Such rules have been called postcyclic. A weaker version of this, then, is simply the claim that all cyclic rules precede all non-cyclic rules, a claim which no longer ties cyclicity to any notions specific to Lexical Phonology. Logical Phonology does not have much to say about this. Logical Phonology is fully compatible with cyclicity—indeed, cyclicity is used to excellent effect in some unpublished work by Daniar Kasenov and Charles Reiss—but has little new to say about the notion.

Logical Phonology has much more to say about (2-3). Logical Phonology proposes that feature-changing (i.e., structure-changing) processes reflect subtraction rules feeding unification rules; there are no feature-changing rules per se. Thus, for a unification rule R to mutate (i.e., non-vacuously target) some segment s, it must be the case that either:

(8) Conditions on non-vacuous unification:
a. s is underlyingly underspecified with respect to one or more feature specifications in the change (right-hand-side) portion of R, or
b. s is made to be underspecified with respect to those feature specifications via a subtraction rule earlier in the derivation.
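The logic of (8) can be sketched as follows, again in my own toy encoding (feature dicts with missing keys as underspecification; this is not the formalism of the LP papers):

```python
# Unification is feature-filling: it cannot overwrite an existing,
# conflicting specification. Subtraction deletes specifications.

def unify(segment, change):
    if any(segment.get(f, v) != v for f, v in change.items()):
        return dict(segment)  # conflict: rule applies vacuously
    return {**segment, **change}

def subtract(segment, features):
    return {f: v for f, v in segment.items() if f not in features}

mutable = {"high": "+", "round": "+"}                   # no Back: case (8a)
inalterable = {"high": "+", "round": "+", "back": "+"}  # fully specified

# Unification alone mutates only the underspecified segment:
assert unify(mutable, {"back": "-"})["back"] == "-"
assert unify(inalterable, {"back": "-"})["back"] == "+"  # vacuous

# A feature-changing effect is subtraction feeding unification, as in (8b):
assert unify(subtract(inalterable, {"back"}), {"back": "-"})["back"] == "-"
```

The last line is the point: there is no feature-changing rule per se, just a subtraction rule whose output a later unification rule can fill in.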

Because Logical Phonology permits one to mix subtraction and unification rules more or less freely, derivations are not necessarily monotonic in the sense of only adding feature specifications. However, one can discern a weaker form of monotonicity (which I’ll tentatively call monotonicity of “exceptionality”) in these derivations. With respect to features that are underlyingly underspecified, as in (8a), the only thing a Logical Phonology derivation can “do” to those underspecified “slots”, speaking informally, is fill them in. As a corollary of (6), no rule can refer to the absence of features on these segments. Once such segments are no longer underspecified (due to unification), there is no way for subsequent rules to refer to their underlyingly underspecified status. Consequently, the ability of rules to interact with the underlying underspecification of segments decreases monotonically as the derivation progresses.

Logical Phonology grammars make extensive use of underlying underspecification, and the fact that a segment was underlyingly underspecified becomes less “useful” (again speaking informally) as the derivation progresses. This alone seems sufficient to predict a weak tendency for earlier rules in the derivation to be feature-filling unification, whereas later rules mix subtraction and unification to derive feature-changing processes, as in (2). And it also suffices to predict that derivationally-earlier processes are more likely to show the effects of “exceptionality”, simply because such putative exceptionality is often encoded by underlying underspecification that becomes increasingly difficult for rules to refer to, as in (3).

Above, I have stated this as a tendency, but I suspect there may be some unanticipated “escape hatches” from these predictions, via some complex series of rules, on analogy to the wandering targets proscribed in Gorman & Reiss (2025). If that is the case, I’d plead much as we do in that paper: the rules needed to implement these escape hatches may be highly unlikely to make it through the diachronic filter, or may be structurally excluded by non-trivial protocols of the language acquisition device.

Endnotes

  1. This is surely too strong, but I’m making a point here.
  2. A third type of rule, segment rules, is formalized in Gorman and Reiss (in press b).

References

Booij, Geert and Rubach, Jerzy. 1987. Postcyclic versus postlexical rules in Lexical Phonology. Linguistic Inquiry 18: 1-44.
Gorman, Kyle and Reiss, Charles. 2025. How not to acquire exchange rules in Logical Phonology. In Proceedings of the 2025 annual conference of the Canadian Linguistic Association.
Gorman, Kyle and Reiss, Charles. In press a. Metaphony in Substance-Free Logical Phonology. Phonology.
Gorman, Kyle and Reiss, Charles. In press b. Natural class reasoning in segment deletion rules. Paper presented at the 56th Annual Meeting of the North East Linguistic Society, to appear in the proceedings.
Kiparsky, Paul. 1982. Lexical Phonology and morphology. In I.-S. Yang (ed.), Linguistics in the Morning Calm, pages 3-91. Hanshin.
Reiss, Charles. 2025. Specificity and “non-derived environment blocking” in Logical Phonology. In Proceedings of the 2025 annual conference of the Canadian Linguistic Association.

You can just do things: walking to Philadelphia

So I walked to Philadelphia, from Brooklyn. It took three days in all. I was mostly curious whether it could be done, and it can.

I knew from prior experience that I can log 20-25 miles of walking in a day without any major problems, so I figured I could do a bit more, could probably do it three days in a row, and could probably do it with a backpack of under 20 pounds.

On the first and longest day, I started early and walked from home up Flatbush Ave., turning at Tillary to catch the pedestrian path over the Brooklyn Bridge. I picked up a bagel and then a ferry to Belford, NJ. I had read about people taking the only pedestrian bridge to New Jersey—the George Washington Bridge out of Washington Heights—but getting that far north and then coming back down south through the Palisades, Newark, and such adds at least another day to the process. The ferry terminal in Belford is a curious beast: it is in the middle of nowhere, but it was full of young professionals in nice clothing heading to work. I walked for a few hours through a mix of “nowhere” and some nice-looking suburbs until I hit a diner, where I paused for lunch. I then continued west through mostly exurban territory until I hit I-95 and the eastern edge of Cranbury, where there is a small cluster of motels. The last few hours were not easy on my morale: rain, which wasn’t really in the forecast, started and then intensified until there was even some distant thunder. I had packed rain gear, but there were a few places where I was walking along the shoulder of a relatively busy road. I checked into the motel, showered, consumed water and electrolytes to relieve my spasming calves, ate a hotel lobby Cobb salad, and went to bed. Day 1: 30 miles.

The next day, I stretched (and took a dip in the motel pool), surrounded the worst blister with moleskin, and headed out. I stopped in the very cute town of Cranbury for brunch at a diner. From there, it was a few miles of office parks, some farmland, and then nice suburbs where I had some shade. Eventually, I got to the outskirts of Hamilton, an urban area near Trenton, and then walked through the south side of Trenton itself, crossing over into Pennsylvania on the bridge with the “TRENTON MAKES THE WORLD TAKES” sign. At that point, my legs were pretty tired, but I still had a long way to go and was running out of daylight, having started later than planned. The next stretch was almost all on the Delaware & Lehigh towpath, which still exists even though the canal has been covered over with various causeways for roads and highways. Parts held water, others were just muddy banks, but the whole thing was a pretty nice path, mostly forested. At some point, there was no more light, but the path was clear and nobody else was around. Eventually, this dead-ended into the Bristol Pike, where I stopped at the second motel. I ended up just having some snacks and water before bed. Day 2: 27 miles.

I started off with a healthy breakfast (they exist) at Wawa, and then continued down the Pike until I hit the Philadelphia city limits. Philadelphia managed to annex its northeastern suburbs in the mid-19th century, so it stretches many miles to the northeast of Center City, and the Pike becomes Frankford Ave. This part was the only portion that had any hills to speak of. After another Wawa meal (this one less healthy), I continued into Kensington and Port Richmond as the temperature and humidity rose. These neighborhoods have never been wealthy to my knowledge, they are undeclared drug amnesty zones, and there was a really sad amount of human suffering on display; it’s hard to look at. At this point, I was in a lot of pain between my tight hips, blisters, and my arches, and I needed drinking water to cope with the heat, but there was simply nowhere I felt comfortable stopping for several miles, until I finally hit the edge of Fishtown. Since I was still on schedule, I stopped to rest for about an hour at Atlantis, The Lost Bar, probably the northeasternmost bar I used to go to when I lived in Philly, and then continued down Frankford Ave. until Girard. From there, I zigzagged a bit to Chinatown, got the shaved noodles at Nan Zhou (highly recommended), took a photo at City Hall, and finally arrived at 30th St. Station, where I caught the Keystone Line train back to New York. Day 3: 23 miles.

Pronouncing Mamdani

Throughout the primary and general elections for New York City mayor, Andrew Cuomo, among others, struggled with pronouncing the last name of the ultimate winner, Zohran Mamdani, repeatedly rendering it as what sounds like [mɑndɑni], with an unexpected [n] in the coda of the first syllable. While this error quite possibly reflects Cuomo’s apparent disinterest in other people, there is an obvious phonological basis for it. In English, there is a process by which coda nasals take on the place of a following obstruent. This can be seen (e.g., Gorman 2013:75f.) in a few potential alternations: e.g., many theorists derive English [ŋ] from underlying /ng/, and the Latinate negative prefix in- as in i[n.d]ecent has an allomorph im- as in i[m.b]alance. It is also overwhelmingly true of monomorphemic words, as in pi[m.p]le, sta[n.z]a, or mo[ŋ.k]ey. Of course there are a few exceptions, like pli[m.s]oll and scri[m.ʃ]aw, but as I show in my dissertation, they are quite rare in my sample of 6,619 English monomorphemic words. There are just two examples of [m.d] that CELEX considers monomorphemic: du[m.d]um and hu[m.d]rum. Of course, CELEX is wrong on both counts: both are reduplicative, and the [m.d] cluster occurs at the boundary between base and reduplicant.
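The adaptation pattern behind the [mɑndɑni] rendering can be sketched as a toy repair function (the orthographic transcription and tiny segment inventory are drastic simplifications of my own, not an analysis from the dissertation):

```python
# Toy nasal place assimilation: rewrite each nasal to agree in place
# with an immediately following obstruent. Inventory is deliberately tiny.
PLACE = {"m": "labial", "n": "coronal",
         "p": "labial", "b": "labial",
         "t": "coronal", "d": "coronal", "z": "coronal",
         "k": "velar", "g": "velar"}
NASALS = {"labial": "m", "coronal": "n", "velar": "ŋ"}

def assimilate(word):
    out = list(word)
    for i in range(len(out) - 1):
        nxt = out[i + 1]
        if out[i] in NASALS.values() and nxt in PLACE and nxt not in NASALS.values():
            out[i] = NASALS[PLACE[nxt]]
    return "".join(out)

print(assimilate("mamdani"))  # mandani
print(assimilate("pimple"))   # pimple (already assimilated)
```

Running the unassimilated [m.d] of Mamdani through this sketch yields exactly Cuomo’s [n.d] rendering, while native-pattern words pass through unchanged.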

All of this is a long way of saying that Cuomo and others are likely phonotactically “adapting” Mamdani’s name to the native English pattern. Of course, we don’t normally do that with “non-Anglo” names in English; we tend to render them faithfully so long as they consist of segments present in the native inventory, modulo unusually complex consonant clusters.

References

Gorman, K. 2013. Generative phonotactics. Doctoral dissertation, University of Pennsylvania.

Natural class reasoning in segment deletion rules

I posted the handout from our NELS talk yesterday here. We illustrate two points: a corollary of Logical Phonology (LP) called delete the rich, pertaining to segment deletion rules, and how LP handles apparent cases of non-derived environment blocking. In doing so, we give a relatively detailed phonology of Hungarian h and also address the famous case of Turkish velar deletion.

I’ll post the MS for the proceedings to LingBuzz once it’s ready.

Metaphony in Logical Phonology

My paper with Charles Reiss on metaphony in Logical Phonology is now accepted to appear in a special issue of Phonology. As it happens, it includes problems I originally posed here on this blog (1 2 3). I have also updated the version on LingBuzz to include various changes recommended by the reviewers and editors.