{"id":1835,"date":"2023-09-13T17:42:06","date_gmt":"2023-09-13T21:42:06","guid":{"rendered":"https:\/\/www.wellformedness.com\/blog\/?p=1835"},"modified":"2025-04-22T07:48:28","modified_gmt":"2025-04-22T11:48:28","slug":"different-functions-probabilty-probabilistic-grammar","status":"publish","type":"post","link":"https:\/\/www.wellformedness.com\/blog\/different-functions-probabilty-probabilistic-grammar\/","title":{"rendered":"The different functions of probability in probabilistic grammar"},"content":{"rendered":"<p>I have long been critical of na\u00efve interpretations of probabilistic grammar.\u00a0 To me, it seems like the major motivation for this approach derives from a na\u00efve\u2014I\u2019d say overly na\u00efve\u2014linking hypothesis mapping acceptability judgments, as elicited in Likert scale-style acceptability tasks, onto grammaticality. (See chapter 2 of <a href=\"https:\/\/www.wellformedness.com\/papers\/gorman-dissertation.pdf\">my dissertation<\/a> for a concrete argument against this.) On this approach, the probabilities are measures of wellformedness.<\/p>\n<p>It occurs to me that there are a number of ontologically distinct interpretations of grammatical probabilities of the sort produced by &#8220;maxent&#8221;, i.e., logistic regression models.<\/p>\n<p>For instance, at <a href=\"https:\/\/m100.mit.edu\/\">M100<\/a> this weekend, I heard Bruce Hayes talk about another use of maximum entropy models: scansion. 
In poetic meters, there is variation in, say, whether the caesura is\u00a0<em>masculine\u00a0<\/em>(after a stressed syllable) or\u00a0<em>feminine\u00a0<\/em>(after an unstressed syllable), and the probabilities reflect that variation.<sup>1<\/sup> However, I don&#8217;t think it makes sense to equate this with grammaticality, since we are talking about variation in highly self-conscious linguistic artifacts here, and there is no reason to think one style of caesura is more grammatical than the other.<sup>2<\/sup><\/p>\n<p>And of course there is a third interpretation, in which the probabilities are <em>production probabilities<\/em>, representing actual variation in production, within a single speaker or across multiple speakers.<\/p>\n<p>It is not obvious to me that these facts all ought to be modeled the same way, yet the maxent community seems comfortable assuming a single cognitive model to cover all three scenarios. To state the obvious, it makes no sense for a cognitive model to account for interspeaker variation, because there is no such thing as &#8220;interspeaker cognition&#8221;; there are just individual mental grammars.<\/p>\n<h1>Endnotes<\/h1>\n<ol>\n<li>This is a fabricated example, because Hayes and colleagues mostly study English meter\u2014something I know nothing about\u2014whereas I&#8217;m interested in Latin poetry. I imagine English poetry has caesurae too, but I&#8217;ve given it no thought yet.<\/li>\n<li>I am not trying to say that we can&#8217;t study grammar with poetry. 
Separately, I\u00a0note (as, I think, Paul Kiparsky did at the talk) that this model also assumes that the input text the poet is trying to fit to the meter has no role to play in constraining what happens.<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>I have long been critical of na\u00efve interpretations of probabilistic grammar.\u00a0 To me, it seems like the major motivation for this approach derives from a na\u00efve\u2014I\u2019d say overly na\u00efve\u2014linking hypothesis mapping acceptability judgments, as elicited in Likert scale-style acceptability tasks, onto grammaticality. (See chapter 2 of my dissertation for a concrete argument against this.) &hellip; <a href=\"https:\/\/www.wellformedness.com\/blog\/different-functions-probabilty-probabilistic-grammar\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;The different functions of probability in probabilistic grammar&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_crdt_document":"","footnotes":""},"categories":[11,4,9],"tags":[],"class_list":["post-1835","post","type-post","status-publish","format-standard","hentry","category-acquisition","category-language","category-sociolinguistics"],"_links":{"self":[{"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/posts\/1835","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/comments?post=1835"}],"version-history":[{"count":5,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/posts\/1835\/revisi
ons"}],"predecessor-version":[{"id":2341,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/posts\/1835\/revisions\/2341"}],"wp:attachment":[{"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/media?parent=1835"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/categories?post=1835"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/tags?post=1835"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}