{"id":1912,"date":"2024-02-27T09:49:09","date_gmt":"2024-02-27T14:49:09","guid":{"rendered":"https:\/\/www.wellformedness.com\/blog\/?p=1912"},"modified":"2024-02-27T09:49:09","modified_gmt":"2024-02-27T14:49:09","slug":"kill-yr-darlings","status":"publish","type":"post","link":"https:\/\/www.wellformedness.com\/blog\/kill-yr-darlings\/","title":{"rendered":"Kill yr darlings\u2026"},"content":{"rendered":"&#8230;or at least make them more rigorous.\r\n\r\nIn the field of computational phonology, three mid-pandemic articles presented elaborate computational \u201ctheories of everything\u201d: Ellis et al. (2022), Rasin et al. (2021), and Yang &amp; Piantadosi (2022).<sup>1<\/sup> I am quite critical of all three offerings. All three provide computational models evaluated for their ability to acquire phonological patterns\u2014with varying amounts of overheated rhetoric about what this means for generative grammar\u2014and in each case, there is an utter lack of rigor. None of the papers proves, or even conjectures, anything hopeful or promising about the computational complexity of the proposed models, how long they take to converge (or whether they converge at all), or whether there is any bound on the kinds of mistakes the models might make once they do. What they offer instead are demonstrations that the models produce satisfactory results on toy problem sets. One might speculate that these three papers are the result of lockdown-era hyperfocus on thorny passion projects. 
But I think it\u2019s unfortunate that the authors (and doubly so the reviewers and editors) considered these projects complete before providing a formal characterization of the proposed models\u2019 substantive properties.<sup>2<\/sup> By stating this critique here, I hope to commit myself to aligning my actions with my values in future work, and I challenge the aforementioned authors to study these properties.\r\n<h1>Endnotes<\/h1>\r\n<ol>\r\n \t<li>To be fair, Yang and Piantadosi claim to offer a theory of not just phonology&#8230;<\/li>\r\n \t<li>I am permitted to state that I reviewed one of these papers\u2014my review was \u201csigned\u201d and made public, along with the paper\u2014and my review was politely negative. However, it was clear to me that the editor and other reviewers had a very high opinion of this work, so there was no reason for me to fight the inevitable.<\/li>\r\n<\/ol>\r\n<h1>References<\/h1>\r\nEllis, K., Albright, A., Solar-Lezama, A., Tenenbaum, J. B., and O&#8217;Donnell, T. J. 2022. Synthesizing theories of human language with Bayesian program induction. <em>Nature Communications<\/em> 2022:1\u201313.\r\nRasin, E., Berger, I., Lan, N., Shefi, I., and Katzir, R. 2021. Approaching explanatory adequacy in phonology using Minimum Description Length. <em>Journal of Language Modelling<\/em> 9:17\u201366.\r\nYang, Y. and Piantadosi, S. T. 2022. One model for the learning of language. <em>Proceedings of the National Academy of Sciences<\/em> 119:e2021865119.\r\n\r\n<!-- \/wp:post-content -->","protected":false},"excerpt":{"rendered":"<p>&#8230;or at least make them more rigorous. In the field of computational phonology, there were three mid-pandemic articles that presented elaborate computational \u201ctheories of everything\u201d in phonology: Ellis et al. (2022), Rasin et al. (2021), and Yang &amp; Piantadosi (2022).1 I am quite critical of all three offerings. All three provide computational models evaluated for &hellip; <a href=\"https:\/\/www.wellformedness.com\/blog\/kill-yr-darlings\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Kill yr darlings\u2026&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_crdt_document":"","footnotes":""},"categories":[4],"tags":[],"class_list":["post-1912","post","type-post","status-publish","format-standard","hentry","category-language"],"_links":{"self":[{"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/posts\/1912","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/comments?post=1912"}],"version-history":[{"count":5,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/posts\/1912\/revisions"}],"predecessor-version":[{"id":1928,"href":"https:\/\/www.wellformedness.com\/blo
g\/wp-json\/wp\/v2\/posts\/1912\/revisions\/1928"}],"wp:attachment":[{"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/media?parent=1912"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/categories?post=1912"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.wellformedness.com\/blog\/wp-json\/wp\/v2\/tags?post=1912"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}