Kill yr darlings…

…or at least make them more rigorous.

In the field of computational phonology, three mid-pandemic articles presented elaborate computational “theories of everything” in phonology: Ellis et al. (2022), Rasin et al. (2021), and Yang & Piantadosi (2022).[1] I am quite critical of all three offerings. All three provide computational models evaluated for their ability to acquire phonological patterns—with varying amounts of overheated rhetoric about what this means for generative grammar—and in each case, there is an utter lack of rigor. None of the papers prove, or even conjecture, anything hopeful or promising about the computational complexity of the proposed models, how long they take to converge (or whether they converge at all), or whether there is any bound on the kinds of mistakes the models might make once they do converge. What they do instead is demonstrate that the models produce satisfactory results on toy problem sets. One might speculate that these three papers are the result of lockdown-era hyperfocus on thorny passion projects. But I think it’s unfortunate that the authors (and doubly so the reviewers and editors) considered these projects complete before providing a formal characterization of the proposed models’ substantive properties.[2] By stating this critique here, I hope to commit myself to aligning my actions with my values in future work, and I challenge the aforementioned authors to study these properties.

Endnotes

  1. To be fair, Yang and Piantadosi (2022) claim to offer a theory of not just phonology…
  2. I am permitted to state that I reviewed one of these papers—my review was “signed” and made public along with the paper—and it was politely negative. However, it was clear to me that the editor and other reviewers had a very high opinion of this work, and there was no reason for me to fight the inevitable.

References

Ellis, K., Albright, A., Solar-Lezama, A., Tenenbaum, J. B., and O’Donnell, T. J. 2022. Synthesizing theories of human language with Bayesian program induction. Nature Communications 13:5024.
Rasin, E., Berger, I., Lan, N., Shefi, I., and Katzir, R. 2021. Approaching explanatory adequacy in phonology using Minimum Description Length. Journal of Language Modelling 9:17–66.
Yang, Y., and Piantadosi, S. T. 2022. One model for the learning of language. Proceedings of the National Academy of Sciences 119:e2021865119.
