LOESS hyperparameters without tears

LOESS is a classic non-parametric regression technique. One wrinkle is that LOESS fits depend on several hyperparameters (i.e., parameters set by the experimenter a priori). In this post I’ll take a quick look at how to set them.

At each point in a LOESS curve, the y-value is derived from a local, low-degree polynomial weighted regression. The first hyperparameter is the degree of the local fits. Most users set the degree to 2 (i.e., use local quadratic fits), with good reason: at degree 0 you’re just computing a local weighted average, degree 1 gives you local lines that tend to miss peaks and valleys, and degrees higher than 2 (e.g., cubic) tend to have little additional effect.

The other hyperparameter is “span”, the fraction of the sample used for each local fit, which controls the amount of smoothing. A value near 0 uses almost no context, and a value of 1 uses the entire sample (so the result resembles a single quadratic function fit to all the data). The choice of this value has a major effect on the quality of the fit obtained:

[Figure: LOESS fits of the same randomly generated data under three span settings, labeled “good”, “bad”, and “ugly”.]

For the randomly generated data here, large values of the span parameter (“bad”) produce a LOESS which fails to follow the larger trend, whereas small values (“ugly”) primarily model noise. For this reason alone, the experimenter should probably not be permitted to select the span hyperparameter herself.
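To see this concretely, here’s a quick sketch in R with synthetic data (the specific span values are chosen purely for illustration):

```r
set.seed(1)
x <- seq(0, 4 * pi, length.out = 200)
y <- sin(x) + rnorm(length(x), sd = 0.5)

# Small spans chase noise; large spans miss the underlying trend.
ugly <- loess(y ~ x, span = 0.1)  # undersmoothed: primarily models noise
bad  <- loess(y ~ x, span = 1.0)  # oversmoothed: fails to follow the sine wave
```

The undersmoothed fit hugs the data (including the noise), while the oversmoothed fit barely bends at all.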

Fortunately, there are several objective criteria for determining an “optimal” setting of the span parameter. Hurvich et al. (1998) propose a particularly privileged objective, namely minimizing AICc (the corrected Akaike Information Criterion). This is what I used to generate the “good” curve above. Here’s how I did it (adapted from this post to R-help):
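Something along these lines (a sketch: the function names are my own, and the AICc expression follows Hurvich et al. 1998):

```r
# AICc for a loess fit, after Hurvich, Simonoff & Tsai (1998).
aicc.loess <- function(fit) {
  stopifnot(inherits(fit, "loess"))
  n <- fit$n
  trace <- fit$trace.hat  # trace of the smoother ("hat") matrix
  sigma2 <- sum(resid(fit) ^ 2) / (n - 1)
  log(sigma2) + 1 + 2 * (trace + 1) / (n - trace - 2)
}

# Fit a loess curve with the span chosen to minimize AICc.
autoloess <- function(formula, data, span.range = c(0.1, 0.9)) {
  f <- function(span) aicc.loess(loess(formula, data, span = span))
  best.span <- optimize(f, span.range)$minimum
  loess(formula, data, span = best.span)
}
```

`optimize` does a one-dimensional search over the span interval, so this requires refitting the LOESS a few dozen times at most.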

There is also an R package, fANCOVA, which apparently includes a function loess.as that automatically selects the span parameter, presumably much as I’ve done here. I haven’t tried it.

PS to those inclined to care: the origin of the memetic, snarky, academic “X without tears” title is, to my knowledge, J. Eric S. Thompson‘s 1972 book Maya Hieroglyphs Without Tears. While I have every reason to believe Thompson was poking fun at his detractors, it’s interesting to note that he turned out to be fabulously wrong about the nature of the hieroglyphs.

On the Providence word gap intervention

A recent piece in the Boston Globe quoted my take on a grant to Providence, RI for a “word-gap” intervention. In this quote, I expressed some skepticism about the grant’s goals, but omitted the part of the email where I explained why I felt that way. Readers of the piece might have gotten the impression that I had a less, uhm, nuanced take on the Providence grant than I do. So, here is a summary of my full email to Ben from which the quote was taken.

An ambitious proposal

First off, the Providence/LENA team should be congratulated on this successful grant application: I’m glad they got it and not something more “Bloombergian” (like, say, an experimental proposal to ban free-pizzas-with-beer deals in the interest of bulging hipster waistlines). And they deserve respect for getting approved for such an ambitious proposal: the cash involved is an order of magnitude larger than the average applied linguistics grant. And, perhaps most of all, I have a great deal of respect for any linguist who can convince a group of non-experts not only that their work is important, but that it is worth the opportunity cost. I also note that if materials from the Providence study are made publicly available (and they should be, in a suitably de-identified format, for the sake of the progress of the human race), my own research stands to benefit from this grant.

There is another sense in which the proposal is ambitious, however: the success of this intervention depends on a long chain of inferences. If any one of these is wrong, the intervention is unlikely to succeed. Here are what I see as the major assumptions under which the intervention is being funded.

Assumption I: There exists a “word gap” in lower-income children

I was initially skeptical of this claim because it is so similar to a discredited assumption of 20th-century educational theorists: that differences in school and standardized test performance were the result of the “linguistically impoverished” environment in which lower-class (and especially minority) speakers grew up.

This strikes me as quite silly: no one who has even a tenuous acquaintance with African-American communities could fail to note the importance of verbal skills in those communities. Every African-American stereotype I can think of has one thing in common: an emphasis on verbal abilities. Here’s what Bill Labov, founder of sociolinguistics, had to say in his 1972 book, Language in the Inner City:

Black children from the ghetto area are said to receive little verbal stimulation, to hear very little well-formed language, and as a result are impoverished in their means of verbal expression…Unfortunately, these notions are based upon the work of educational psychologists who know very little about language and even less about black children. The concept of verbal deprivation has no basis in social reality. In fact, black children in the urban ghettos receive a great deal of verbal stimulation…and participate fully in a highly verbal culture. (p. 201)

I suspect that Labov may have dismissed the possibility of input deficits prematurely, just as I did. After all, it is an empirical hypothesis, and while Betty Hart and Todd Risley’s original study on differences in lexical input involved a small and perhaps-atypical sample, the correlation between socioeconomic status and lexical input has been replicated many times. So, there may be something to the “impoverishment theory” after all.

Assumption II: LENA can really estimate input frequency

Can we really count words using current speech technology? In a recent Language Log post, Mark Liberman speculated that counting words might be beyond the state of the art. While I have been unable to find much information on the researchers behind the grant or behind LENA, I don’t see any reason to doubt that the LENA Foundation has in fact built a useful state-of-the-art speech system that allows them to estimate input frequencies with great precision. One thing that gives me hope is that a technical report by LENA researchers provides estimates of average input frequency in English which are quite close to an estimate computed by developmentalist Dan Swingley (in a peer-reviewed journal) using entirely different methods.

Assumption III: The “word gap” can be solved by intervention

For children who are identified as “at risk”, the Providence intervention offers the following:

Families participating in Providence Talks would receive these data during a monthly coaching visit along with targeted coaching and information on existing community resources like read-aloud programs at neighborhood libraries or special events at local children’s museums.

Will this have a long-term effect? I simply don’t know of any work looking into this (though please comment if you’re aware of something relevant), so this too is a strong assumption.

Given that there is now money in the budget for coaching, why are LENA devices necessary? Would it be better if any concerned parent could get coaching?

And, finally, do the caretakers of the most at-risk children really have time to give to this intervention? I believe the most obvious explanation of the correlation between verbal input and socioeconomic status is that caretakers on the lower end of the socioeconomic scale have less time to give to their children’s education: this follows from the observation that child care quality is a strong predictor of cognitive abilities. If this is the case, then simply offering counseling will do little to eliminate the word gap, since the families most at risk are the least able to take advantage of the intervention.

Assumption IV: The “word gap” has serious life consequences

Lexical input is clearly important for language development: it is, in some sense, the sole factor determining whether a typically developing child acquires English or Yawelmani. And, we know the devastating consequences of impoverished lexical input.

But here we are at risk of falling for the all-too-common fallacy which equates predictors of variance within clinical and subclinical populations. While massively impoverished language input gives rise to clinical language deficits, it does not follow that differences in language skills within typically developing children can be eliminated by leveling the language input playing field.

Word knowledge (as measured by verbal IQ, for instance) is correlated with many other measures of language attainment, but are increases in language skills enough to help an at-risk child to escape the ghetto (so to speak)?

This is the most ambitious assumption of the Providence intervention. Because there is such a strong correlation between lexical input and social class, and because it would be very difficult (and presumably wildly unethical) to manipulate lexical input while controlling for class, we know very little on this subject. I hope that the Providence study will shed some light on this question.

So what’s wrong with more words?

This is exactly what my mom wanted to know when I sent her a link to the Globe piece. She wanted to emphasize that I only got the highest-quality word-frequency distributions all throughout my critical period! I support, tentatively, the Providence initiative and wish them the best of luck; if these assumptions all turn out to be true, the organizers and scientists behind the grant will be real heroes to me.

But, that leads me to the only negative effect this intervention could have: if closing the word gap does little to influence long-term educational outcomes, it will have made concerned parents unduly anxious about the environment they provide for their children. And that just ain’t right.

(Disclaimer: I work for OHSU, where I’m supported by grants, but these are my professional and personal opinions, not those of my employer or funding agencies. That should be obvious, but you never know.)

Making high-quality graphics in R

There are a lot of different ways to make an R graph for TeX; this is my workflow.

In R

I use cairo_pdf to write a graph to disk. This command takes arguments for image size and for font size and face. If you’re on a Mac, you will need to install X11.

Image size

I always specify graph size by hand, in inches. For manuscripts and handouts, I usually set the width to be the printable width. If you’re using 1″ margins, that’s 6.5″. Then, I adjust height until a pleasing form emerges.

Fonts

I match the font face of the manuscript (whatever I’m using) and graph labels by passing the font name as the argument to family. This matters most if you’re writing a handout, and matters less if you’re sending it to, say, Oxford University Press, who will redo your text anyways. I found out the hard way that the family keyword argument is absent in older versions of R, so you may need to upgrade. By default, image fonts are 12pt. This is generally fine, but can be adjusted with the pointsize argument.

Graphing

This is a no-brainer: use ggplot2.

All together now

library(ggplot2)
cairo_pdf('mygraph.pdf', width=6.5, height=4, family='Times New Roman')
print(qplot(X, Y, data=dat))  # explicit print() needed in non-interactive use
dev.off()

In TeX

Add \usepackage{graphicx} to your preamble, if it’s not already there. In the body, \includegraphics{mygraph.pdf}.

TeX tips for linguists

I’ve been using TeX to write linguistics papers for nearly a decade now. I still think it’s the best option. Since TeX is a big, complex ecosystem and not at all designed with linguists in mind, I thought it might be helpful to describe the tools and workflow I use to produce my papers, handouts, and abstracts.

Michael Becker‘s notes are recommended as well.

Software

I use xelatex, from the XeTeX project. It has two advantages over the traditional pdflatex and related tools. First, you can use system fonts via fontspec and mathspec. If you are still using Computer Modern, or packages like txfonts or times, etc., it’s time to join the modern world.

Second, it expects UTF-8 input. If you are using tipa, or multi-character sequences to enter non-ASCII characters, then you probably have ugly transcriptions. (Don’t want to name names…)

Fonts

Linguists generally demand the following types of characters:

  • Alphabetic small caps
  • The complete IPA, especially IPA [g] (which is not English “g”)
  • “European” extensions to ASCII: enye (año), diaeresis (coöperation, über), acute (résumé), grave (à), macron (māl), circumflex (être), haček (očudit), ogonek (Pająk), eth (fracoð), thorn (þæt), eszett (Straße), cedilla (açai), dotted g (ealneġ), and so on, for Roman-like writing systems

The only font I’ve found that has all this is Linux Libertine. It has nothing to do with Linux, per se. In general, it’s pretty handsome, especially when printed small (though the Q is ridiculously large). If you can deal without small caps (and arguably, linguists use them too much), then a recent version of Times New Roman (IPA characters were added recently) also fits the bill. Unfortunately, if you’re on Linux and using the “MS Core Fonts”, that version of Times New Roman doesn’t have the IPA characters.

This is real important: do not allow your mathematical characters to be in Computer Modern if your paper is not in Computer Modern. It sticks out like a sore thumb. What you do is put something like this in the preamble:

\usepackage{mathspec}
\setmainfont[Mapping=tex-text]{Times New Roman}
\setmathfont(Digits,Greek,Latin){Times New Roman}

Examples

The gb4e package seems to be the best one for syntax-style examples, with morph-by-morph glossing and the like. I myself deal mostly in phonology, so I use the tabular environment wrapped with an example environment of my own creation called simplex.sty and packaged in my own LingTeX (which is a work-in-progress).
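For reference, a minimal gb4e example with morph-by-morph glossing looks something like this (the German sentence is chosen arbitrarily):

```latex
\usepackage{gb4e}  % in the preamble

% in the body:
\begin{exe}
\ex \gll Der Hund schläft.\\
         the dog sleeps\\
\trans `The dog is sleeping.'
\end{exe}
```

\gll aligns the example line and the gloss line word by word; \trans sets the free translation below.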

Bibliographies

I use natbib. When I have a choice, I usually reach for pwpl.bst, the bibliography style that we use for the Penn Working Papers in Linguistics. It’s loosely based on the Linguistic Inquiry style.
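A minimal natbib setup might look like this (the citation key is a placeholder for whatever is in your .bib file):

```latex
\usepackage{natbib}       % in the preamble
\bibliographystyle{pwpl}  % any .bst file on your TeX path

% in the body:
% \citet{labov1972} produces a textual citation: "Labov (1972)"
% \citep{labov1972} produces a parenthetical one: "(Labov 1972)"

\bibliography{paper}      % reads paper.bib
```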

Compiling

I use make for compiling. This will be installed on most Linux computers. On Macintoshes, you can get it as part of the Developer Tools package, or with Xcode.

I type make to compile, make bib to refresh the bibliography, and make clean to remove all the temporary files. Here’s what a standard Makefile for paper.tex would look like for me.

 # commands
 PDFTEX=xelatex -halt-on-error
 BIBTEX=bibtex

 # files
 NAME=paper

 all: $(NAME).pdf

 $(NAME).pdf: $(NAME).tex $(NAME).bib *.pdf
      $(PDFTEX) $(NAME).tex
      $(BIBTEX) $(NAME)
      $(PDFTEX) -interaction=batchmode -no-pdf $(NAME).tex
      $(PDFTEX) -interaction=batchmode $(NAME).tex

 bib:
      $(BIBTEX) $(NAME)

 clean:
      latexmk -c

There are a couple of interesting things here. -halt-on-error kills a compile the second it goes bad: since a failed run won’t produce a complete PDF anyway, you might as well fix the problem right when it’s detected. Note that bibtex takes the base name of the .aux file, not the name of the .tex file. Both -interaction=batchmode and -no-pdf shave off a considerable amount of compile time, but aren’t practical when debugging or when producing the final PDF, respectively. I use latexmk -c, which reads the log files and removes temporary files but preserves the target PDF. For some reason, though, it doesn’t remove .bbl files.

Draft mode

Up until you’re done, start your file like so:

\documentclass[draft,12pt]{article}

This does two things. First, “overfull hboxes” are marked with a black bar, so you can find, rewrite, and fix them. Second, images aren’t rendered into the PDF, which saves compile time. It’s very easy to tell who does this and who doesn’t.

On slides and posters

There are several TeX-based tools for making slides and posters. Beamer seems to be very popular for slides, but I find the default settings very ugly (gradients!) and cluttered (navigation buttons on every slide!). I use a very minimal Keynote style (Helvetica Neue, black on white). I’m becoming a bigger fan of handouts, since unlike slides or posters, the lengthy process of making a handout gets me so much closer to publication.