Using a fixed training-development-test split in sklearn

The scikit-learn machine learning library has good support for various forms of model selection and hyperparameter tuning. For setting regularization hyperparameters, there are model-specific cross-validation tools, and there are also general-purpose tools for grid (i.e., exhaustive) hyperparameter search with sklearn.model_selection.GridSearchCV and for random hyperparameter search (in the sense of Bergstra & Bengio 2012) with sklearn.model_selection.RandomizedSearchCV. While you could probably implement these yourself, the sklearn developers have enabled just about every feature you could want, including multiprocessing support.
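
To make the default usage concrete, here is a minimal sketch of random search over a random forest; the parameter ranges and the feature and label arrays (features and labels) are placeholders of mine, not anything prescribed by the library.

import scipy.stats
import sklearn.ensemble
import sklearn.model_selection

base = sklearn.ensemble.RandomForestClassifier()
# Hyperparameter values are sampled from lists or from scipy.stats distributions.
distributions = {"bootstrap": [True, False],
                 "max_features": scipy.stats.randint(1, 11)}
search = sklearn.model_selection.RandomizedSearchCV(base, distributions, n_iter=20)
# Uses (stratified) 5-fold cross-validation by default.
search.fit(features, labels)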

One apparent limitation of these classes is that, as their names suggest, they are designed for use in a cross-validation setting. In speech and language technology, however, standard practice is to use a fixed partition of the data into training, development (i.e., validation), and test (i.e., evaluation) sets, and to select hyperparameters which maximize performance on the development set. This is in part an artifact of the limited computing resources of the Penn Treebank era, and I’ve long suspected it has serious repercussions for model evaluation. But tuning and evaluating with a standard split is faster than cross-validation and can make exact replication much easier. And there are also some concerns about whether cross-validation is the best way to set hyperparameters anyway. So what can we do?

The GridSearchCV and RandomizedSearchCV classes take an optional cv keyword argument, which can be, among other things, an object implementing the cross-validation iterator interface. At first I thought I would have to write my own object to allow a fixed development set to be used for hyperparameter tuning, but then I realized that I could do this with one of the existing iterator classes, namely sklearn.model_selection.PredefinedSplit. The constructor for this class takes a single argument test_fold, an array of integers of the same size as the data passed to the fitting method. As the documentation explains, “…when using a validation set, set the test_fold to 0 for all samples that are part of the validation set, and to -1 for all other samples.” That we can do. Suppose that we have training data x_train and y_train and development data x_dev and y_dev laid out as NumPy arrays. We then create a combined training-and-development set like so:

import numpy

x = numpy.concatenate([x_train, x_dev])
y = numpy.concatenate([y_train, y_dev])

Then, we create the iterator object:

import sklearn.model_selection

test_fold = numpy.concatenate([
    # The training data: -1 marks samples that are never held out.
    numpy.full(x_train.shape[0], -1, dtype=numpy.int8),
    # The development data: 0 marks samples in the single held-out fold.
    numpy.zeros(x_dev.shape[0], dtype=numpy.int8)
])
cv = sklearn.model_selection.PredefinedSplit(test_fold)
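
Before moving on, one can sanity-check that this object behaves as intended; the assertions below are merely illustrative, not part of the recipe.

# There should be exactly one split, and its held-out fold should consist of
# the development data (the last rows of x and y).
assert cv.get_n_splits() == 1
for train_indices, dev_indices in cv.split():
    assert len(train_indices) == x_train.shape[0]
    assert len(dev_indices) == x_dev.shape[0]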

Finally, we provide cv as a keyword argument to the grid or random search constructor, and then train. For instance, similar to this example, we might do something like:

import sklearn.ensemble

base = sklearn.ensemble.RandomForestClassifier()
grid = {"bootstrap": [True, False],
        "max_features": [1, 3, 5, 7, 9, 10]}
model = sklearn.model_selection.GridSearchCV(base, grid, cv=cv)
model.fit(x, y)

Now just add n_jobs=-1 to the constructor for model to spread the work across all your logical cores.
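
In case it is helpful, here is roughly what that might look like, along with one way to read off the winning hyperparameters and evaluate on a held-out test set; the x_test and y_test arrays are assumed to exist and are not defined above.

model = sklearn.model_selection.GridSearchCV(base, grid, cv=cv, n_jobs=-1)
model.fit(x, y)
# The hyperparameters that performed best on the development fold.
print(model.best_params_)
# With the default refit=True, the best configuration is refit on all of x and y,
# so the search object itself can be used for final evaluation.
print(model.score(x_test, y_test))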

References

Bergstra, J., and Bengio, Y. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research 13: 281-305.
