Distribution-Free, Risk-Controlling Prediction Sets

With Stephen Bates, University of California Berkeley


To enable valid statistical inference in prediction tasks, we show how to generate set-valued predictions for black-box predictors that control the expected loss on future test points at a user-specified level. Our approach provides explicit finite-sample guarantees for any distribution by using a holdout set to calibrate the size of the prediction sets, generalizing conformal prediction to control more complex notions of error such as the false rejection rate. We demonstrate our procedure in five large-scale problems: (1) classification problems where some mistakes are more costly than others; (2) multi-label classification, where each observation has multiple associated labels; (3) classification problems where the labels have a hierarchical structure; (4) image segmentation, where we wish to predict a set of pixels containing an object of interest; and (5) protein structure prediction.
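To make the holdout-calibration idea concrete, the following is a minimal, hypothetical Python sketch, not the speakers' implementation: it scans a grid of set-size thresholds and keeps the most aggressive one whose empirical risk on the holdout set, inflated by a simple Hoeffding upper confidence bound, stays below the target level. The function names, the false-negative-proportion loss, and the synthetic data are illustrative assumptions only.

```python
import numpy as np


def hoeffding_ucb(mean_loss, n, delta):
    """Hoeffding upper confidence bound on a risk whose losses lie in [0, 1]."""
    return mean_loss + np.sqrt(np.log(1.0 / delta) / (2.0 * n))


def calibrate_threshold(cal_scores, cal_labels, loss_fn, lambdas, alpha, delta):
    """Scan thresholds from most to least conservative (largest to smallest sets)
    and return the last one whose holdout risk bound stays below alpha."""
    n = len(cal_scores)
    chosen = lambdas[0]  # fall back to the most conservative sets if nothing passes
    for lam in lambdas:
        sets = cal_scores >= lam  # one prediction set per holdout point
        risk = np.mean([loss_fn(s, y) for s, y in zip(sets, cal_labels)])
        if hoeffding_ucb(risk, n, delta) > alpha:
            break  # sets only shrink as lam grows, so stop at the first violation
        chosen = lam
    return chosen


def false_negative_proportion(pred_set, true_labels):
    """Fraction of true labels missed by the prediction set (0 if there are none)."""
    if true_labels.sum() == 0:
        return 0.0
    return 1.0 - (pred_set & true_labels).sum() / true_labels.sum()


# Synthetic multi-label holdout data, purely for illustration.
rng = np.random.default_rng(0)
n_cal, n_classes = 1000, 20
cal_labels = rng.random((n_cal, n_classes)) < 0.1
cal_scores = np.clip(cal_labels * 0.7 + 0.5 * rng.random((n_cal, n_classes)), 0.0, 1.0)

lam_hat = calibrate_threshold(
    cal_scores, cal_labels, false_negative_proportion,
    lambdas=np.linspace(0.0, 1.0, 101),  # small lam -> large sets -> low risk
    alpha=0.1, delta=0.1,
)
print(f"calibrated threshold: {lam_hat:.2f}")
```

The actual procedure described in the abstract is more general, handling other losses and notions of error; this sketch only conveys the flavor of the holdout calibration step.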
