Towards a better understanding of early stopping for boosting algorithms

With Yuting Wei, Stanford University


In this talk, I will discuss the behaviour of boosting algorithms for non-parametric regression. While non-parametric models offer great flexibility, they can lead to overfitting and thus poor generalisation performance. For this reason, procedures for fitting these models must involve some form of regularisation. Although early stopping of iterative algorithms is a widely used form of regularisation in statistics and optimisation, it is less well understood than its analogue based on penalised regularisation. We exhibit a direct connection between the stopped iterate and the localised Gaussian complexity of the associated function class, which allows us to derive explicit and optimal stopping rules. We will discuss such stopping rules in detail for various reproducing kernel Hilbert spaces, and also extend these insights to broader classes of functions.
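To make the idea concrete, here is a minimal sketch of early-stopped L2-boosting (kernel boosting) for non-parametric regression. It is a generic illustration only: it stops by monitoring error on a held-out validation set rather than via the data-dependent stopping rules based on localised Gaussian complexity discussed in the talk. All names, the Gaussian kernel choice, the bandwidth, and the step size are illustrative assumptions, not details from the talk.

```python
import numpy as np

def gaussian_kernel(X1, X2, bandwidth=0.5):
    # RBF kernel matrix between two sets of points (an assumed kernel choice)
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * bandwidth**2))

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * np.pi * X[:, 0]) + 0.3 * rng.standard_normal(n)  # synthetic data

# Hold out a validation set to monitor generalisation error
idx = rng.permutation(n)
tr, va = idx[:150], idx[150:]
K_tr = gaussian_kernel(X[tr], X[tr])
K_va = gaussian_kernel(X[va], X[tr])

alpha = 0.5            # step size (assumed; must be small enough for stability)
max_iter = 2000
c = np.zeros(len(tr))  # coefficients of the kernel expansion f = sum_j c_j k(., x_j)
best_err, best_c, best_t = np.inf, c.copy(), 0

for t in range(1, max_iter + 1):
    resid = y[tr] - K_tr @ c           # residuals of the current fit
    c = c + (alpha / len(tr)) * resid  # L2-boosting step: functional gradient descent in the RKHS
    val_err = np.mean((y[va] - K_va @ c) ** 2)
    if val_err < best_err:             # keep the iterate with the smallest validation error
        best_err, best_c, best_t = val_err, c.copy(), t

print(f"Stopped iterate T = {best_t}, validation MSE = {best_err:.4f}")
```

Running the boosting iteration to convergence would interpolate the noisy training data; the stopped iterate T acts as the regularisation parameter, playing the role that a penalty level plays in penalised regression.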
