Preprints, Working Papers

Proximal boosting and variants

Abstract

Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. From an optimization point of view, its learning procedure mimics gradient descent on a functional variable. This paper builds upon the proximal point algorithm, which remains applicable when the empirical risk to minimize is not differentiable, to introduce a novel boosting approach called proximal boosting. Besides being motivated by non-differentiable optimization, the proposed algorithm benefits from the same algorithmic improvements as gradient boosting, namely controlling the approximation error and Nesterov's acceleration [Grubb and Bagnell, 2011; Biau et al., 2018]. This leads to two variants, respectively called residual proximal boosting and accelerated proximal boosting. Theoretical convergence is proved for the first two procedures under different hypotheses on the empirical risk, and the advantages of leveraging proximal methods for boosting are illustrated by numerical experiments on simulated and real-world data. In particular, we exhibit a favorable comparison with gradient boosting in terms of convergence rate and prediction accuracy.
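To make the connection concrete, here is a minimal sketch of the two update rules in standard proximal-optimization notation (the symbols R, F_t and γ_t, and the choice of functional norm, are notational assumptions on our part, not taken from the paper). Gradient boosting takes an explicit gradient step approximated by a weak learner, whereas the proximal point iteration takes an implicit step that is well defined even when R is not differentiable:

\[
F_{t+1} \;=\; F_t \;-\; \gamma_t\, h_t,
\qquad h_t \approx \nabla R(F_t)
\qquad \text{(gradient boosting: explicit step)}
\]
\[
F_{t+1} \;=\; \operatorname{prox}_{\gamma_t R}(F_t)
\;=\; \operatorname*{arg\,min}_{F}\; \Big\{\, R(F) \;+\; \tfrac{1}{2\gamma_t}\, \lVert F - F_t \rVert^2 \,\Big\}
\qquad \text{(proximal point: implicit step)}
\]

In a boosting context, the displacement prox_{γ_t R}(F_t) − F_t would itself be approximated by a weak learner, mirroring how gradient boosting approximates −∇R(F_t); this is the reading suggested by the abstract, not a statement of the paper's exact algorithm.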
Main file: paper.pdf (1.05 MB)
Origin: files produced by the author(s)

Dates and versions

hal-01853244, version 1 (02-08-2018)
hal-01853244, version 2 (22-01-2020)
hal-01853244, version 3 (27-07-2021)
hal-01853244, version 4 (29-11-2022)

Identifiers

  • HAL Id: hal-01853244, version 3

Cite

Erwan Fouillen, Claire Boyer, Maxime Sangnier. Proximal boosting and variants. 2021. ⟨hal-01853244v3⟩