Recursive Aggregation of Estimators by the Mirror Descent Algorithm with Averaging

  • Authors:
  • A. B. Juditsky; A. V. Nazin; A. B. Tsybakov; N. Vayatis

  • Affiliations:
  • Laboratoire de Modélisation et Calcul, Université Grenoble I, France; Institute of Control Sciences, RAS, Moscow, Russia; Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI, France; Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI, France

  • Venue:
  • Problems of Information Transmission
  • Year:
  • 2005


Abstract

We consider a recursive algorithm for constructing an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex risk functional under an $\ell_1$-constraint. It is defined by a stochastic version of the mirror descent algorithm, which performs gradient-type descent in the dual space with an additional averaging step. The main result of the paper is an upper bound on the expected accuracy of the proposed estimator. This bound is of the order $$C\sqrt{(\log M)/t}$$ with an explicit and small constant factor C, where M is the dimension of the problem and t is the sample size. A similar bound is proved in a more general setting that covers, in particular, the regression model with squared loss.
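The recursive scheme in the abstract can be illustrated with a minimal sketch of stochastic mirror descent on the probability simplex (the entropic mirror map keeps the aggregation weights in the $\ell_1$-ball), with averaging of the primal iterates. All names below (`grad_estimates`, `step`) are illustrative assumptions, not the paper's notation, and the code is a sketch of the general technique rather than the authors' exact algorithm.

```python
import numpy as np

def mirror_descent_aggregate(grad_estimates, M, step=0.1):
    """Sketch: stochastic mirror descent with averaging on the simplex.

    grad_estimates: sequence of stochastic gradient vectors of length M,
        one per observation (illustrative interface, not the paper's).
    Returns the averaged weight vector over the M base decision rules.
    """
    zeta = np.zeros(M)          # dual variable: accumulated gradients
    theta_sum = np.zeros(M)     # running sum of primal iterates
    t = 0
    for g in grad_estimates:
        # Primal iterate via the entropic mirror map (a softmax of the
        # dual point), so weights stay nonnegative and sum to one.
        w = np.exp(-step * zeta)
        theta = w / w.sum()
        theta_sum += theta
        t += 1
        zeta += g               # gradient-type descent step in the dual space
    # Averaging of the iterates, as in the algorithm's "with averaging" variant.
    return theta_sum / max(t, 1)
```

The averaging of primal iterates is what yields the $O(\sqrt{(\log M)/t})$-type accuracy guarantees for mirror descent on the simplex; the entropic mirror map is the standard choice that makes the $\log M$ (rather than $M$) dependence appear.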