Some Greedy Learning Algorithms for Sparse Regression and Classification with Mercer Kernels

  • Authors:
  • Prasanth B. Nair, Arindam Choudhury, Andy J. Keane

  • Affiliations:
  • Computational Engineering and Design Center, School of Engineering Sciences, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom (all authors)

  • Venue:
  • The Journal of Machine Learning Research
  • Year:
  • 2003


Abstract

We present greedy learning algorithms for building sparse nonlinear regression and classification models from observational data using Mercer kernels. Our objective is to develop efficient numerical schemes for reducing the training and runtime complexities of kernel-based algorithms applied to large datasets. In the spirit of Natarajan's greedy algorithm (Natarajan, 1995), we iteratively minimize the L2 loss function subject to a specified constraint on the degree of sparsity required of the final model, or until a specified stopping criterion is reached. We discuss various greedy criteria for basis selection and numerical schemes for improving the robustness and computational efficiency of the procedure. Subsequently, algorithms based on residual minimization and thin QR factorization are presented for constructing sparse regression and classification models. During the course of the incremental model construction, the algorithms are terminated using model selection principles such as the minimum description length (MDL) and Akaike's information criterion (AIC). Finally, experimental results on benchmark data are presented to demonstrate the competitiveness of the algorithms developed in this paper.
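To make the greedy construction concrete, the following is a minimal NumPy sketch of forward basis selection for kernel regression, not the paper's exact algorithm. It uses a Gaussian kernel, a correlation-based greedy criterion on the residual, and a full least-squares refit at each step as a simple stand-in for the paper's incremental thin-QR update; the function names, the `gamma` parameter, and the `tol` stopping rule are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) Mercer kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_kernel_regression(X, y, max_basis=10, tol=1e-6, gamma=1.0):
    """Greedy forward selection of kernel basis functions.

    At each step, pick the training point whose kernel column is most
    correlated with the current residual, then refit the coefficients
    on the selected columns by least squares (a simplified stand-in
    for an incremental thin-QR factorization update).
    """
    K = rbf_kernel(X, X, gamma)              # full kernel matrix
    selected = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(max_basis):
        # Greedy criterion: normalized correlation with the residual.
        scores = np.abs(K.T @ residual) / np.linalg.norm(K, axis=0)
        if selected:
            scores[selected] = -np.inf       # never reselect a basis
        j = int(np.argmax(scores))
        selected.append(j)
        # Refit coefficients on the selected kernel columns.
        B = K[:, selected]
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        residual = y - B @ coef
        if np.linalg.norm(residual) < tol:   # sparsity / accuracy trade-off
            break
    return selected, coef
```

In the paper, the per-step refit is done cheaply by updating a thin QR factorization of the selected columns rather than re-solving the least-squares problem from scratch, and the loop is terminated by MDL or AIC rather than a fixed residual tolerance.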