Linear classifier combination and selection using group sparse regularization and hinge loss

  • Authors:
  • Mehmet Umut Şen; Hakan Erdogan

  • Affiliations:
  • Vision and Pattern Analysis Laboratory, Sabanci University, Faculty of Engineering and Natural Sciences, Istanbul, Turkey (both authors)

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2013

Abstract

The main principle of stacked generalization is to use a second-level generalizer to combine the outputs of base classifiers in an ensemble. In this paper, after presenting a short survey of the literature on stacked generalization, we propose regularized empirical risk minimization (RERM) as a framework for learning the weights of the combiner; it generalizes earlier proposals and enables improved learning methods. Our main contribution is the use of group sparse regularization to facilitate classifier selection. In addition, we propose and analyze the hinge loss as an alternative to the conventional least squares loss. We performed experiments on three ensemble setups of differing diversity over 13 real-world datasets from various applications. The results show the advantage of group sparse regularization over conventional l1-norm regularization: we are able to reduce the number of selected classifiers in the diverse ensemble without sacrificing accuracy, and with the non-diverse ensembles we even gain accuracy on average. In addition, we show that the hinge loss outperforms the least squares loss, which was used in previous studies of stacked generalization.
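The sketch below is only meant to illustrate the general idea summarized in the abstract, not the authors' implementation: combiner weights over stacked base-classifier scores are learned by minimizing an average hinge loss plus a group-lasso penalty, with one weight group per base classifier, using proximal subgradient descent. All function names, the synthetic data, and the hyperparameter values (lam, lr, n_iter) are illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(w, thresh):
    """Proximal operator of thresh * ||w||_2 (block soft-thresholding)."""
    norm = np.linalg.norm(w)
    if norm <= thresh:
        return np.zeros_like(w)
    return (1.0 - thresh / norm) * w

def train_combiner(F, y, groups, lam=0.05, lr=0.05, n_iter=1000):
    """Proximal subgradient descent on
    (1/n) * sum_i hinge(y_i, F_i @ w + b) + lam * sum_g ||w_g||_2.

    F      : (n, d) stacked base-classifier scores per sample
    y      : (n,) binary labels in {-1, +1}
    groups : list of column-index arrays, one group per base classifier
    """
    n, d = F.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iter):
        margin = y * (F @ w + b)
        active = margin < 1.0                     # samples with nonzero hinge loss
        grad_w = -(F[active] * y[active, None]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
        for g in groups:                          # proximal step per classifier block
            w[g] = group_soft_threshold(w[g], lr * lam)
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, n_clf, n_scores = 200, 5, 2                # 5 base classifiers, 2 scores each
    y = rng.choice([-1, 1], size=n)
    # Synthetic scores: the first 3 classifiers are informative, the last 2 are noise.
    F = rng.normal(size=(n, n_clf * n_scores))
    F[:, :6] += 0.8 * y[:, None]
    groups = [np.arange(m * n_scores, (m + 1) * n_scores) for m in range(n_clf)]
    w, b = train_combiner(F, y, groups)
    selected = [m for m, g in enumerate(groups) if np.linalg.norm(w[g]) > 1e-8]
    print("selected classifiers:", selected)
```

Because the penalty acts on whole weight groups, driving a group to zero discards that base classifier entirely, which is how group sparsity yields classifier selection rather than the feature-level sparsity of an l1 penalty.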