Greedy Regularized Least-Squares for Multi-task Learning

  • Authors: Pekka Naula, Tapio Pahikkala, Antti Airola, Tapio Salakoski

  • Venue: ICDMW '11 Proceedings of the 2011 IEEE 11th International Conference on Data Mining Workshops
  • Year: 2011

Abstract

Multi-task feature selection refers to the problem of selecting a common predictive set of features over multiple related learning tasks. The problem arises, for example, in applications where one can afford only a limited set of feature extractors for solving several tasks. In this work, we present a regularized least-squares (RLS) based algorithm for multi-task greedy forward feature selection. The method selects features jointly for all the tasks by using the leave-one-out cross-validation error, averaged over the tasks, as the selection criterion. While a straightforward implementation of the approach, combining a wrapper algorithm with a black-box RLS training method, would have impractical computational costs, we achieve linear time complexity for the training algorithm through matrix algebra based computational shortcuts. In our experiments on insurance and speech classification data sets, the proposed method shows better prediction performance than baseline methods that select the same number of features independently for each task.
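The selection criterion described above can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: it uses the closed-form leave-one-out shortcut for ridge/RLS (the i-th LOO residual equals the ordinary residual divided by 1 − H_ii, where H is the hat matrix) and greedily adds the feature that minimizes the LOO error averaged over tasks. Function names, the shared design matrix, and the regularization parameter `lam` are assumptions for the sketch; the paper's actual contribution, the linear-time update rules, is not reproduced here.

```python
import numpy as np

def rls_loo_mse(X, y, lam=1.0):
    """Mean squared leave-one-out error of RLS (ridge regression),
    via the hat matrix H = X (X^T X + lam I)^{-1} X^T:
    LOO residual_i = (y_i - yhat_i) / (1 - H_ii)."""
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    loo_resid = (y - H @ y) / (1.0 - np.diag(H))
    return np.mean(loo_resid ** 2)

def greedy_multitask_selection(X, ys, k, lam=1.0):
    """Greedy forward selection of k features shared by all tasks.

    X  : (n, d) design matrix (same inputs for every task here,
         a simplifying assumption of this sketch)
    ys : list of (n,) target vectors, one per task
    Criterion: LOO MSE averaged over tasks (as in the abstract).
    """
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_f, best_err = None, np.inf
        for f in remaining:
            cols = selected + [f]
            err = np.mean([rls_loo_mse(X[:, cols], y, lam) for y in ys])
            if err < best_err:
                best_f, best_err = f, err
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

Note that this naive wrapper recomputes the hat matrix for every candidate feature, which is exactly the impractical cost the paper avoids; the matrix-algebra shortcuts update the LOO errors incrementally as features are added.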