SVM+ regression and multi-task learning

  • Authors:
  • Feng Cai; Vladimir Cherkassky

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN; Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Abstract

Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik [9] proposed a general approach to formalizing such problems, known as Learning With Structured Data (LWSD), along with its SVM-based optimization formulation called SVM+. Liang and Cherkassky [5,6] provided empirical validation of SVM+ for classification and showed its connections to Multi-Task Learning (MTL) approaches in machine learning. This paper builds on this recent work [5,6,9] and describes a new methodology for regression problems that combines Vapnik's SVM+ regression [9] with the MTL classification setting [6]. We also present empirical comparisons between standard SVM regression, SVM+, and the proposed SVM+MTL regression method. Practical implementation of new learning technologies such as SVM+ is often hindered by their complexity, i.e., a larger number of tuning parameters than standard inductive SVM regression. To this end, we provide a practical scheme for model selection that combines analytic selection of parameters for SVM regression [3] with resampling-based methods for selecting the model parameters specific to SVM+ and SVM+MTL.
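
Neither SVM+ nor SVM+MTL has a standard open-source implementation, so the sketch below only illustrates the hybrid model-selection idea on ordinary epsilon-SVM regression: C and epsilon are set analytically from the training data (using formulas commonly quoted for [3], treated here as an assumption), while the remaining kernel parameter is tuned by resampling (cross-validation). The synthetic data, parameter grid, and helper function are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch only: analytic selection of C and epsilon for standard
# epsilon-SVR (in the spirit of [3]), combined with resampling-based
# (cross-validation) tuning of the RBF kernel width.
# SVM+ / SVM+MTL themselves are NOT implemented here.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def analytic_svr_parameters(y, noise_std):
    """Analytic choice of C and epsilon from the training responses.

    C       = max(|mean(y) + 3*std(y)|, |mean(y) - 3*std(y)|)
    epsilon = 3 * noise_std * sqrt(ln(n) / n)
    (formulas as commonly quoted for [3]; treat as an assumption)
    """
    n = len(y)
    y_mean, y_std = np.mean(y), np.std(y)
    C = max(abs(y_mean + 3 * y_std), abs(y_mean - 3 * y_std))
    epsilon = 3 * noise_std * np.sqrt(np.log(n) / n)
    return C, epsilon

# Synthetic 1-D regression data with a known noise level (for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
noise_std = 0.1
y = np.sinc(2 * X[:, 0]) + rng.normal(0, noise_std, size=200)

C, epsilon = analytic_svr_parameters(y, noise_std)

# The remaining parameter (RBF kernel width) is tuned by 5-fold
# cross-validation; the same resampling idea would cover the additional
# parameters specific to SVM+ / SVM+MTL.
search = GridSearchCV(
    SVR(kernel="rbf", C=C, epsilon=epsilon),
    param_grid={"gamma": np.logspace(-2, 2, 9)},
    cv=5,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print("analytic C = %.3f, epsilon = %.3f" % (C, epsilon))
print("CV-selected gamma =", search.best_params_["gamma"])
```

In practice the noise standard deviation would itself be estimated from the data rather than assumed known, and the resampling step would range over the extra SVM+/SVM+MTL parameters in addition to (or instead of) the kernel width shown here.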