A Sequential Dual Method for Large Scale Multi-Class Linear SVMs

  • Authors:
  • S. Sathiya Keerthi;S. Sundararajan;Kai-Wei Chang;Cho-Jui Hsieh;Chih-Jen Lin

  • Affiliations:
  • Yahoo! Research, Santa Clara, CA, USA;Yahoo! Labs, Bangalore, India;National Taiwan University, Taipei, Taiwan, ROC;National Taiwan University, Taipei, Taiwan, ROC;National Taiwan University, Taipei, Taiwan, ROC

  • Venue:
  • Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
  • Year:
  • 2008

Abstract

Efficient training of direct multi-class formulations of linear Support Vector Machines is very useful in applications such as text classification with a huge number of examples as well as features. This paper presents a fast dual method for this training. The main idea is to sequentially traverse the training set and optimize the dual variables associated with one example at a time. Training is accelerated further by shrinking and cooling heuristics. Experiments indicate that our method is much faster than state-of-the-art solvers such as bundle, cutting-plane, and exponentiated-gradient methods.
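The core idea — sequentially visiting one example at a time and solving its dual sub-problem in closed form while maintaining the primal weight vector — can be sketched as follows. This is a minimal, hypothetical illustration using the *binary* linear SVM dual (the paper applies the same sequential scheme to the Crammer-Singer multi-class dual, with shrinking and cooling heuristics omitted here); the function name and toy data are assumptions for the example.

```python
import numpy as np

def sequential_dual_svm(X, y, C=1.0, epochs=50):
    """Sketch of sequential dual coordinate optimization for a binary
    linear SVM with hinge loss (not the paper's full multi-class solver).

    Sequentially traverses the training set; each step updates the single
    dual variable alpha_i in closed form, keeping w = sum_i alpha_i y_i x_i
    in sync so each update costs O(#features).
    """
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.einsum('ij,ij->i', X, X)   # diagonal of Q: ||x_i||^2
    for _ in range(epochs):
        for i in range(n):              # sequential traversal of examples
            if Qii[i] == 0.0:
                continue
            G = y[i] * w.dot(X[i]) - 1.0                 # dual gradient w.r.t. alpha_i
            new_ai = min(max(alpha[i] - G / Qii[i], 0.0), C)  # project onto [0, C]
            w += (new_ai - alpha[i]) * y[i] * X[i]       # incremental w update
            alpha[i] = new_ai
    return w

# Toy linearly separable data (assumed for illustration)
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = sequential_dual_svm(X, y, C=1.0)
pred = np.sign(X.dot(w))
```

Because each coordinate step needs only the gradient of one dual variable and an O(d) update to w, a full pass over the data is linear in the number of nonzero features — the property that makes this family of methods attractive for large sparse text problems.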