Does SVM really scale up to large bag of words feature spaces?

  • Authors:
  • Fabrice Colas; Pavel Paclík; Joost N. Kok; Pavel Brazdil

  • Affiliations:
  • LIACS, Leiden University, The Netherlands; ICT Group, Delft University of Technology, The Netherlands; LIACS, Leiden University, The Netherlands; LIACC, NIAAD, University of Porto, Portugal

  • Venue:
  • IDA '07: Proceedings of the 7th International Conference on Intelligent Data Analysis
  • Year:
  • 2007

Abstract

We are concerned with the problem of learning classification rules in text categorization, where many authors have presented Support Vector Machines (SVM) as the leading classification method. A number of studies, however, have repeatedly pointed out that in some situations SVM is outperformed by simpler methods such as naive Bayes or the nearest-neighbor rule. In this paper, we aim to develop a better understanding of SVM behaviour in typical text categorization problems represented by sparse bag of words feature spaces. We study in detail the performance and the number of support vectors when varying the training set size, the number of features and, unlike existing studies, also the SVM free parameter C, which is the upper bound on the Lagrange multipliers in the SVM dual formulation. We show that SVM solutions with small C are high performers. However, most training documents are then bounded support vectors sharing the same weight C. Thus, the SVM reduces to a nearest mean classifier; this raises an interesting question about the merits of SVM in sparse bag of words feature spaces. Additionally, SVM suffers from performance deterioration for particular combinations of training set size and number of features.
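
The following is a minimal sketch, not the authors' experimental code, of the phenomenon the abstract describes: with a small C, most training documents end up as bounded support vectors (alpha_i at the box constraint C), and the linear SVM behaves much like a nearest-mean (nearest-centroid) classifier on sparse bag of words features. The dataset, vectorizer settings, and C values are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.neighbors import NearestCentroid

# A binary text categorization problem (categories chosen only for illustration).
cats = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

# Sparse bag of words feature space.
vec = CountVectorizer(max_features=5000)
Xtr, Xte = vec.fit_transform(train.data), vec.transform(test.data)
ytr, yte = train.target, test.target

for C in (1e-3, 1e-1, 1e1):
    svm = SVC(kernel="linear", C=C).fit(Xtr, ytr)
    n_sv = svm.support_.size
    # Bounded support vectors: dual coefficients sitting at the box constraint |alpha_i| = C.
    n_bounded = int(np.sum(np.isclose(np.abs(svm.dual_coef_), C)))
    acc = svm.score(Xte, yte)
    print(f"C={C:g}: accuracy={acc:.3f}, SVs={n_sv}/{Xtr.shape[0]}, bounded SVs={n_bounded}")

# Nearest-mean baseline for comparison with the small-C SVM solutions.
centroid = NearestCentroid().fit(Xtr, ytr)
print(f"nearest-mean baseline: accuracy={centroid.score(Xte, yte):.3f}")
```

Under these assumptions, the small-C runs should show nearly all support vectors bounded at C while accuracy stays close to the nearest-mean baseline, which is the behaviour the paper examines.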