To feature space and back: Identifying top-weighted features in polynomial Support Vector Machine models

  • Authors:
  • Laura E. Brown; Ioannis Tsamardinos; Douglas P. Hardin

  • Affiliations:
  • Department of Computer Science, Michigan Technological University, Houghton, MI, USA; Department of Computer Science, University of Crete, Iraklio, Crete, Greece and Institute of Computer Science, Foundation for Research and Technology, Hellas, Greece; Department of Mathematics, Vanderbilt University, Nashville, TN, USA and Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA

  • Venue:
  • Intelligent Data Analysis
  • Year:
  • 2012


Abstract

Polynomial Support Vector Machine (SVM) models of degree d are linear functions in a feature space of monomials of degree at most d. However, the model is actually stored as a set of support vectors and Lagrange multipliers, a representation unsuitable for human understanding. An efficient, heuristic method is presented for searching the feature space of a polynomial SVM model for the features with the largest absolute weights. The time complexity of this method is Θ(dms^2 + sdp), where m is the number of variables, d the degree of the kernel, s the number of support vectors, and p the number of features the algorithm is allowed to search. In contrast, the brute-force approach of constructing all weights and then selecting the largest has complexity Θ(sd·C(m+d, d)), where C(m+d, d) denotes the binomial coefficient counting all monomials of degree at most d. The method is shown to be effective in identifying the top-weighted features on several simulated data sets where the true weight vector is known. Additionally, the method is run on several high-dimensional, real-world data sets, where the returned features may be used to construct classifiers with classification performance similar to that of models built with all variables or with subsets returned by variable selection methods. This algorithm provides a new ability to understand, conceptualize, visualize, and communicate polynomial SVM models, and has implications for feature construction, dimensionality reduction, and variable selection.
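The brute-force baseline the abstract contrasts against can be sketched directly: a homogeneous polynomial kernel expands as (x·z)^d = Σ_{|β|=d} C(d; β) x^β z^β, so the explicit weight on each monomial z^β is obtainable by summing over the support vectors. The sketch below is illustrative only, assuming a homogeneous kernel (x·z)^d and unnormalized monomial features; function and variable names are ours, not the paper's, and the inhomogeneous kernel (x·z + 1)^d can be handled by appending a constant 1 coordinate to each vector.

```python
# Illustrative brute-force expansion of a homogeneous polynomial SVM into
# explicit monomial weights (the Theta(sd * C(m+d, d))-style approach the
# abstract contrasts against). Assumptions: kernel K(x, z) = (x . z)^d and
# unnormalized monomial features z^beta; scaling conventions differ elsewhere.
from itertools import combinations_with_replacement
from math import factorial, prod

def monomial_weights(X, dual_coefs, d):
    """Expand f(z) = sum_i a_i (x_i . z)^d into weights on monomials z^beta.

    X          : support vectors, s rows of m floats
    dual_coefs : a_i = alpha_i * y_i, one per support vector
    d          : kernel degree
    Returns {beta (tuple of m exponents): weight}, one entry per monomial
    of total degree exactly d.
    """
    m = len(X[0])
    weights = {}
    # every multiset of d variable indices corresponds to one monomial
    for combo in combinations_with_replacement(range(m), d):
        beta = [0] * m
        for j in combo:
            beta[j] += 1
        # multinomial coefficient d! / (beta_1! ... beta_m!)
        mult = factorial(d)
        for b in beta:
            mult //= factorial(b)
        # w_beta = sum_i a_i * mult * x_i^beta
        w = sum(a * mult * prod(x[j] ** b for j, b in enumerate(beta) if b)
                for a, x in zip(dual_coefs, X))
        weights[tuple(beta)] = w
    return weights

# Top-weighted features, by absolute weight, from the explicit expansion:
w = monomial_weights([[1.0, 2.0]], [1.0], 2)  # (z1 + 2*z2)^2
top = sorted(w.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

The loop visits all C(m+d-1, d) degree-d monomials, which is exactly why this enumeration becomes infeasible for high-dimensional data and motivates the paper's heuristic search.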