1-norm support vector novelty detection and its sparseness

  • Authors: Li Zhang; Weida Zhou

  • Venue: Neural Networks
  • Year: 2013

Abstract

This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and induces sparseness through two mechanisms: the 1-norm regularization term and the hinge loss function. We also derive two upper bounds on the sparseness of 1-norm SVND: an exact support vector (ESV) bound and a kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND yields a sparser representation model than SVND, while the kernel Gram matrix rank bound gives a loose estimate of the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective.
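
The abstract does not spell out the optimization problem, but the sketch below shows one plausible way to pose kernel-based novelty detection with a 1-norm regularizer as a linear program, in the spirit of 1-norm SVMs and the standard one-class SVM. The coefficients \alpha_j, slacks \xi_i, offset \rho, trade-off parameter C, and kernel k(\cdot,\cdot) are illustrative and may differ from the paper's exact formulation:

  \min_{\alpha,\,\xi,\,\rho}\;\; \sum_{j=1}^{n} \alpha_j \;+\; C \sum_{i=1}^{n} \xi_i \;-\; \rho

  \text{s.t.}\;\; \sum_{j=1}^{n} \alpha_j\, k(\mathbf{x}_i, \mathbf{x}_j) \;\ge\; \rho - \xi_i, \qquad \alpha_j \ge 0,\;\; \xi_i \ge 0,\;\; i, j = 1, \dots, n.

With nonnegative coefficients, \|\alpha\|_1 = \sum_j \alpha_j, so the objective and constraints are all linear and the problem can be handed to an off-the-shelf LP solver (if signed coefficients were allowed, the usual split \alpha_j = \alpha_j^{+} - \alpha_j^{-} would restore linearity). The two sparseness-inducing ingredients named in the abstract appear directly in this sketch: the 1-norm penalty drives many \alpha_j to zero, so only the training points with \alpha_j \ne 0 (the support vectors) enter the decision function, and the hinge-type slacks \xi_i are nonzero only for points that fall below the offset \rho. A test point \mathbf{x} would be flagged as novel when \sum_j \alpha_j k(\mathbf{x}, \mathbf{x}_j) < \rho.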