Quadratic optimization fine tuning for the learning phase of SVM

  • Authors:
  • Miguel González-Mendoza; Neil Hernández-Gress; André Titli

  • Affiliations:
  • LAAS-CNRS, Toulouse Cedex 4, France; ITESM-CEM, Atizapán de Zaragoza, Estado de México, México; INSA Toulouse, Toulouse Cedex 4, France

  • Venue:
  • ISSADS'05 Proceedings of the 5th international conference on Advanced Distributed Systems
  • Year:
  • 2005

Abstract

This paper presents a study of the quadratic programming (QP) problem underlying the learning process of Support Vector Machines (SVM). Starting from the Karush-Kuhn-Tucker (KKT) optimality conditions, we present implementation strategies for the SVM-QP following two classical approaches: i) active-set methods, in both the primal and the dual space, and ii) interior-point methods. We also present the general extension for treating large-scale applications, which consists of decomposing the QP problem into smaller subproblems. Likewise, we discuss some considerations for initializing the learning process. We compare the performance of the optimization strategies on well-known benchmark databases.
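The abstract references the SVM dual QP, its KKT conditions, and decomposition into smaller subproblems, but the page carries no code. As a rough, self-contained illustration (not the authors' implementation), the sketch below solves the SVM dual QP — maximize Σαᵢ − ½ΣΣ αᵢαⱼyᵢyⱼK(xᵢ,xⱼ) subject to 0 ≤ αᵢ ≤ C and Σαᵢyᵢ = 0 — using a simplified Sequential Minimal Optimization loop, i.e. the extreme case of decomposition with working sets of size two. The toy dataset, the `C` value, and the naive second-index choice are all assumptions made for the example.

```python
import numpy as np

def svm_dual_smo(X, y, C=1.0, tol=1e-5, max_passes=50):
    """Simplified SMO for the SVM dual QP with a linear kernel.

    Each iteration picks a pair (i, j) of multipliers violating the
    KKT conditions and optimizes the dual analytically over that pair,
    keeping the equality constraint sum(alpha * y) = 0 satisfied.
    """
    n = X.shape[0]
    K = X @ X.T                      # linear kernel Gram matrix
    alpha = np.zeros(n)
    b = 0.0
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]
            # KKT violation test for alpha[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = (i + 1) % n      # naive second-index heuristic (assumption)
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # Box bounds keeping 0 <= alpha <= C and the equality constraint
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]   # negative curvature along the pair
                if eta >= 0:
                    continue
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-7:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # Update the bias from the KKT stationarity conditions
                b1 = b - Ei - y[i]*(alpha[i]-ai_old)*K[i, i] - y[j]*(alpha[j]-aj_old)*K[i, j]
                b2 = b - Ej - y[i]*(alpha[i]-ai_old)*K[i, j] - y[j]*(alpha[j]-aj_old)*K[j, j]
                b = b1 if 0 < alpha[i] < C else b2 if 0 < alpha[j] < C else (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0

    return alpha, b

# Assumed toy data: two linearly separable clusters
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, b = svm_dual_smo(X, y)
w = (alpha * y) @ X                  # primal weights recovered from the duals
print("support multipliers:", alpha, "bias:", b)
```

The pairwise working set is the smallest one for which the equality constraint can still be maintained in closed form; the full-scale decomposition methods discussed in the paper generalize this to larger working sets selected from the KKT violators.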