A Max-Margin Learning Algorithm with Additional Features

  • Authors:
Xinwang Liu, Jianping Yin, En Zhu, Yubin Zhan, Miaomiao Li, Changwang Zhang

  • Affiliations:
School of Computer Science, National University of Defense Technology, Changsha, China 410073 (Xinwang Liu, Jianping Yin, En Zhu, Yubin Zhan, Changwang Zhang); College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China 650216 (Miaomiao Li)

  • Venue:
FAW '09 Proceedings of the 3rd International Workshop on Frontiers in Algorithmics
  • Year:
  • 2009

Abstract

This paper investigates the problem of learning classifiers from samples that carry additional features, some of which are absent due to noise or corrupted measurements. The common approach to handling missing features in discriminative models is to first complete their unknown values by some method and then run a standard classification procedure on the completed data. In this paper, an incremental max-margin learning algorithm is proposed to handle data that have additional features, some of which are missing. We show how a max-margin learning framework can classify the incomplete data directly, without any completion of the missing features. Based on the geometric interpretation of the margin, we formulate an objective function that maximizes the margin of each sample in its own relevant subspace. In this formulation, we reuse the structural parameters trained from the existing features and optimize only the structural parameters trained from the additional features. A two-step iterative procedure for solving the objective function is proposed. By avoiding the pre-processing phase in which the data are completed, our algorithm offers considerable computational savings. Moreover, by reusing the structural parameters trained from the existing features and training only on the additional, possibly absent features, our algorithm saves much training time. We demonstrate our results on a large number of standard benchmarks from UCI, and the results show that our algorithm achieves better or comparable classification accuracy.
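The idea sketched in the abstract can be illustrated with a minimal toy implementation. This is an assumption-laden sketch, not the authors' actual formulation: it scores each sample only over its observed ("relevant") features, keeps the weights learned from the existing features fixed, and fits the weights of the additional features by subgradient descent on a hinge loss. The function names `subspace_score` and `train_additional`, the toy data, and the hyperparameters are all illustrative choices, not taken from the paper.

```python
def subspace_score(w, b, x, mask):
    """Score a sample using only its observed features (mask[j] truthy)."""
    return sum(wj * xj for wj, xj, mj in zip(w, x, mask) if mj) + b

def train_additional(w_old, b, X, Y, masks, lr=0.1, epochs=200, lam=0.01):
    """Keep the weights learned from the existing features fixed (w_old)
    and fit weights for the additional, possibly missing, features only,
    via subgradient descent on a hinge loss evaluated in each sample's
    own observed subspace. Illustrative sketch, not the paper's method."""
    n_old, n_feat = len(w_old), len(X[0])
    w = list(w_old) + [0.0] * (n_feat - n_old)
    for _ in range(epochs):
        for x, y, m in zip(X, Y, masks):
            violated = y * subspace_score(w, b, x, m) < 1.0
            for j in range(n_old, n_feat):      # update additional weights only
                grad = lam * w[j]               # regularization term
                if violated and m[j]:           # observed feature of a margin violator
                    grad -= y * x[j]
                w[j] -= lr * grad
    return w

# Toy demo: one existing feature (its weight stays fixed at 1.0) plus one
# additional feature, which is missing for the second sample (mask 0; the
# 0.0 value there is a placeholder that the masked score never reads).
X     = [[0.1, 1.0], [0.2, 0.0], [-0.1, -1.0], [-0.2, -1.0]]
Y     = [1, 1, -1, -1]
masks = [[1, 1], [1, 0], [1, 1], [1, 1]]
w = train_additional([1.0], 0.0, X, Y, masks)
```

Because the existing-feature weights are never touched, each iteration only costs work proportional to the number of additional features, which is the source of the training-time saving the abstract claims; no imputation step is needed because masked features simply drop out of both the score and the update.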