An Incremental Feature Learning Algorithm Based on Least Square Support Vector Machine
FAW '08 Proceedings of the 2nd annual international workshop on Frontiers in Algorithmics
This paper investigates the problem of learning classifiers from samples that carry additional features, some of which are absent due to noise or corrupted measurements. The common approach to handling missing features in discriminative models is to first impute their unknown values and then apply a standard classification procedure to the completed data. In this paper, we propose an incremental max-margin learning algorithm that handles data with additional, possibly missing features directly. We show how a max-margin learning framework can classify incomplete data without imputing the missing features. Based on the geometric interpretation of the margin, we formulate an objective function that maximizes the margin of each sample in its own relevant subspace. In this formulation, we reuse the structural parameters trained on the existing features and optimize only the structural parameters associated with the additional features. We propose a two-step iterative procedure for solving the objective function. By avoiding the pre-processing phase in which the data is completed, our algorithm offers considerable computational savings. Moreover, by reusing the structural parameters trained on the existing features and training only those for the additional absent features, it saves substantial training time. We demonstrate our results on a large number of standard benchmarks from the UCI repository, where our algorithm achieves better or comparable classification accuracy.
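The abstract does not give the algorithm itself, so the following is only a hedged illustration of the core idea, not the paper's exact method: each sample's margin is evaluated in its own relevant subspace, i.e. the classifier score uses only the features that are actually observed for that sample, with no imputation. The sketch below assumes a simple hinge loss optimized by subgradient descent; the function names (`train_absent_feature_svm`, `predict`) and all hyperparameters are hypothetical choices for illustration.

```python
import numpy as np

def train_absent_feature_svm(X, mask, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Hinge-loss sketch (illustrative, not the paper's procedure):
    the margin of each sample is computed only over its observed
    features, selected by a 0/1 mask, so missing entries are never
    imputed and simply contribute nothing to the score."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)
    b = 0.0
    for _ in range(epochs):
        # score each sample in its own observed-feature subspace
        scores = (X * mask) @ w + b
        margins = y * scores
        active = margins < 1.0                       # samples with active hinge
        # subgradient of lam/2*||w||^2 + mean(hinge), masked per sample
        gw = lam * w - ((y * active)[:, None] * (X * mask)).sum(axis=0) / n
        gb = -(y * active).sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict(w, b, X, mask):
    # prediction likewise ignores a sample's missing features
    return np.sign((X * mask) @ w + b)
```

On toy data where the discriminative feature is always observed and the remaining features are randomly masked, this sketch separates the classes without any completion step, mirroring the computational saving the abstract attributes to skipping the pre-processing phase.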