In this paper we consider a new paradigm of learning: learning using hidden information. The classical paradigm of supervised learning is to learn a decision rule from labeled data (xi, yi), xi ∈ X, yi ∈ {-1, 1}, i = 1, ..., l. In this paper we consider a new setting: given training vectors in space X together with their labels and a description of these vectors in another space X*, find in space X a decision rule better than the one found in the classical paradigm.
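To make the two settings concrete, the following sketch builds a hypothetical synthetic training set of triplets (xi, xi*, yi) and trains a classifier under the classical paradigm, i.e. from (xi, yi) alone, discarding the extra description xi*. All data shapes, the generating distribution, and the choice of a simple perceptron as the learner are illustrative assumptions, not the paper's method; the paper's point is that a rule exploiting X* during training can outperform this baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: each training example is a triplet
# (x_i, x_i_star, y_i).  x_i lives in the decision space X; x_i_star is
# the extra description in space X*, available only at training time.
n, d, d_star = 200, 2, 3
y = rng.choice([-1, 1], size=n)
X = y[:, None] * 1.0 + 0.5 * rng.standard_normal((n, d))            # space X
X_star = y[:, None] * 2.0 + 0.1 * rng.standard_normal((n, d_star))  # space X*

# Classical paradigm: learn a decision rule from (x_i, y_i) alone.
# A plain perceptron stands in for the learner; X_star is ignored.
w = np.zeros(d)
for _ in range(20):
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:  # misclassified: update the weight vector
            w += yi * xi

pred = np.sign(X @ w)
accuracy = float((pred == y).mean())
print(accuracy)
```

At test time only vectors in X are available in both paradigms; the new setting differs solely in what the learner may look at while choosing the rule.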