A number of learning tasks can be solved robustly using key concepts from statistical learning theory. In this paper we first summarize the main concepts of statistical learning theory, a framework in which certain learning-from-examples problems, namely classification, regression, and density estimation, have been studied in a principled way. We then show how the key concepts of the theory can be applied not only to these standard problems but also to many others; in particular, we discuss how to learn functions that model a preference relation. Our goal is to illustrate the value of statistical learning theory beyond the standard framework in which it has been used until now.
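The preference-learning problem mentioned above is commonly reduced to a pairwise classification problem: if item i is preferred to item j, a linear utility function f(x) = w·x should satisfy f(x_i) > f(x_j). The sketch below illustrates this standard reduction with a regularized pairwise hinge loss trained by simple subgradient updates; it is a minimal illustration of the general idea, not the specific algorithm of the paper, and all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def fit_preference(X, pairs, lr=0.1, epochs=200, lam=1e-3):
    """Learn a linear utility f(x) = w . x so that f(x_i) > f(x_j)
    for each preferred pair (i, j).

    Reduction: each preference (i, j) becomes the constraint
    w . (x_i - x_j) >= 1, enforced via a pairwise hinge loss with
    a small L2 regularization term (illustrative sketch only).
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            d = X[i] - X[j]        # difference vector of the preferred pair
            if w @ d < 1.0:        # hinge is active: constraint violated
                w += lr * d        # subgradient step toward satisfying it
        w *= (1.0 - lr * lam)      # shrink w toward 0 (regularization)
    return w

# Toy example: preferences are generated from the first feature,
# with a margin of 0.5 so the pairs are linearly rankable.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
pairs = [(i, j) for i in range(20) for j in range(20)
         if X[i, 0] > X[j, 0] + 0.5]
w = fit_preference(X, pairs)
scores = X @ w  # higher score = more preferred
```

The learned `scores` induce a ranking that should agree with the training preferences; the same reduction underlies kernel-based ranking methods, where the linear utility is replaced by a function in a reproducing-kernel Hilbert space.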