In this paper, a modification of ν-support vector machines (ν-SVM) for regression and classification is described, and the use of a parametric insensitive/margin model with an arbitrary shape is demonstrated. This can be useful in many cases, especially when the noise is heteroscedastic, that is, when it depends strongly on the input value x. Like the original ν-SVM, the proposed support vector algorithms have the advantage of using a parameter ν, with 0 ≤ ν ≤ 1, to control the number of support vectors. More precisely, ν is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. The algorithms are analyzed both theoretically and experimentally.
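The role of ν described above can be illustrated with standard ν-SVR (not the parametric-insensitive variant proposed in the paper). A minimal sketch, assuming scikit-learn's `NuSVR` and synthetic data with heteroscedastic noise whose standard deviation grows with |x|; the observed fraction of support vectors should stay at or above ν:

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-3.0, 3.0, size=(n, 1))
# Heteroscedastic noise: its standard deviation depends on the input x.
y = np.sinc(X).ravel() + rng.normal(0.0, 0.1 * np.abs(X).ravel() + 0.01)

for nu in (0.2, 0.5, 0.8):
    model = NuSVR(nu=nu, C=1.0, kernel="rbf").fit(X, y)
    frac_sv = len(model.support_) / n  # fraction of training points that are SVs
    print(f"nu={nu}: fraction of support vectors = {frac_sv:.2f}")
```

Raising ν forces more training points to become support vectors (and tolerates more margin errors), which is exactly the trade-off the ν-parameterization exposes.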