A routine for converting regression algorithms into corresponding orthogonal regression algorithms. ACM Transactions on Mathematical Software (TOMS).
The nature of statistical learning theory.
Making large-scale support vector machine learning practical. Advances in Kernel Methods.
Least Squares Support Vector Machine Classifiers. Neural Processing Letters.
The Relaxed Online Maximum Margin Algorithm. Machine Learning.
A new approximate maximal margin classification algorithm. The Journal of Machine Learning Research.
Perceptual audio modeling with exponentially damped sinusoids. Signal Processing.
Online Passive-Aggressive Algorithms. The Journal of Machine Learning Research.
Overview of total least-squares methods. Signal Processing.
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. Proceedings of the 24th International Conference on Machine Learning.
Incremental margin algorithm for large margin classifiers. Neurocomputing.
Joint photometric and geometric image registration in the total least square sense. Pattern Recognition Letters.
Total least square kernel regression. Journal of Visual Communication and Image Representation.
Image denoising using total least squares. IEEE Transactions on Image Processing.
In this paper, we introduce a new online algorithm for orthogonal regression. The method is constructed via a stochastic gradient descent approach combined with a tube loss function similar to the one used in support vector (SV) regression. The algorithm can be formulated in either primal or dual variables; the dual formulation allows the introduction of kernels and soft margins. In addition, an incremental strategy is introduced, which can be used to find sparse solutions and also an approximation to the "minimal tube" containing the data. The algorithm is very simple to implement and avoids quadratic optimization.
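The abstract names the main ingredients of the method: a stochastic gradient descent update and a tube (epsilon-insensitive) loss applied to the orthogonal distance from each point to the fitted hyperplane. The sketch below is a minimal Python illustration of the primal, online variant only; the function name, the learning rate eta, the unit-norm projection step, and the fixed epoch count are assumptions made for illustration, not the paper's exact algorithm, and the kernelized dual, soft margins, and incremental tube-shrinking strategy are not shown.

```python
import numpy as np

def online_orthogonal_regression(X, epsilon=0.1, eta=0.01, epochs=10, seed=0):
    """Illustrative sketch: online orthogonal regression with a tube loss.

    Fits a hyperplane {x : w.x + b = 0} with ||w|| = 1, so w.x + b is the
    signed orthogonal distance from x to the hyperplane. Each sample whose
    distance exceeds the tube half-width `epsilon` triggers a stochastic
    (sub)gradient step on the loss max(0, |w.x + b| - epsilon).
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)               # start on the unit sphere
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            r = w @ X[i] + b             # signed orthogonal distance
            if abs(r) > epsilon:         # point falls outside the tube
                g = np.sign(r)
                w -= eta * g * X[i]      # subgradient step in w
                b -= eta * g             # subgradient step in b
                nrm = np.linalg.norm(w)
                w, b = w / nrm, b / nrm  # renormalize: same hyperplane,
                                         # restores ||w|| = 1
    return w, b

# Hypothetical usage: noisy 2-D points scattered around the line x1 + 2*x2 = 1.
rng = np.random.default_rng(1)
t = rng.uniform(-1.0, 1.0, size=(200, 1))
X = np.hstack([t, (1.0 - t) / 2.0]) + 0.02 * rng.normal(size=(200, 2))
w, b = online_orthogonal_regression(X, epsilon=0.05, eta=0.01, epochs=50)
print(w, b)  # expect w roughly proportional to (1, 2)/sqrt(5), b near -1/sqrt(5)
```

Unlike ordinary SV regression, which penalizes vertical deviations of y from f(x), the tube loss here is applied to the orthogonal distance of the whole data point to the fitted hyperplane, which is what makes the scheme an orthogonal (total least squares style) regression.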