To improve the single-run performance of online learning and reinforce its stability, this letter considers online learning with a learning rate that is adaptive but confined to a limited range. We extend the convergence proofs for NORMA to a range of step sizes and then apply support vector learning with stochastic meta-descent (SVMD), restricted to that range, for step-size adaptation. The result is an online kernel algorithm that combines theoretical convergence guarantees with good practical performance. Experiments on several data sets agree well with the theoretical results and show that the method is a promising approach to online learning.
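The sketch below illustrates the kind of algorithm the abstract describes: a NORMA-style online kernel update whose step size is adapted online and then clipped to a prescribed range. It is a minimal illustration only, assuming a hinge loss and a simplified sign-based multiplicative adaptation rule in place of full SVMD; the class name, parameter names, and default values are hypothetical and not the authors' implementation.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian RBF kernel between two feature vectors.
    return np.exp(-gamma * np.sum((x - z) ** 2))

class NormaClippedStepSize:
    """Illustrative NORMA-style online kernel classifier whose step size is
    adapted multiplicatively and clipped to [eta_min, eta_max].
    (Hypothetical sketch; simplified stand-in for SVMD adaptation.)"""

    def __init__(self, lam=0.01, eta=0.1, eta_min=0.01, eta_max=0.5,
                 meta_rate=0.05, gamma=1.0):
        self.lam = lam              # regularization parameter
        self.eta = eta              # current step size
        self.eta_min = eta_min      # lower end of the admissible range
        self.eta_max = eta_max      # upper end of the admissible range
        self.meta_rate = meta_rate  # meta step size for the adaptation
        self.gamma = gamma
        self.support = []           # stored examples
        self.alpha = []             # their expansion coefficients
        self.prev_grad = 0.0        # previous instantaneous loss gradient

    def predict(self, x):
        return sum(a * rbf_kernel(x, s, self.gamma)
                   for a, s in zip(self.alpha, self.support))

    def partial_fit(self, x, y):
        # Gradient of the hinge loss at the current prediction.
        f = self.predict(x)
        g = -y if y * f < 1.0 else 0.0

        # Simplified meta-descent: grow the step size when successive
        # gradients point the same way, shrink it when they disagree,
        # then clip to the range assumed by the convergence analysis.
        self.eta *= float(np.exp(self.meta_rate * np.sign(g * self.prev_grad)))
        self.eta = float(np.clip(self.eta, self.eta_min, self.eta_max))
        self.prev_grad = g

        # NORMA update: shrink old coefficients (regularization),
        # then append the new example if the loss is active.
        decay = 1.0 - self.eta * self.lam
        self.alpha = [a * decay for a in self.alpha]
        if g != 0.0:
            self.support.append(np.asarray(x, dtype=float))
            self.alpha.append(-self.eta * g)
        return f

# Toy usage: learn a nonlinear decision boundary in a single online pass.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0, 1.0, -1.0)
model = NormaClippedStepSize()
for xi, yi in zip(X, y):
    model.partial_fit(xi, yi)
```

The clipping step is the point of contact with the analysis: however the adaptation mechanism behaves, the step size never leaves the range for which convergence is established, so the adaptive variant keeps the guarantees of the fixed-rate algorithm.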