Learning rates of gradient descent algorithm for classification
Journal of Computational and Applied Mathematics
In this paper, we study an online learning algorithm in Reproducing Kernel Hilbert Spaces (RKHSs) and general Hilbert spaces. We present a general form of the stochastic gradient method for minimizing a quadratic potential function over an independent and identically distributed (i.i.d.) sample sequence, and establish a probabilistic upper bound on its convergence.
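The scheme described in the abstract can be sketched in the finite-dimensional special case H = R^d: one i.i.d. sample is processed per step, and the iterate moves against the gradient of the quadratic potential. The function name `online_sgd` and the polynomially decaying step size eta_t = eta0 * t^(-theta) are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import random

def online_sgd(samples, dim, eta0=0.5, theta=0.5):
    # Online stochastic gradient descent for the quadratic potential
    # E[(<w, x> - y)^2] in the finite-dimensional case H = R^dim.
    # One i.i.d. sample (x, y) is consumed per step; the step size
    # decays polynomially as eta_t = eta0 * t^(-theta) (an assumed,
    # illustrative schedule).
    w = [0.0] * dim
    for t, (x, y) in enumerate(samples, start=1):
        eta = eta0 / t ** theta
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        # The gradient of (1/2) * (<w, x> - y)^2 at w is err * x.
        w = [wi - eta * err * xi for wi, xi in zip(w, x)]
    return w

# Illustrative usage on a noiseless linear target y = <w_star, x>,
# with inputs drawn i.i.d. from the uniform distribution on [-1, 1]^2.
random.seed(0)
w_star = [2.0, -1.0]
samples = []
for _ in range(5000):
    x = [random.uniform(-1.0, 1.0) for _ in range(2)]
    y = w_star[0] * x[0] + w_star[1] * x[1]
    samples.append((x, y))
w = online_sgd(samples, dim=2)
```

On this noiseless synthetic stream the iterate drifts toward `w_star` as the step size decays; in the RKHS setting the same update acts on a kernel expansion rather than a coordinate vector.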