On-line learning and stochastic approximations. On-line Learning in Neural Networks.
RCV1: A New Benchmark Collection for Text Categorization Research. The Journal of Machine Learning Research.
On-line learning for very large data sets. Applied Stochastic Models in Business and Industry.
Fast Kernel Classifiers with Online and Active Learning. The Journal of Machine Learning Research.
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. Proceedings of the 24th International Conference on Machine Learning.
A dual coordinate descent method for large-scale linear SVM. Proceedings of the 25th International Conference on Machine Learning.
Fast and Scalable Local Kernel Machines. The Journal of Machine Learning Research.
Erratum: SGDQN is Less Careful than Expected. The Journal of Machine Learning Research.
Tree Decomposition for Large-Scale SVM Problems. The Journal of Machine Learning Research.
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. The Journal of Machine Learning Research.
Fair and balanced: learning to present news stories. Proceedings of the Fifth ACM International Conference on Web Search and Data Mining.
An online AUC formulation for binary classification. Pattern Recognition.
Structured Learning and Prediction in Computer Vision. Foundations and Trends® in Computer Graphics and Vision.
Hierarchical linear support vector machine. Pattern Recognition.
A spatial EA framework for parallelizing machine learning methods. Proceedings of the 12th International Conference on Parallel Problem Solving from Nature (PPSN'12), Part I.
An analysis of a spatial EA parallel boosting algorithm. Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation.
JKernelMachines: a simple framework for kernel machines. The Journal of Machine Learning Research.
The SGD-QN algorithm is a stochastic gradient descent method that makes careful use of second-order information and splits the parameter update into independently scheduled components. Thanks to this design, each SGD-QN iteration is nearly as fast as a first-order stochastic gradient step, yet the algorithm needs fewer iterations to reach the same accuracy. SGD-QN won the "Wild Track" of the first PASCAL Large Scale Learning Challenge (Sonnenburg et al., 2008).
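To make the two design points of the abstract concrete, here is a minimal Python sketch of such an update loop for a linear SVM. It is a simplified illustration, not the authors' implementation: the diagonal scaling matrix B is kept fixed (the real SGD-QN re-estimates it from gradient differences on scheduled iterations), and the function name and hyperparameter values are assumptions chosen for the example.

```python
import numpy as np

def sgdqn_sketch(X, y, lam=1e-4, t0=1e4, skip=16, epochs=5, seed=0):
    """Illustrative SGD-QN-style training loop for a linear SVM (hinge loss).

    Two ideas from the paper are mimicked:
      * the gradient step is rescaled by a diagonal matrix B standing in for
        second-order information (here fixed at 1/lam for brevity);
      * the regularization update is scheduled only every `skip` examples,
        so most iterations cost no more than a first-order SGD step.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    B = np.full(d, 1.0 / lam)   # diagonal scaling (simplified: kept fixed)
    t = t0                      # t0 delays the 1/t decay of the step size
    count = skip
    for _ in range(epochs):
        for i in rng.permutation(n):
            eta = 1.0 / (t + 1.0)
            if y[i] * (w @ X[i]) < 1.0:       # hinge loss is active
                w += eta * y[i] * (B * X[i])  # scaled loss-gradient step
            count -= 1
            if count <= 0:                    # scheduled regularization step
                w -= skip * eta * lam * (B * w)
                count = skip
            t += 1
    return w
```

The `skip` counter is what makes the scheduling cheap: the weight-decay term, whose gradient does not depend on the current example, is applied once every `skip` examples with a correspondingly larger step, leaving the inner loop as light as plain first-order SGD.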