We discuss how the runtime of SVM optimization should decrease as the size of the training data increases. We present theoretical and empirical results demonstrating how a simple subgradient descent approach indeed displays such behavior, at least for linear kernels.
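As an illustration of the simple subgradient approach described above, here is a minimal sketch of stochastic subgradient descent on the regularized hinge loss of a linear SVM, in the spirit of Pegasos. The function name, step-size schedule details, and the toy data are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: stochastic subgradient descent for a linear SVM
# (hinge loss + L2 regularization). Names and toy data are assumptions.
import numpy as np

def svm_subgradient_descent(X, y, lam=0.1, n_iters=1000, seed=0):
    """Minimize lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i <w, x_i>)
    with stochastic subgradient steps and learning rate 1/(lam * t)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)            # sample one training example
        eta = 1.0 / (lam * t)          # decaying step size
        margin = y[i] * X[i].dot(w)
        # Subgradient of the regularized hinge loss at the current w:
        if margin < 1:
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1 - eta * lam) * w
    return w

# Toy linearly separable data: the label is the sign of the first coordinate.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0])
w = svm_subgradient_descent(X, y, lam=0.01, n_iters=2000)
acc = np.mean(np.sign(X.dot(w)) == y)
print(acc)
```

Note that each iteration touches a single example, so the cost per step is independent of the training-set size; this is what makes a runtime that does not grow (and can effectively shrink, for a fixed target accuracy) with more data plausible for linear kernels.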