The design of very large learning systems presents many unsolved challenges. Consider, for instance, a system that ‘watches’ television for a few weeks and learns to enumerate the objects present in these images. Most current learning algorithms do not scale well enough to handle such massive quantities of data. Experience suggests that stochastic learning algorithms are best suited to such tasks. This is surprising at first, because stochastic learning algorithms optimize the training error rather slowly. Our paper reconsiders convergence speed in terms of how fast a learning algorithm optimizes the testing error. This reformulation shows the superiority of well-designed stochastic learning algorithms. Copyright © 2005 John Wiley & Sons, Ltd.
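To make the argument concrete, here is a minimal sketch (mine, not from the paper) of the comparison the abstract describes: stochastic and batch gradient descent are given the same compute budget, counted in training-example visits, and judged by the testing error each reaches. The model (linear least squares), data sizes, and step-size schedules are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: same compute budget (example visits), compare held-out test error.
# All hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 20, 10_000, 2_000
w_true = rng.normal(size=d)
X_tr = rng.normal(size=(n_train, d))
y_tr = X_tr @ w_true + 0.1 * rng.normal(size=n_train)
X_te = rng.normal(size=(n_test, d))
y_te = X_te @ w_true + 0.1 * rng.normal(size=n_test)

def test_mse(w):
    """Mean squared error on the held-out test set."""
    return float(np.mean((X_te @ w - y_te) ** 2))

budget = 2 * n_train  # total training-example visits allowed per method

# Stochastic gradient descent: one example per update, ~1/t step sizes,
# so the budget buys 2 * n_train parameter updates.
w_sgd = np.zeros(d)
for t in range(budget):
    i = rng.integers(n_train)
    lr = 0.5 / (10 + t)
    w_sgd -= lr * (X_tr[i] @ w_sgd - y_tr[i]) * X_tr[i]

# Batch gradient descent: each update costs a full pass over the data,
# so the same budget buys only budget // n_train = 2 updates.
w_gd = np.zeros(d)
for _ in range(budget // n_train):
    grad = X_tr.T @ (X_tr @ w_gd - y_tr) / n_train
    w_gd -= 0.5 * grad

print(f"SGD   test MSE: {test_mse(w_sgd):.4f}")
print(f"Batch test MSE: {test_mse(w_gd):.4f}")
```

On a run like this, the stochastic method reaches a testing error near the noise floor within the budget, while batch descent, having taken only two full-gradient steps, remains far from it: measured per training error, SGD looks slow, but measured per unit of computation on the testing error, it wins.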