We present a new and simple algorithm for learning large margin classifiers that works in a truly online manner. The algorithm generates a linear classifier by averaging the weights of several perceptron-like algorithms run in parallel, in order to approximate the Bayes point. A random subsample of the incoming data stream ensures diversity among the perceptron solutions. We study the algorithm's performance experimentally in both online and batch learning settings. In the online experiments, the algorithm produces a low prediction error on the training sequence and tracks the presence of concept drift. On the batch problems, its performance is comparable to that of the maximum margin algorithm, which explicitly maximises the margin.
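The core idea described above can be sketched in a few lines: run several perceptrons in parallel, let each one update only on a random subsample of the stream (to keep the members diverse), and classify with the sign of the averaged weight vector as an approximation to the Bayes point. The class name, ensemble size, and subsampling probability below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

class AveragedPerceptronEnsemble:
    """Sketch: parallel perceptrons on random subsamples of the stream;
    the final linear classifier averages their weights."""

    def __init__(self, dim, n_members=10, sample_prob=0.7):
        # one weight vector per ensemble member
        self.w = np.zeros((n_members, dim))
        # probability that a member sees a given incoming example
        self.sample_prob = sample_prob

    def update(self, x, y):
        """Online update with label y in {-1, +1}."""
        for i in range(len(self.w)):
            # random subsampling keeps the perceptron solutions diverse
            if rng.random() < self.sample_prob:
                if y * (self.w[i] @ x) <= 0:  # perceptron mistake
                    self.w[i] += y * x

    def predict(self, x):
        # average the member weights (Bayes point approximation),
        # then take the sign
        return 1 if self.w.mean(axis=0) @ x >= 0 else -1
```

A single pass over a stream of `(x, y)` pairs, calling `predict` before `update` on each example, yields the online prediction error the abstract refers to.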