In most kernel-based online learning algorithms, when an incoming instance is misclassified it is added to the pool of support vectors and assigned a weight, which typically remains unchanged for the rest of the learning process. This is clearly insufficient: when a new support vector is added, we generally expect the weights of the existing support vectors to be updated to reflect the influence of the new addition. In this paper, we propose a new online learning method, termed Double Updating Online Learning, or DUOL for short, that explicitly addresses this problem. Instead of only assigning a fixed weight to the misclassified example received at the current trial, the proposed algorithm also updates the weight of one of the existing support vectors. We show that this double-updating strategy improves the mistake bound. We conduct an extensive set of empirical evaluations on both binary and multi-class online learning tasks. The experimental results show that the proposed technique is considerably more effective than state-of-the-art online learning algorithms. The source code is publicly available at http://www.cais.ntu.edu.sg/~chhoi/DUOL/.
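The double-updating idea can be sketched as follows. This is an illustrative simplification, not the exact DUOL optimization: the RBF kernel, the fixed weights, the step size `eta`, and the rule for picking the auxiliary support vector (the existing one that most conflicts with the new example) are assumptions made for this sketch.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel; choice of kernel is an assumption of this sketch.
    d = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
    return np.exp(-gamma * d.dot(d))

class DoubleUpdateKernelPerceptron:
    """Kernel perceptron with a simplified 'double update' on each mistake."""

    def __init__(self, gamma=1.0, eta=0.5):
        self.gamma = gamma
        self.eta = eta        # step size for the auxiliary update (assumed)
        self.sv = []          # support set: list of (x, y, weight)

    def decision(self, x):
        # f(x) = sum_i w_i * y_i * k(x_i, x)
        return sum(w * y * rbf(sx, x, self.gamma) for sx, y, w in self.sv)

    def fit_one(self, x, y):
        if y * self.decision(x) > 0:
            return  # correctly classified: no update
        # First update: add the misclassified example as a support vector
        # with a fixed weight (as a single-update algorithm would).
        self.sv.append((x, y, 1.0))
        # Second update: among the existing support vectors, pick the one
        # that most conflicts with the new example, i.e. whose term
        # y_b * y * k(x_b, x) is most negative, and increase its weight
        # so its own margin recovers from the new addition.
        if len(self.sv) > 1:
            scores = [yb * y * rbf(xb, x, self.gamma)
                      for xb, yb, _ in self.sv[:-1]]
            b = int(np.argmin(scores))
            if scores[b] < 0:
                xb, yb, wb = self.sv[b]
                self.sv[b] = (xb, yb, wb + self.eta)
```

Running `fit_one` over a stream of labeled examples grows the support set on mistakes while occasionally revising an old weight, which is the behavior the double-updating strategy above is meant to capture.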