In this paper we study online classification algorithms for multiclass problems in the mistake bound model. The hypotheses we use maintain one prototype vector per class. Given an input instance, a multiclass hypothesis computes a similarity-score between each prototype and the input instance and then sets the predicted label to be the index of the prototype achieving the highest similarity. To design and analyze the learning algorithms in this paper we introduce the notion of ultraconservativeness. Ultraconservative algorithms update only the prototypes attaining similarity-scores higher than the score of the correct label's prototype. We start by describing a family of additive ultraconservative algorithms in which each algorithm updates its prototypes by finding a feasible solution for a set of linear constraints that depend on the instantaneous similarity-scores. We then discuss a specific online algorithm that seeks a set of prototypes with a small norm. The resulting algorithm, which we term MIRA (for Margin Infused Relaxed Algorithm), is ultraconservative as well. We derive mistake bounds for all the algorithms and further analyze MIRA using a generalized notion of the margin for multiclass problems.
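To make the ultraconservative property concrete, the following is a minimal sketch of one simple member of the additive family: a multiclass perceptron-style learner that, on a mistake, promotes the correct label's prototype and demotes only the prototypes in the error set (those scoring at least as high as the correct prototype), each by an equal share. This uniform split of the update coefficients is one feasible choice for the linear constraints described above; it is not the MIRA optimization itself, and the function name and interface here are illustrative, not from the paper.

```python
import numpy as np

def ultraconservative_fit(X, y, n_classes, n_epochs=10):
    """Online multiclass learner keeping one prototype vector per class.

    Sketch of a uniform additive ultraconservative update (hypothetical
    interface): on a mistake, only prototypes whose similarity-score
    meets or exceeds the correct label's score are changed.
    """
    W = np.zeros((n_classes, X.shape[1]))  # one prototype row per class
    for _ in range(n_epochs):
        for x, label in zip(X, y):
            scores = W @ x                 # similarity-score per prototype
            if np.argmax(scores) != label:
                # Error set: wrong labels scoring at least as high as
                # the correct label's prototype. Nonempty on a mistake.
                E = [r for r in range(n_classes)
                     if r != label and scores[r] >= scores[label]]
                W[label] += x              # coefficient +1 on correct class
                for r in E:
                    W[r] -= x / len(E)     # coefficient -1/|E| on error set
    return W
```

Prototypes outside the error set are left untouched, which is exactly what makes the update ultraconservative.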