Machine Learning
A Second-Order Perceptron Algorithm. SIAM Journal on Computing.
Online Passive-Aggressive Algorithms. Journal of Machine Learning Research.
Solving multiclass support vector machines with LaRank. Proceedings of the 24th International Conference on Machine Learning.
Confidence-weighted linear classification. Proceedings of the 25th International Conference on Machine Learning.
Efficient bandit algorithms for online multiclass prediction. Proceedings of the 25th International Conference on Machine Learning.
Sequence Labelling SVMs Trained in One Pass. Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '08), Part I.
Multi-class confidence weighted algorithms. Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP '09), Volume 2.
Micro-blogging Sentiment Detection by Collaborative Online Learning. Proceedings of the 2010 IEEE International Conference on Data Mining (ICDM '10).
The huller: a simple and efficient online SVM. Proceedings of the 16th European Conference on Machine Learning (ECML '05).
Proceedings of the 16th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining (PAKDD '12), Part I.
A math-aware search engine for math question answering system. Proceedings of the 21st ACM International Conference on Information and Knowledge Management.
Adaptive two-view online learning for math topic classification. Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '12), Part I.
We propose a family of Passive-Aggressive Mahalanobis (PAM) algorithms: incremental (online) binary classifiers that take the distribution of the data into account. PAM generalizes the Passive-Aggressive (PA) algorithms to data distributions that can be represented by a covariance matrix. We derive the update equations for PAM and compute theoretical loss bounds. We benchmarked PAM against the original PA-I and PA-II algorithms and against Confidence-Weighted (CW) learning. Although PAM somewhat resembles CW in its update equations, PA minimizes differences in the weights, whereas CW minimizes differences in weight distributions. Results on eight classification datasets, including a real-life micro-blog sentiment classification task, show that PAM consistently outperformed its competitors, most notably CW. This suggests that a simple approach like PAM is more practical for real-life classification tasks than more sophisticated approaches like CW.
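To make the distinction concrete, the sketch below shows a standard PA-I online update, plus a hypothetical Mahalanobis-weighted variant in the spirit described above. The paper's exact PAM update equations are not reproduced in this excerpt, so `pam_update` is an illustrative assumption: it simply replaces the Euclidean normalizer x·x with x·Σx and steers the correction through the covariance matrix `sigma`.

```python
import numpy as np

def pa1_update(w, x, y, C=1.0):
    """One Passive-Aggressive (PA-I) step for a binary label y in {-1, +1}.

    Stay passive when the hinge loss is zero; otherwise move w just far
    enough to correct the mistake, with the step size capped by the
    aggressiveness parameter C.
    """
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss == 0.0:
        return w                           # passive: margin already >= 1
    tau = min(C, loss / np.dot(x, x))      # PA-I step size
    return w + tau * y * x

def pam_update(w, sigma, x, y, C=1.0):
    """Hypothetical Mahalanobis variant (illustrative only).

    The step size is normalized by x^T Sigma x rather than x^T x, so
    directions of high variance in the data receive smaller corrections.
    With sigma = I this reduces exactly to pa1_update.
    """
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss == 0.0:
        return w
    sx = sigma @ x                         # covariance-weighted direction
    tau = min(C, loss / np.dot(x, sx))
    return w + tau * y * sx
```

For a misclassified point and a large enough C, one PA-I step places the point exactly on the margin (y·w·x = 1), which is the "aggressive" half of the passive-aggressive trade-off.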