We have previously introduced an incremental learning algorithm, Learn++, which learns novel information from consecutive data sets by generating an ensemble of classifiers with each data set and combining them by weighted majority voting. However, Learn++ suffers from an inherent "outvoting" problem when asked to learn a new class ωnew introduced by a subsequent data set, as earlier classifiers not trained on this class are guaranteed to misclassify ωnew instances. The collective votes of the earlier classifiers, for an inevitably incorrect decision, then outweigh the votes of the new classifiers' correct decision on ωnew instances, until there are enough new classifiers to counteract the unfair outvoting. This forces Learn++ to generate an unnecessarily large number of classifiers. This paper describes Learn++.NC, specifically designed for efficient incremental learning of multiple New Classes using significantly fewer classifiers. To do so, Learn++.NC introduces dynamically weighted consult-and-vote (DW-CAV), a novel voting mechanism for combining classifiers: individual classifiers consult with each other to determine which ones are most qualified to classify a given instance, and decide how much weight, if any, each classifier's decision should carry. Experiments on real-world problems indicate that the new algorithm performs remarkably well with substantially fewer classifiers, not only as compared to its predecessor Learn++, but also as compared to several other algorithms recently proposed for similar problems.
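The outvoting problem, and the consult-style remedy, can be made concrete with a toy sketch. This is not the paper's actual DW-CAV weighting rule; the gating below is a deliberately simplified stand-in (a classifier abstains whenever another classifier proposes a class it was never trained on), and the names `gated_vote` and `known_classes` are illustrative inventions:

```python
def weighted_majority_vote(predictions, weights):
    """Sum classifier weights per predicted class; return the winning class."""
    tally = {}
    for label, w in zip(predictions, weights):
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)


def gated_vote(predictions, weights, known_classes):
    """Simplified consult step (NOT the paper's exact DW-CAV): zero a
    classifier's weight when some classifier proposes a class it has never
    been trained on, i.e. it is unqualified to vote on this instance."""
    proposed = set(predictions)
    gated = [w if proposed <= known else 0.0
             for w, known in zip(weights, known_classes)]
    return weighted_majority_vote(predictions, gated)


# Five earlier classifiers, trained only on {A, B}, are guaranteed to
# misclassify an instance of the new class C; two later classifiers,
# trained after C appeared, vote C correctly.
preds = ["A"] * 5 + ["C"] * 2
weights = [1.0] * 7
known = [{"A", "B"}] * 5 + [{"A", "B", "C"}] * 2

print(weighted_majority_vote(preds, weights))  # "A": the new class is outvoted
print(gated_vote(preds, weights, known))       # "C": unqualified votes removed
```

The first call shows the failure mode the abstract describes: five equal-weight incorrect votes overwhelm two correct ones. The second shows why per-instance consultation helps: once votes from classifiers that cannot possibly recognize ωnew are discounted, a small number of new classifiers suffices.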