Combination of Multiple Classifiers Using Local Accuracy Estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence.
On-Line Estimation of Hidden Markov Model Parameters. DS '00 Proceedings of the Third International Conference on Discovery Science.
Dynamic Classifier Selection for Effective Mining from Noisy Data Streams. ICDM '04 Proceedings of the Fourth IEEE International Conference on Data Mining.
Exploring bit-difference for approximate KNN search in high-dimensional databases. ADC '05 Proceedings of the 16th Australasian Database Conference - Volume 39.
From dynamic classifier selection to dynamic ensemble selection. Pattern Recognition.
Online learning with hidden Markov models. Neural Computation.
Incremental construction of classifier and discriminant ensembles. Information Sciences: an International Journal.
Combining incremental Hidden Markov Model and AdaBoost algorithm for anomaly intrusion detection. Proceedings of the ACM SIGKDD Workshop on CyberSecurity and Intelligence Informatics.
Incremental estimation of discrete hidden Markov models based on a new backward procedure. AAAI'05 Proceedings of the 20th National Conference on Artificial Intelligence - Volume 2.
Adaptive Incremental Learning with an Ensemble of Support Vector Machines. ICPR '10 Proceedings of the 2010 20th International Conference on Pattern Recognition.
Dynamic ensemble selection for off-line signature verification. MCS'11 Proceedings of the 10th International Conference on Multiple Classifier Systems.
Convolutional Neural Network Committees for Handwritten Character Classification. ICDAR '11 Proceedings of the 2011 International Conference on Document Analysis and Recognition.
Dynamic selection of ensembles of classifiers using contextual information. MCS'10 Proceedings of the 9th International Conference on Multiple Classifier Systems.
Learn++: an incremental learning algorithm for supervised neural networks. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
In this work, we propose LoGID (Local and Global Incremental Learning for Dynamic Selection), a framework whose main goal is to adapt hidden Markov model-based pattern recognition systems during both the generalization and learning phases. Given a baseline system composed of a pool of base classifiers, adaptation during generalization is performed by dynamically selecting the members of the pool that best recognize each test sample, using the proposed K-nearest output profiles algorithm. Adaptation during learning consists of gradually updating the knowledge embedded in the base classifiers as previously unobserved data are processed, and employs two types of incremental learning: local and global. Local incremental learning updates the pool of base classifiers by adding new members, which are created with the Learn++ algorithm. Global incremental learning, in contrast, updates the set of output profiles used during generalization. The proposed framework has been evaluated on a diversified set of databases, and the results indicate that LoGID is promising: on most databases, its recognition rates are higher than those of state-of-the-art approaches such as batch learning. Furthermore, a simulated incremental-learning setting demonstrates that LoGID can effectively improve the performance of systems created with small training sets as more data are observed over time.
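The generalization-phase mechanism described above can be illustrated with a minimal sketch of dynamic selection driven by nearest output profiles. This is not the authors' implementation: the Euclidean distance over profiles, the selection rule (keep the base classifiers that correctly label all k neighbours), the fallback to the full pool, and all function names are illustrative assumptions.

```python
import numpy as np

def output_profile(classifiers, x):
    """Concatenate each base classifier's score vector for sample x
    into a single output profile."""
    return np.concatenate([clf(x) for clf in classifiers])

def knop_select(classifiers, x, val_profiles, val_X, val_y, k=3):
    """K-nearest-output-profiles-style dynamic selection (sketch):
    find the k validation samples whose output profiles lie closest
    to that of x, keep the base classifiers that correctly label all
    of those neighbours, and majority-vote their predictions on x."""
    profile = output_profile(classifiers, x)
    dists = np.linalg.norm(val_profiles - profile, axis=1)
    neighbours = np.argsort(dists)[:k]
    selected = [
        clf for clf in classifiers
        if all(np.argmax(clf(val_X[i])) == val_y[i] for i in neighbours)
    ]
    if not selected:          # no classifier qualifies: fall back to the pool
        selected = classifiers
    votes = [np.argmax(clf(x)) for clf in selected]
    return max(set(votes), key=votes.count)
```

In this sketch, global incremental learning would amount to appending the output profiles of newly observed labelled samples to `val_profiles`, while local incremental learning would grow the `classifiers` pool, e.g. with Learn++-generated members.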