Automatic pattern classifiers that allow for on-line incremental learning can efficiently adapt their internal class models in response to new information, without retraining from the start on all accumulated training data and without suffering catastrophic forgetting. In this paper, the performance of the fuzzy ARTMAP neural network under supervised incremental learning is compared to that under supervised batch learning. An experimental protocol is presented to assess the network's potential for incremental learning of new blocks of training data, in terms of generalization error and resource requirements, on several synthetic pattern recognition problems. The advantages and drawbacks of training fuzzy ARTMAP incrementally are assessed for different data block sizes and data set structures. Overall, the results indicate that the error rate of fuzzy ARTMAP is significantly higher when it is trained through incremental learning than through batch learning. As the size of the training blocks decreases, the error rate achieved through incremental learning grows, but the resulting network is more compact and requires fewer training epochs. In cases where the class distributions overlap, incremental learning shows signs of over-training. As the number of training patterns grows, the error rate increases while the compression reaches a plateau.
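The block-wise protocol described above can be sketched as follows. This is a minimal illustration only: the `NearestCentroid` learner is a hypothetical stand-in for fuzzy ARTMAP (which is order-sensitive and structurally more complex), and the data, block size, and class means are invented for the example. The point is the contrast between batch learning (one model sees all data at once) and incremental learning (the same model is updated with successive blocks, never retrained from scratch).

```python
import random

class NearestCentroid:
    """Toy incremental learner: per-class running means.

    A stand-in for fuzzy ARTMAP. Note that, unlike fuzzy ARTMAP, this
    learner is insensitive to presentation order, so batch and
    incremental training converge to the same model here.
    """
    def __init__(self):
        self.sums = {}    # class label -> coordinate-wise sums
        self.counts = {}  # class label -> number of patterns seen

    def partial_fit(self, X, y):
        # Update class statistics with a new block of training data;
        # no retraining from the start on previously seen data.
        for x, label in zip(X, y):
            if label not in self.sums:
                self.sums[label] = [0.0] * len(x)
                self.counts[label] = 0
            self.sums[label] = [s + v for s, v in zip(self.sums[label], x)]
            self.counts[label] += 1

    def predict(self, x):
        # Assign the class whose centroid is nearest (squared Euclidean).
        def sq_dist(label):
            centroid = [s / self.counts[label] for s in self.sums[label]]
            return sum((c - v) ** 2 for c, v in zip(centroid, x))
        return min(self.sums, key=sq_dist)

# Synthetic two-class data (invented for illustration).
random.seed(0)
data = [([random.gauss(m, 0.5), random.gauss(m, 0.5)], cls)
        for cls, m in [(0, 0.0), (1, 3.0)] for _ in range(60)]
random.shuffle(data)

# Batch learning: a single pass over the full training set at once.
batch = NearestCentroid()
batch.partial_fit([x for x, _ in data], [y for _, y in data])

# Incremental learning: the same data arrives in successive blocks.
incremental = NearestCentroid()
block_size = 20
for i in range(0, len(data), block_size):
    block = data[i:i + block_size]
    incremental.partial_fit([x for x, _ in block], [y for _, y in block])

# Both models classify well-separated test points identically here.
for x, y in [([0.1, -0.2], 0), ([2.9, 3.1], 1)]:
    assert batch.predict(x) == y
    assert incremental.predict(x) == y
```

In the paper's setting the two regimes do not coincide: fuzzy ARTMAP commits internal category nodes as each block arrives, so the final network depends on block size and presentation order, which is precisely what the reported error-rate and compression differences measure.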