We introduce a supervised learning algorithm that gives neural network classifiers the ability to learn incrementally from new data without forgetting what was learned in earlier training sessions. Schapire's (1990) boosting algorithm, originally intended to improve the accuracy of weak learners, is modified for the incremental learning setting. The algorithm generates a number of hypotheses from different distributions of the training data and combines these hypotheses by weighted majority voting. This scheme allows a classifier previously trained on one database to learn from new data when the original data are no longer available, even when the new data introduce previously unseen classes. Initial results on incremental training of multilayer perceptron networks with synthetic and real-world data are presented in this paper.
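As a rough illustration of the scheme described above (a minimal sketch, not the authors' exact procedure), the code below trains one new multilayer perceptron per training session and combines all hypotheses accumulated so far by weighted majority voting, so earlier sessions remain represented even after their data are discarded. The ensemble class, its method names, and the boosting-style weighting of each hypothesis by its error on its own session's data are illustrative assumptions; the paper's actual method also generates multiple hypotheses from resampled distributions of each session's data, which is omitted here for brevity.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier


class IncrementalVotingEnsemble:
    """Keeps one hypothesis per training session and votes across all of them."""

    def __init__(self):
        self.hypotheses = []                     # one MLP per training session
        self.weights = []                        # voting weight per hypothesis
        self.classes_ = np.array([], dtype=int)  # union of all classes seen so far

    def learn_session(self, X, y):
        """Train a new hypothesis on the data available in this session only."""
        self.classes_ = np.union1d(self.classes_, y)      # new classes may appear here
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        err = 1.0 - clf.score(X, y)                        # error on this session's data
        err = min(max(err, 1e-6), 0.5 - 1e-6)              # keep the log-odds weight positive
        self.hypotheses.append(clf)
        self.weights.append(np.log((1.0 - err) / err))     # boosting-style voting weight
        return self

    def predict(self, X):
        """Weighted majority vote over every hypothesis trained so far."""
        votes = np.zeros((X.shape[0], self.classes_.size))
        for clf, w in zip(self.hypotheses, self.weights):
            pred = clf.predict(X)
            for j, c in enumerate(self.classes_):
                votes[pred == c, j] += w                   # each hypothesis casts a weighted vote
        return self.classes_[votes.argmax(axis=1)]


# Usage outline (variable names are hypothetical placeholders):
# ens = IncrementalVotingEnsemble()
# ens.learn_session(X_session1, y_session1)
# ens.learn_session(X_session2, y_session2)   # may contain classes unseen in session 1
# y_hat = ens.predict(X_test)
```

Because older hypotheses are never retrained, nothing learned in earlier sessions is overwritten; accommodating a newly introduced class only requires that at least one later hypothesis has been trained on examples of it.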