There are many cases in which a neural-network-based system must memorize new patterns incrementally. However, if the network learns the new patterns by referring only to them, it is likely to forget previously memorized patterns, since the network's parameters usually correlate not only with the old memories but also with the new patterns. One certain way to avoid this loss of memories is to relearn all memorized patterns together with the new ones; this, however, requires a large amount of computation. To solve this problem, we propose incremental learning methods with retrieval of interfered patterns (ILRI). In these methods, the system employs a modified version of a resource allocating network (RAN), which is one variation of a generalized radial basis function (GRBF) network. In ILRI, the RAN learns new patterns while relearning a small number of retrieved past patterns that are interfered with by the incremental learning. We construct ILRI in two steps. In the first step, we construct a system that searches for the interfered patterns among past input patterns stored in a database. In the second step, we improve the first system so that it no longer needs the database; in this case, the system regenerates the input patterns approximately, in a random manner. The simulation results show that these two systems have almost the same ability, and that their generalization ability is higher than that of similar systems based on neural networks and k-nearest neighbors.
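The rehearsal idea behind ILRI can be sketched in code. The following is a minimal, illustrative RAN-like RBF learner with a database of past patterns (the first-step system described above): when a new pattern is learned, the stored patterns whose outputs changed most (i.e., the most interfered ones) are retrieved and relearned. All class names, thresholds, and update rules here are simplifying assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of ILRI-style rehearsal on a RAN-like RBF network.
# Hyperparameters and update rules are assumptions, not the paper's exact method.
import math

class RANSketch:
    def __init__(self, width=1.0, err_thresh=0.3, dist_thresh=0.5, lr=0.05):
        self.centers = []          # Gaussian unit centers
        self.weights = []          # output weights, one per unit
        self.width = width         # shared Gaussian width
        self.err_thresh = err_thresh    # RAN novelty criterion on error
        self.dist_thresh = dist_thresh  # RAN novelty criterion on distance
        self.lr = lr               # learning rate for weight updates
        self.database = []         # stored past (input, target) patterns

    def _phi(self, c, x):
        # Gaussian basis function centered at c, evaluated at x
        return math.exp(-sum((a - b) ** 2 for a, b in zip(c, x)) / self.width ** 2)

    def predict(self, x):
        return sum(w * self._phi(c, x) for c, w in zip(self.centers, self.weights))

    def _nearest_dist(self, x):
        return min((math.dist(c, x) for c in self.centers), default=float("inf"))

    def learn_one(self, x, y):
        # RAN rule: allocate a new unit if the pattern is novel enough,
        # otherwise adjust the existing output weights by a gradient step.
        err = y - self.predict(x)
        if abs(err) > self.err_thresh and self._nearest_dist(x) > self.dist_thresh:
            self.centers.append(list(x))
            self.weights.append(err)
        else:
            for i, c in enumerate(self.centers):
                self.weights[i] += self.lr * err * self._phi(c, x)

    def learn_incremental(self, x, y, n_retrieve=3):
        # Record outputs for stored patterns, learn the new pattern, then
        # relearn the few past patterns whose outputs changed most (ILRI).
        before = [self.predict(px) for px, _ in self.database]
        self.learn_one(x, y)
        changes = [abs(self.predict(px) - b)
                   for (px, _), b in zip(self.database, before)]
        order = sorted(range(len(self.database)), key=lambda i: -changes[i])
        for i in order[:n_retrieve]:
            self.learn_one(*self.database[i])
        self.database.append((list(x), y))

# Tiny usage demo: two well-separated 1-D patterns
net = RANSketch()
net.learn_incremental((0.0,), 1.0)
net.learn_incremental((2.0,), -1.0)
```

The second-step system of the paper would replace the explicit `database` with approximately regenerated random input patterns; the rehearsal loop itself stays the same.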