This paper presents a new competitive learning algorithm called Dynamic Competitive Learning (DCL). DCL is a supervised learning method that dynamically generates output neurons and automatically initializes their weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether an output neuron should be created. If at least one of the LOG nearest output neurons has the same class as the current training pattern, DCL adjusts the weight vector of that neuron to learn the pattern. If the classes of all LOG nearest output neurons differ from the class of the training pattern, a new output neuron is created and its weight vector is initialized with the training pattern. The proposed method differs significantly from previous competitive learning algorithms in that the neuron selected for learning is not limited to the winner, and output neurons are generated dynamically during the learning process. In addition, the algorithm has a small number of parameters, which are easy to determine and apply to real-world problems. Experimental results demonstrate the superiority of DCL over conventional competitive learning methods.
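The create-or-update rule described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`train_dcl`, `classify`), the Euclidean distance metric, the learning rate `lr`, and the choice to update the nearest same-class neuron are all assumptions made here for concreteness; only the LOG-based decision between adjusting an existing neuron and creating a new one follows the description in the abstract.

```python
import numpy as np

def train_dcl(patterns, labels, log=3, lr=0.1, epochs=1):
    """Sketch of Dynamic Competitive Learning (DCL).

    Output neurons are kept as parallel lists of weight vectors and class
    labels, created on demand. `log` plays the role of the LOG (Limit of
    Grade) parameter; the update step (moving a neuron toward the pattern
    by `lr`) is a standard competitive-learning rule assumed here.
    """
    weights, classes = [], []
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            if not weights:
                # first pattern: create the first output neuron from it
                weights.append(np.array(x, dtype=float))
                classes.append(y)
                continue
            W = np.stack(weights)
            dists = np.linalg.norm(W - x, axis=1)
            nearest = np.argsort(dists)[:log]          # LOG nearest neurons
            same = [i for i in nearest if classes[i] == y]
            if same:
                # at least one of the LOG nearest neurons shares the class:
                # adjust the closest such neuron to learn the pattern
                i = same[0]
                weights[i] = weights[i] + lr * (np.asarray(x) - weights[i])
            else:
                # all LOG nearest neurons have a different class:
                # create a new neuron initialized with the training pattern
                weights.append(np.array(x, dtype=float))
                classes.append(y)
    return weights, classes

def classify(x, weights, classes):
    """Assign x the class of its nearest output neuron."""
    W = np.stack(weights)
    return classes[int(np.argmin(np.linalg.norm(W - x, axis=1)))]
```

Note that, unlike winner-take-all schemes, the neuron that learns here need not be the overall winner: it is the nearest neuron of the *correct* class among the LOG candidates, which is what lets DCL grow new neurons only when none of the nearby ones can represent the pattern's class.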