We propose a fast winner-take-all (WTA) neural network that dynamically accelerates the mutual inhibition among competing neurons. The highest-threshold neural network (HITNET), equipped with an acceleration factor, evolves from the general mean-based neural network, which uses the mean of the active neurons as the threshold for mutual inhibition. When the acceleration factor is optimally designed, the ideal HITNET statistically achieves the highest threshold for mutual inhibition. Both theoretical analyses and simulation results demonstrate that the practical HITNET converges faster than existing WTA networks when the number of competitors is large.
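To make the mean-based scheme concrete, the following is a minimal sketch of an iterative WTA elimination loop: at each step the threshold is the mean of the still-active neurons, and neurons at or below the threshold are deactivated. The `alpha` parameter is a simplified stand-in for the acceleration factor, pushing the threshold from the mean toward the current maximum of the active set; the exact HITNET threshold design from the paper is not reproduced here.

```python
def mean_based_wta(values, alpha=0.0, max_iters=1000):
    """Iteratively eliminate neurons below a mean-based threshold.

    alpha = 0 gives the plain mean-based scheme; 0 < alpha < 1 is a
    simplified proxy for an acceleration factor that raises the
    threshold toward the current maximum (hypothetical design choice,
    not the paper's exact formula).

    Returns (list of surviving indices, number of iterations used).
    """
    active = list(range(len(values)))
    iters = 0
    while len(active) > 1 and iters < max_iters:
        mean = sum(values[i] for i in active) / len(active)
        hi = max(values[i] for i in active)
        # Threshold between the mean (alpha=0) and the maximum (alpha->1).
        thr = mean + alpha * (hi - mean)
        survivors = [i for i in active if values[i] > thr]
        if not survivors:
            # All active values coincide with the threshold: keep the maxima.
            survivors = [i for i in active if values[i] == hi]
        if survivors == active:
            break  # remaining neurons are tied; cannot separate further
        active = survivors
        iters += 1
    return active, iters
```

For example, `mean_based_wta([0.1, 0.9, 0.5, 0.3], alpha=0.0)` needs two elimination rounds to isolate index 1, while `alpha=0.5` raises the threshold enough to finish in one round, illustrating how a higher inhibition threshold reduces the iteration count.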