A Hamming Maxnet That Determines all the Maxima
SETN '08 Proceedings of the 5th Hellenic conference on Artificial Intelligence: Theories, Models and Applications
In this paper the classical Hamming network is generalized in several ways. First, a generalized model of the Hamming maxnet is proposed that subsumes most of the existing variants of the Hamming maxnet. Its dynamics are time-varying, the commonly used ramp function may be replaced by a much more general nonlinear function, and the weight parameters of the network are also time-varying. A detailed convergence analysis is provided: a bound on the number of iterations required for convergence is derived, and its distribution function is given for the cases where the initial values of the nodes of the Hamming maxnet stem from the uniform and the peak distributions. Stabilization mechanisms that prevent the node(s) with the maximum initial value from diverging to infinity or decaying to zero are described. Simulations demonstrate the advantages of the proposed extension, and a rough comparison of the proposed generalized scheme with the original Hamming maxnet and its variants is carried out in terms of the time required for convergence in hardware implementations. Finally, the other two parts of the Hamming network, namely the competitor-generating module and the decoding module, are briefly considered in the framework of applications such as classification/clustering, vector quantization, and function optimization.
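To make the object being generalized concrete, the classical maxnet that the paper extends can be sketched as follows: each node inhibits all the others with a fixed weight, and the ramp function f(u) = max(u, 0) clips negative activations, so all nodes except the one with the largest initial value decay to zero. This is only an illustrative sketch of the standard iteration, not the paper's time-varying scheme; the function name `maxnet` and the choice of inhibition weight `eps` are ours.

```python
import numpy as np

def maxnet(x0, eps=None, max_iter=1000):
    """Classical MAXNET winner-take-all iteration (illustrative sketch).

    x0 : nonnegative initial node values (the Hamming matching scores).
    eps: mutual-inhibition weight; convergence to a single winner
         requires eps < 1/(N-1). The paper's generalization makes
         this weight, and the ramp nonlinearity, time-varying.
    """
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    if eps is None:
        eps = 1.0 / n  # a common choice, strictly below 1/(N-1)
    for _ in range(max_iter):
        # each node subtracts eps times the sum of all other nodes,
        # then the ramp function clips the result at zero
        x = np.maximum(x - eps * (x.sum() - x), 0.0)
        if np.count_nonzero(x) <= 1:  # a single winner remains
            break
    return x
```

For example, `maxnet([0.3, 0.9, 0.5])` leaves only the second node nonzero. Note that the surviving value also shrinks at every step, which is exactly the decay-to-zero behavior the paper's stabilization mechanisms are designed to prevent.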