In this paper, we present a new recurrent bidirectional model that combines correlational, competitive, and topological model properties. The simultaneous use of several classes of network behavior allows for the unsupervised learning/categorization of perceptual patterns (through input compression) and the concurrent encoding of proximities in a multidimensional space. All of these operations are achieved within a common learning operation and with a single set of defining properties. We show that the model can learn categories by developing prototype representations strictly from exposure to specific exemplars. Moreover, because the model is recurrent, it can reconstruct perfect outputs from incomplete and noisy patterns. Empirical exploration of the model's properties and performance shows that its ability to cluster adequately stems from (1) properly distributing connection weights and (2) producing a weight space with a low dispersion level (i.e., a higher density). In addition, since the model uses a sparse representation (k winners), the size of the topological neighborhood can be fixed and no longer needs to decrease over time, as it does in classic self-organizing feature maps. Since the model's learning and transmission parameters are independent of the learning trials, the model can develop stable fixed points within a constrained topological architecture while remaining flexible enough to learn novel patterns.
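The sparse representation mentioned above relies on a k-winners-take-all step: only the k most active units keep their activation, and the rest are silenced. The paper does not specify an implementation, so the following is a minimal illustrative sketch (function name and tie-breaking by index order are our assumptions, not the authors' code):

```python
def k_winners_take_all(activations, k):
    """Keep the k largest activations; zero the rest (sparse k-winner code).

    Illustrative sketch only: ties beyond the k-th value are broken by
    index order, which may differ from the model in the paper.
    """
    # Rank unit indices by activation, largest first, and keep the top k
    winners = set(sorted(range(len(activations)),
                         key=lambda i: activations[i],
                         reverse=True)[:k])
    return [a if i in winners else 0.0 for i, a in enumerate(activations)]

# Example: a 6-unit layer with k = 2 active winners
print(k_winners_take_all([0.1, 0.9, 0.3, 0.7, 0.2, 0.05], 2))
# → [0.0, 0.9, 0.0, 0.7, 0.0, 0.0]
```

Because the number of winners k is fixed, the sparseness of the code (and hence the effective neighborhood size) stays constant across learning trials, which is the property that lets the model avoid the decaying neighborhood schedule of classic self-organizing maps.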