We examine the extent to which modified Kohonen self-organizing maps (SOMs) can learn unique representations of temporal sequences while still supporting map formation. Two biologically inspired extensions are made to traditional SOMs: selection of multiple simultaneous winners rather than a single "winner," and the use of local intramap connections trained according to a temporally asymmetric Hebbian learning rule. The extended SOM is then trained with variable-length temporal sequences composed of phoneme feature vectors, each sequence corresponding to the phonetic transcription of a noun. The model transforms each input sequence into a spatial representation (the final activation pattern on the map). Training improves this transformation by, for example, increasing the uniqueness of the spatial representations of distinct sequences, while still retaining map formation based on input patterns. The closeness of the spatial representations of two sequences is found to correlate significantly with the sequences' similarity. The extended model presented here raises the possibility that SOMs may ultimately prove useful as visualization tools for temporal sequences and as preprocessors for sequence pattern recognition systems.
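The two extensions described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: the grid size, number of winners, learning rates, and update rules (`TemporalSOM`, `step`, `represent`) are all assumptions chosen for clarity. It shows the core ideas: k simultaneous winners per time step, a SOM-style update of the winners' feedforward weights, a temporally asymmetric Hebbian update of lateral intramap connections (from previously active units to current winners), and the final activation pattern serving as the spatial representation of a sequence.

```python
import numpy as np

class TemporalSOM:
    """Illustrative sketch of a SOM extended with multiple simultaneous
    winners and temporally asymmetric Hebbian lateral connections.
    All parameter names and update rules are assumptions, not taken
    from the paper."""

    def __init__(self, grid=8, dim=5, k_winners=3, seed=0):
        rng = np.random.default_rng(seed)
        self.n = grid * grid
        self.W = rng.random((self.n, dim))   # feedforward weights
        self.L = np.zeros((self.n, self.n))  # lateral (intramap) weights
        self.k = k_winners

    def step(self, x, prev_act, lr=0.1):
        # Activation: similarity to the input plus lateral input from
        # the previous time step's activity.
        ff = -np.linalg.norm(self.W - x, axis=1)
        act = ff + self.L @ prev_act
        winners = np.argsort(act)[-self.k:]  # k most active units

        # SOM-style update of the winners' feedforward weights.
        self.W[winners] += lr * (x - self.W[winners])

        # Temporally asymmetric Hebbian rule: strengthen connections
        # from previously active units to currently winning units.
        out = np.zeros(self.n)
        out[winners] = 1.0
        self.L += lr * np.outer(out, prev_act)
        return out

    def represent(self, seq):
        """Map a sequence of feature vectors to a spatial representation:
        the activation pattern after the final element (weights are also
        updated along the way, i.e. this sketch trains as it represents)."""
        act = np.zeros(self.n)
        for x in seq:
            act = self.step(x, act)
        return act
```

Comparing the `represent` outputs of two sequences (e.g. by Euclidean distance) then gives the closeness measure that, in the paper's experiments, correlates with sequence similarity.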