Self-organizing maps (SOMs) have become popular for tasks in data visualization, pattern classification, and natural language processing, and can be regarded as one of the major concepts in artificial neural networks today. Their general idea is to approximate a high-dimensional, previously unknown input distribution by a lower-dimensional neural network structure, with the goal of modeling the topology of the input space as closely as possible. Classical SOMs read the input values one by one in random but sequential order and thus adjust the network structure over space: the network is built up while larger and larger parts of the input are read. In contrast to this approach, we present a SOM that processes the whole input in parallel and organizes itself over time. The main reason for parallel input processing is that knowledge can be used to recognize parts of patterns in the input space that have already been learned. In this way, networks can be developed that do not reorganize their structure from scratch every time a new set of input vectors is presented, but instead adjust their internal architecture in accordance with previous mappings. One basic application could be modeling the whole-part relationship through layered architectures.
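For contrast with the parallel approach proposed here, the classical sequential SOM training the abstract refers to can be sketched as follows. This is a minimal illustrative implementation using NumPy; the grid size, decay schedules, and Gaussian neighbourhood are conventional choices, not parameters taken from the paper.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Classical sequential SOM: inputs are presented one by one in random
    order, and the best-matching unit (BMU) plus its grid neighbours are
    pulled toward each input."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))  # random initial codebook
    # grid coordinates of every unit, used by the neighbourhood function
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"),
        axis=-1,
    )
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):          # random but sequential order
            t = step / n_steps
            lr = lr0 * (1.0 - t)                 # decaying learning rate
            sigma = sigma0 * (1.0 - t) + 1e-3    # shrinking neighbourhood
            # locate the best-matching unit in weight space
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood on the grid, centred at the BMU
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)    # pull units toward the input
            step += 1
    return weights

# toy input distribution: two well-separated clusters in 2-D
rng = np.random.default_rng(1)
data = np.vstack([
    rng.normal(0.0, 0.05, (50, 2)),
    rng.normal(1.0, 0.05, (50, 2)),
])
w = train_som(data)
```

The key property visible in the sketch is exactly what the abstract contrasts against: the codebook is adjusted incrementally as each input arrives, so the learned structure depends on the presentation order, whereas the proposed architecture processes the whole input set in parallel and lets the organization emerge over time.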