The adaptive-subspace self-organizing map (ASSOM) proposed by Kohonen is a recent development in self-organizing map (SOM) computation. In this paper, we propose a method to realize the ASSOM using a neural learning algorithm in nonlinear autoencoder networks; this method has the advantage of numerical stability. We have applied our ASSOM model to build a modular classification system for handwritten digit recognition. Ten ASSOM modules are used to capture the distinct features of the ten digit classes. When a test digit is presented to all the modules, each module produces a reconstructed pattern, and the system outputs the class label of the module with the smallest reconstruction error. Our experiments show promising results: even with relatively small modules, the classification accuracy reaches 99.3% on the training set and over 97% on the test set.
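The modular decision rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's autoencoder-based ASSOM learning algorithm: each class module is stood in for by a simple PCA-style linear subspace fitted with SVD, and classification picks the module with the smallest reconstruction error. All function names (`fit_subspace`, `reconstruction_error`, `classify`) and the toy data are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_subspace(X, dim):
    # Stand-in for one trained class module: learn an orthonormal basis
    # for a low-dimensional subspace of the class data via SVD.
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:dim]                  # (mean, basis of shape (dim, n_features))

def reconstruction_error(x, module):
    # Project x onto the module's subspace and measure the residual norm.
    mean, basis = module
    centered = x - mean
    recon = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - recon)

def classify(x, modules):
    # Present the pattern to every module; output the label of the
    # module with the smallest reconstruction error.
    errors = [reconstruction_error(x, m) for m in modules]
    return int(np.argmin(errors))

# Toy data: two "classes" living near different 2-D subspaces of R^8.
def make_class(basis, n):
    return rng.normal(size=(n, basis.shape[0])) @ basis \
        + 0.01 * rng.normal(size=(n, basis.shape[1]))

b0 = rng.normal(size=(2, 8))
b1 = rng.normal(size=(2, 8))
X0, X1 = make_class(b0, 200), make_class(b1, 200)
modules = [fit_subspace(X0, 2), fit_subspace(X1, 2)]
print(classify(X0[0], modules), classify(X1[0], modules))
```

In the digit-recognition system, ten such modules (one per digit class) would replace the two toy modules here, with the same smallest-error decision rule.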