The self-organizing map (SOM) is a classical neural-network method for dimensionality reduction and data visualization. The visualization-induced SOM (ViSOM) and the growing ViSOM (gViSOM) are two recently proposed variants that provide a more faithful, metric-based, and direct representation of the data. They learn local quantitative distances in the data by regularizing the inter-neuron contraction force while capturing the topology and minimizing the quantization error. In this paper we first review related dimensionality-reduction methods and then examine their capabilities for face recognition. Experiments conducted on the ORL face database show that both ViSOM and gViSOM significantly outperform SOM, PCA, and related methods in terms of recognition error rate. When training with five faces, gViSOM dimension reduction followed by a soft k-NN classifier achieves an error rate as low as 2.1%, making ViSOM an efficient approach for data representation and dimensionality reduction.
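To make the mechanism concrete, the following is a minimal, hypothetical sketch of a ViSOM-style training loop. It is not the authors' implementation; the function name, grid layout, and parameters (`lam` for the map resolution, `beta` for the strength of the lateral regularizer) are illustrative assumptions. A standard SOM moves the winner's neighborhood toward the input; the ViSOM idea described in the abstract adds a lateral term that contracts or expands inter-neuron distances so that spacing on the map reflects metric distances in the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_visom(X, grid=(8, 8), epochs=20, alpha=0.5, sigma=2.0,
                lam=1.0, beta=1.0):
    """Hypothetical ViSOM-style sketch: SOM update plus a lateral
    regularizer on inter-neuron distances (illustrative only)."""
    rows, cols = grid
    # Neuron positions on the map grid and randomly initialized weights.
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)],
                      dtype=float)
    W = rng.normal(size=(rows * cols, X.shape[1]))
    for t in range(epochs):
        a = alpha * (1.0 - t / epochs)  # decaying learning rate
        for x in X:
            # Winner: neuron whose weight is closest to the input.
            v = np.argmin(np.linalg.norm(W - x, axis=1))
            grid_d = np.linalg.norm(coords - coords[v], axis=1)
            h = np.exp(-grid_d**2 / (2.0 * sigma**2))  # neighborhood kernel
            data_d = np.linalg.norm(W - W[v], axis=1)
            # Lateral contraction/expansion term: pushes the data-space
            # distance between neuron k and the winner toward lam times
            # their grid distance (the "regularized contraction force").
            reg = beta * (W[v] - W) * \
                ((data_d / (grid_d * lam + 1e-8)) - 1.0)[:, None]
            W += a * h[:, None] * ((x - W[v]) + reg)
    return W, coords
```

After training, each face image would be projected to the grid coordinates of its winning neuron, and a classifier such as soft k-NN would operate on those low-dimensional coordinates.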