Dimensionality reduction is a key issue in many scientific problems in which data are originally given as high-dimensional vectors that nevertheless lie on a lower-dimensional manifold. Such data can therefore be represented by a reduced number of values that parametrize their position on this nonlinear manifold. Dimensionality reduction is essential not only for representing and managing data, but also for understanding them at a high interpretation level, similar to the way this is performed by the mammalian cortex. This paper presents an algorithm that represents data lying on a nonlinear manifold by the reduced set of their coordinates along a grid, or map, of neurons extended over the manifold. This map is generated by a self-organizing learning process whose key feature is that the winning neuron is selected so as to preserve the distances between input data when they are represented by their coordinates in the output map. Unlike other methods, the proposed algorithm has three important features: the intrinsic dimensionality is obtained simultaneously during the learning process itself; it does not require a long coarse-positioning phase; and it seeks to preserve the data structure from the beginning, rather than leaving it as a property to be verified afterwards. The algorithm has proven to solve classical dimensionality reduction problems efficiently, and has also been shown to be useful for realistic problems such as face image classification and document indexing.
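The central idea — selecting the winning neuron so that map coordinates preserve input-space distances — can be illustrated with a minimal sketch. The formulation below is an assumption for illustration only, not the paper's exact update rule: the winner minimizes a blend of quantization error and the mismatch between each candidate unit's input-space distances and the corresponding map-coordinate distances, with `beta` a hypothetical weighting parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data on a 1-D nonlinear manifold embedded in 3-D (a noisy helix).
t = np.linspace(0, 4 * np.pi, 200)
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
X += 0.01 * rng.standard_normal(X.shape)

# A 1-D chain of neurons; `grid` holds the output-map coordinates.
n_units = 20
grid = np.arange(n_units, dtype=float)
W = X[rng.choice(len(X), n_units, replace=False)].copy()  # weight init

def train(X, W, grid, epochs=50, lr0=0.5, sigma0=4.0, beta=0.5):
    """Distance-preserving SOM sketch (assumed formulation): the winner
    minimizes quantization error plus a penalty on the mismatch between
    input-space distances to all units and map-coordinate distances."""
    d_map = np.abs(grid[:, None] - grid[None, :])   # map distances
    d_map_n = d_map / d_map.max()
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / (epochs + 1))
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
        for x in X[rng.permutation(len(X))]:
            d_in = np.linalg.norm(W - x, axis=1)    # input-space distances
            d_in_n = d_in / (d_in.max() + 1e-12)
            # per-candidate distance-preservation mismatch
            mismatch = np.mean((d_in_n[None, :] - d_map_n) ** 2, axis=1)
            winner = np.argmin(d_in_n + beta * mismatch)
            # Gaussian neighborhood update around the winner
            h = np.exp(-0.5 * ((grid - grid[winner]) / sigma) ** 2)
            W += lr * h[:, None] * (x - W)
    return W

W = train(X, W, grid)
```

With `beta = 0` this reduces to a standard SOM winner rule; increasing `beta` biases the winner selection toward units whose map coordinates reproduce the input-space distance pattern, which is the distance-preservation property the abstract emphasizes.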