A distance-preserving method is presented that maps high-dimensional data sequentially to a low-dimensional space. The method preserves the exact distance from each data point to its nearest neighbor and approximates its distances to several other near neighbors. The intrinsic dimensionality of the data is estimated by examining how well interpoint distances are preserved, and the method has no user-selectable parameters. It can successfully project data even when the data points are spread among multiple clusters. Experimental results demonstrate its usefulness for projecting high-dimensional data.
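The general idea of such a sequential distance-preserving embedding can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: points are embedded one at a time, each placed so that its distance to its nearest already-embedded neighbor is preserved exactly, while its distances to up to `k` other embedded near neighbors are matched approximately by a few correction steps. The function name `dp_embed` and the parameters `k` and `iters` are assumptions made for this sketch.

```python
import numpy as np

def dp_embed(X, d=2, k=5, iters=50, seed=0):
    """Sequentially embed the rows of X into d dimensions.

    Sketch of a sequential distance-preserving mapping (not the
    paper's exact method): each new point exactly preserves its
    distance to its nearest already-embedded neighbor and roughly
    preserves distances to up to k other embedded near neighbors.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Y = np.zeros((n, d))          # first point is placed at the origin
    for i in range(1, n):
        # original-space distances from point i to the embedded points
        dist = np.linalg.norm(X[:i] - X[i], axis=1)
        nn = int(np.argmin(dist))  # nearest embedded neighbor
        r = dist[nn]               # this distance will be kept exact
        others = np.argsort(dist)[1:k + 1]
        # start at a random point on the sphere of radius r around Y[nn]
        u = rng.standard_normal(d)
        y = Y[nn] + r * u / np.linalg.norm(u)
        for _ in range(iters):
            # nudge y toward matching the other neighbor distances
            g = np.zeros(d)
            for j in others:
                v = y - Y[j]
                nv = np.linalg.norm(v)
                if nv > 1e-12:
                    g += (nv - dist[j]) * v / nv
            if len(others):
                y -= 0.5 * g / len(others)
            # re-project onto the sphere so the nearest-neighbor
            # distance stays exactly preserved
            v = y - Y[nn]
            nv = np.linalg.norm(v)
            if nv < 1e-12:
                v = rng.standard_normal(d)
                nv = np.linalg.norm(v)
            y = Y[nn] + r * v / nv
        Y[i] = y
    return Y
```

Because the projection back onto the sphere is the last step of every iteration, the distance from each embedded point to its nearest previously embedded neighbor matches the original-space distance exactly; the remaining neighbor distances are only approximated, which is where a real method would also read off an intrinsic-dimensionality estimate from the residual distortion.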