The ISOMAP algorithm has recently emerged as a promising dimensionality reduction technique for reconstructing nonlinear low-dimensional manifolds from data embedded in high-dimensional spaces, allowing such data to be visualized effectively. One of its advantages is that only a single parameter is required: the neighborhood size, i.e. K in the K-nearest-neighbors method, on which the success of the ISOMAP algorithm depends. However, how to select a suitable neighborhood size remains an open problem. In this paper, we present an effective method for selecting a suitable neighborhood size that is much less time-consuming than the straightforward method based on residual variance, while yielding the same results. In addition, exploiting the characteristics of the Euclidean distance metric, a faster Dijkstra-like shortest-path algorithm is used in our method. Finally, experimental results verify the effectiveness of our method.
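The straightforward baseline mentioned above scores each candidate K by the residual variance, 1 - R², between the geodesic distances on the neighborhood graph and the pairwise distances in the embedding, and keeps the K that minimizes it. A minimal sketch of that baseline, assuming scikit-learn's `Isomap` (the helper `residual_variance` and the candidate range are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

def residual_variance(X, k, n_components=2):
    """1 - R^2 between geodesic and embedded pairwise distances."""
    iso = Isomap(n_neighbors=k, n_components=n_components).fit(X)
    # Condensed upper-triangular vectors of both distance matrices.
    geodesic = iso.dist_matrix_[np.triu_indices_from(iso.dist_matrix_, k=1)]
    embedded = pdist(iso.embedding_)
    r = np.corrcoef(geodesic, embedded)[0, 1]
    return 1.0 - r ** 2

# Toy manifold data; each candidate K requires a full Isomap run,
# which is exactly the cost the paper's selection method avoids.
X, _ = make_swiss_roll(n_samples=500, random_state=0)
candidates = list(range(8, 15))
variances = {k: residual_variance(X, k) for k in candidates}
best_k = min(variances, key=variances.get)
```

Because every candidate K triggers a complete embedding (neighborhood graph, all-pairs shortest paths, eigendecomposition), this brute-force search is expensive, which motivates the cheaper selection procedure the abstract describes.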
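The shortest-path stage that the faster Dijkstra-like algorithm targets is the standard geodesic-distance computation of ISOMAP: Dijkstra's algorithm run from every source over the K-nearest-neighbor graph. The sketch below shows that standard stage with SciPy (it does not reproduce the paper's Euclidean-bound optimization); the metric property such pruning relies on is that the straight-line Euclidean distance never exceeds the graph geodesic distance:

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))      # toy high-dimensional data

# K-nearest-neighbor graph with Euclidean edge weights.
knn = kneighbors_graph(X, n_neighbors=8, mode="distance")

# All-pairs shortest paths on the graph approximate geodesic
# distances along the manifold; directed=False symmetrizes the
# asymmetric kNN relation.
geodesic = dijkstra(knn, directed=False)
```

By the triangle inequality, any path through the graph is at least as long as the straight-line distance between its endpoints, so Euclidean distances give valid lower bounds that a Dijkstra variant can use to prune or terminate searches early.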