Estimating the Intrinsic Dimension of Data with a Fractal-Based Method. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Topology Representing Networks for Intrinsic Dimensionality Estimation. ICANN '97: Proceedings of the 7th International Conference on Artificial Neural Networks.
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation.
Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment. SIAM Journal on Scientific Computing.
Selection of the Optimal Parameter Value for the Isomap Algorithm. Pattern Recognition Letters.
Parameterless Isomap with Adaptive Neighborhood Selection. DAGM'06: Proceedings of the 28th Conference on Pattern Recognition.
Curvilinear Component Analysis: A Self-Organizing Neural Network for Nonlinear Mapping of Data Sets. IEEE Transactions on Neural Networks.
ISOMAP, LLE, Laplacian Eigenmaps, and LTSA are representative manifold learning algorithms. Most manifold learning methods share two free parameters: the neighborhood size and the intrinsic dimension of the high-dimensional data set. In this paper, we analyze and compare the stress function, the residual variance, and the dy-dx representation. Building on the dy-dx representation, a quantitative measure based on the variance of distance ratios is used to determine these two parameters, overcoming the shortcomings of the stress function and the residual variance. Experiments show that the measure can be used not only to choose an appropriate neighborhood size but also to estimate the intrinsic dimension of complex high-dimensional data across different manifold learning techniques.
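The abstract does not give the exact form of the variance-of-distance-ratios measure, but the idea can be sketched as follows: a faithful embedding keeps the ratio dy/dx between low-dimensional and high-dimensional pairwise distances nearly constant, so the variance of the normalized ratios can score candidate (neighborhood size, dimension) pairs. The sketch below is an illustrative variant using plain pairwise distances with scikit-learn's Isomap, not the paper's exact formulation; the parameter grids and the swiss-roll test data are assumptions.

```python
# Hedged sketch of a variance-of-distance-ratios criterion for selecting
# the two free parameters of a manifold learning method (neighborhood
# size k and target dimension d). Illustrative only, not the paper's
# exact measure.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

def distance_ratio_variance(X, Y):
    """Variance of normalized pairwise-distance ratios dy/dx.

    X: high-dimensional data, shape (n_samples, D)
    Y: low-dimensional embedding, shape (n_samples, d)
    A smaller variance means the embedding distorts pairwise
    distances more uniformly, i.e. a better parameter choice.
    """
    dx = pdist(X)                 # distances in the input space
    dy = pdist(Y)                 # distances in the embedding
    ratios = dy / dx
    ratios /= ratios.mean()       # normalize so the score is scale-invariant
    return ratios.var()

# Toy data: a swiss roll, whose intrinsic dimension is 2.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Grid-search the two free parameters and keep the score of each pair.
scores = {}
for k in (8, 10, 12):
    for d in (1, 2, 3):
        Y = Isomap(n_neighbors=k, n_components=d).fit_transform(X)
        scores[(k, d)] = distance_ratio_variance(X, Y)

# The pair with the smallest ratio variance is the selected setting.
best_k, best_d = min(scores, key=scores.get)
```

The same scoring loop applies unchanged to LLE, Laplacian Eigenmaps, or LTSA by swapping the estimator, which is what makes such a measure usable across different manifold learning techniques.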