Fundamentals of Matrix Computations.
Topology representing networks. Neural Networks.
An Introduction to Kolmogorov Complexity and Its Applications (2nd ed.).
Approximating the smallest grammar: Kolmogorov complexity in natural models. STOC '02: Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing.
Machine Learning.
Self-Organizing Maps.
Computers and Intractability: A Guide to the Theory of NP-Completeness.
Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment. SIAM Journal on Scientific Computing.
Robust locally linear embedding. Pattern Recognition.
Robust non-linear dimensionality reduction using successive 1-dimensional Laplacian Eigenmaps. Proceedings of the 24th International Conference on Machine Learning.
The Lempel-Ziv Complexity of Fixed Points of Morphisms. SIAM Journal on Discrete Mathematics.
High-dimensional entropy estimation for finite accuracy data: R-NN entropy estimator. IPMI '07: Proceedings of the 20th International Conference on Information Processing in Medical Imaging.
Local smoothing for manifold learning. CVPR '04: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Riemannian manifold learning for nonlinear dimensionality reduction. ECCV '06: Proceedings of the 9th European Conference on Computer Vision, Part I.
Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets. IEEE Transactions on Neural Networks.
RBF principal manifolds for process monitoring. IEEE Transactions on Neural Networks.
A Quick Assessment of Topology Preservation for SOM Structures. IEEE Transactions on Neural Networks.
Dynamics of Generalized PCA and MCA Learning Algorithms. IEEE Transactions on Neural Networks.
Locality-Preserved Maximum Information Projection. IEEE Transactions on Neural Networks.
Engineering Applications of Artificial Intelligence.
High-dimensional data arises in many fields of information processing. Often, however, the intrinsic structure of such data can be described by a few degrees of freedom. Many manifold learning algorithms have been proposed to discover these degrees of freedom, i.e., the low-dimensional nonlinear manifold underlying a high-dimensional space. Here we describe a novel algorithm, locally linear inlaying (LLI), which combines simple geometric intuitions with rigorously established optimality to compute the global embedding of a nonlinear manifold. Its divide-and-conquer strategy gives LLI several advantages. First, its time complexity is linear in the number of data points, so LLI can be implemented efficiently. Second, LLI overcomes problems caused by nonuniform sample distributions. Third, unlike existing algorithms such as isometric feature mapping (Isomap), local tangent space alignment (LTSA), and locally linear coordination (LLC), LLI is robust to noise. In addition, we propose two criteria for evaluating embedding results quantitatively, based on information theory and Kolmogorov complexity theory, respectively. Finally, we demonstrate the efficiency and effectiveness of LLI on synthetic and real-world data sets.
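To make the divide-and-conquer idea concrete, the sketch below shows the "divide" step that this family of algorithms shares: the data set is split into local patches, and each patch is embedded independently with local PCA (a tangent-space approximation). This is a conceptual illustration only, not the authors' LLI implementation; the function name, the naive contiguous partitioning, and all parameters are illustrative assumptions. Because each patch is processed in constant-bounded work and the number of patches grows linearly with the number of points, the overall cost of this step is linear in the data size, consistent with the complexity claim above. A full method would additionally align the per-patch coordinates into one global embedding.

```python
import numpy as np

def local_patch_coordinates(X, n_patches, d):
    """Divide step of a divide-and-conquer manifold learner (sketch).

    Splits the rows of X into patches and embeds each patch into d
    dimensions with local PCA. Not the authors' LLI algorithm; the
    global alignment step is omitted.
    """
    n = X.shape[0]
    # Naive partition for illustration: contiguous chunks of roughly
    # equal size (a real method would partition by neighborhoods).
    idx = np.array_split(np.arange(n), n_patches)
    embeddings = []
    for patch in idx:
        P = X[patch]
        P_centered = P - P.mean(axis=0)
        # Local PCA via SVD: the top-d right singular vectors span an
        # approximate tangent space of the manifold at this patch.
        _, _, Vt = np.linalg.svd(P_centered, full_matrices=False)
        embeddings.append(P_centered @ Vt[:d].T)
    return idx, embeddings

# Usage: noisy samples near a 1-D curve embedded in 3-D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 200))
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.standard_normal((200, 3))
idx, emb = local_patch_coordinates(X, n_patches=10, d=1)
```

Each element of `emb` holds the 1-D local coordinates of one patch; stitching these local charts into a single consistent embedding is the "conquer" step that distinguishes methods such as LLI, LTSA, and LLC.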