Nonlinear component analysis as a kernel eigenvalue problem
Neural Computation
An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods
Document clustering based on non-negative matrix factorization
Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval
Kernel Methods for Pattern Analysis
Linear manifold clustering in high dimensional spaces by stochastic search
Pattern Recognition
Geometric Mean for Subspace Selection
IEEE Transactions on Pattern Analysis and Machine Intelligence
Patch Alignment for Dimensionality Reduction
IEEE Transactions on Knowledge and Data Engineering
Approximately harmonic projection: Theoretical analysis and an algorithm
Pattern Recognition
Max-Min Distance Analysis by Using Sequential SDP Relaxation for Dimension Reduction
IEEE Transactions on Pattern Analysis and Machine Intelligence
An introduction to kernel-based learning algorithms
IEEE Transactions on Neural Networks
Dimensionality reduction is an important preprocessing step in computer vision, pattern recognition, information retrieval, and data mining. In this paper we present a kernel extension of approximately harmonic projection (AHP), a recently proposed linear manifold learning method with excellent clustering performance. The kernel matrix implicitly maps the data into a reproducing kernel Hilbert space (RKHS), making the structure of data that lies on a nonlinear manifold more distinct. The method retains and extends the advantages of its linear counterpart and remains sensitive to the connected components of the data, which makes it particularly suitable for unsupervised clustering. Moreover, by choosing different kernels it can accommodate various classes of nonlinearities. We evaluate the new method on several well-known data sets to demonstrate its effectiveness; the results show that it performs well and outperforms other classic algorithms on these data sets.
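The kernel AHP eigenproblem itself is defined in the paper; as a generic illustration of the kernel trick the abstract describes — a kernel matrix implicitly mapping data into an RKHS, whose leading eigenvectors yield a nonlinear embedding — the following sketch uses an RBF kernel and kernel-PCA-style centering (not the AHP objective; the `gamma` parameter and data are assumptions for the example):

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of X
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_spectral_embedding(X, n_components=2, gamma=1.0):
    K = rbf_kernel_matrix(X, gamma)
    n = K.shape[0]
    # Center the kernel matrix in feature space (kernel-PCA style)
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    # Leading eigenvectors of the centered kernel give the embedding
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Two well-separated Gaussian clusters as toy data
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((20, 3)),
               rng.standard_normal((20, 3)) + 5.0])
Y = kernel_spectral_embedding(X, n_components=2, gamma=0.5)
print(Y.shape)  # (40, 2)
```

Different kernels (polynomial, Laplacian, etc.) can be substituted for the RBF kernel to cover different classes of nonlinearity, as the abstract notes.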