- Optimal algorithms for approximate clustering. STOC '88: Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing.
- Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation.
- Very fast EM-based mixture model clustering using multiresolution kd-trees. Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II.
- Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Fast computation of low rank matrix approximations. STOC '01: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing.
- The effect of the input density distribution on kernel-based classifiers. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
- Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research.
- Kernel independent component analysis. Journal of Machine Learning Research.
- Spectral grouping using the Nyström method. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Spectral segmentation with multiscale graph decomposition. CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Volume 2.
- On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research.
- Improved Nyström low-rank approximation and error analysis. Proceedings of the 25th International Conference on Machine Learning.
- Density-weighted Nyström method for computing large kernel eigensystems. Neural Computation.
- Block-quantized support vector ordinal regression. IEEE Transactions on Neural Networks.
- A speed-up algorithm for Poisson propagation. IJCNN '09: Proceedings of the 2009 International Joint Conference on Neural Networks.
- Clustered Nyström method for large scale manifold learning and dimension reduction. IEEE Transactions on Neural Networks.
- Multi-level low-rank approximation-based spectral clustering for image segmentation. Pattern Recognition Letters.
Eigendecomposition of the kernel matrix is an indispensable procedure in many learning and vision tasks. However, its cubic complexity O(N³) is impractical for large problems, where N is the data size. In this paper, we propose an efficient approach to the eigendecomposition of the kernel matrix W. The idea is to approximate W with a matrix W̃ that is composed of m² constant blocks. The eigenvectors of W̃, which can be computed in O(m³) time, are then used to recover the eigenvectors of the original kernel matrix. The complexity of our method is only O(mN + m³), which scales more favorably than state-of-the-art low-rank approximation and sampling-based approaches (O(m²N + m³)), and the approximation quality can be controlled conveniently. Our method demonstrates encouraging scaling behavior in experiments on image segmentation (by spectral clustering) and kernel principal component analysis.
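The block-constant scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the clustering pass, the RBF kernel, and the bandwidth are assumptions. The key fact it exploits is that a block-constant matrix factors as W̃ = S M Sᵀ (S the N×m cluster-membership indicator, M the m×m kernel among cluster representatives), so the nonzero eigenpairs of W̃ follow from the m×m symmetric matrix D^{1/2} M D^{1/2}, where D holds the cluster sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))   # N = 300 toy data points
m = 10                          # number of blocks per side

# Step 1: group points into m clusters. A single nearest-anchor
# assignment stands in for the paper's grouping scheme (an assumption).
anchors = X[rng.choice(len(X), m, replace=False)]
labels = np.argmin(((X[:, None] - anchors[None]) ** 2).sum(-1), axis=1)
centers = np.array([X[labels == k].mean(0) for k in range(m)])

# Step 2: m x m kernel among cluster representatives (RBF, unit bandwidth
# assumed); each entry is the constant value filling one block of W~.
sq = ((centers[:, None] - centers[None]) ** 2).sum(-1)
M = np.exp(-sq / 2.0)

# Step 3: eigendecompose the small symmetric matrix K = D^{1/2} M D^{1/2},
# where D = diag(cluster sizes). This is the O(m^3) step.
counts = np.bincount(labels, minlength=m)
d = np.sqrt(counts.astype(float))
K = (d[:, None] * M) * d[None, :]
lam, Y = np.linalg.eigh(K)

# Step 4: lift back to length-N eigenvectors of W~ = S M S^T via
# u = S D^{-1/2} y; this costs O(mN) and yields orthonormal columns.
S = np.zeros((len(X), m))
S[np.arange(len(X)), labels] = 1.0
U = S @ (Y / d[:, None])

# Sanity check: columns of U are eigenvectors of W~ with eigenvalues lam.
W_tilde = S @ M @ S.T
resid = np.linalg.norm(W_tilde @ U - U * lam)
```

Only the m×m eigenproblem and the O(mN) lift touch the data size, matching the O(mN + m³) total; the full N×N matrix W̃ is materialized here purely to verify the recovered eigenpairs.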