The kernel (or similarity) matrix plays a key role in many machine learning algorithms, including kernel methods, manifold learning, and dimension reduction. However, the cost of storing and manipulating the complete kernel matrix makes these algorithms infeasible for large problems. The Nyström method is a popular sampling-based low-rank approximation scheme for reducing the computational burden of handling large kernel matrices. In this paper, we analyze how the approximation quality of the Nyström method depends on the choice of landmark points, and in particular on how well the landmark points summarize the data. Our (non-probabilistic) error analysis justifies a "clustered Nyström method" that uses the k-means cluster centers as landmark points. The algorithm can be applied to scale up a wide variety of methods that depend on the eigenvalue decomposition of the kernel matrix (or a variant of it), such as kernel principal component analysis, Laplacian eigenmaps, and spectral clustering, as well as methods involving the kernel matrix inverse, such as the least-squares support vector machine and Gaussian process regression. Extensive experiments demonstrate the competitive performance of our algorithm in both accuracy and efficiency.
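As a concrete illustration of the approach, the following is a minimal Python sketch of the clustered Nyström approximation using NumPy and scikit-learn: the k-means cluster centers serve as landmark points, and the kernel matrix is approximated as K ≈ C W⁺ Cᵀ, where C is the cross-kernel between the data and the landmarks and W is the kernel among the landmarks. The RBF kernel, its bandwidth, the number of landmarks, and the synthetic data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_kernel(X, Y, gamma=0.1):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def clustered_nystrom(X, num_landmarks=50, gamma=0.1, seed=0):
    # K-means cluster centers serve as the landmark points.
    km = KMeans(n_clusters=num_landmarks, n_init=10, random_state=seed).fit(X)
    Z = km.cluster_centers_
    C = rbf_kernel(X, Z, gamma)   # n x k cross-kernel
    W = rbf_kernel(Z, Z, gamma)   # k x k landmark kernel
    # Low-rank Nystrom approximation: K ~= C @ pinv(W) @ C.T
    return C, np.linalg.pinv(W)

# Usage: compare the approximation against the exact kernel on a small set.
rng = np.random.RandomState(0)
X = rng.randn(500, 10)
C, W_pinv = clustered_nystrom(X, num_landmarks=50, gamma=0.1)
K_approx = C @ W_pinv @ C.T
K_exact = rbf_kernel(X, X, gamma=0.1)
err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print("relative Frobenius error:", err)
```

From these low-rank factors one can recover approximate eigenvectors of K (e.g., via an eigendecomposition of W) for use in kernel PCA or spectral clustering, or plug the factorization into matrix-inversion identities for LS-SVM and Gaussian process regression; those downstream steps are omitted here for brevity.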