Advances in kernel-based learning necessitate the study of solving large-scale, non-sparse, positive definite linear systems. To provide a deterministic approach, recent research has focused on designing fast matrix-vector multiplication techniques coupled with the conjugate gradient method. Instead of the conjugate gradient method, this paper proposes a domain decomposition approach for solving such linear systems. Its convergence properties and speed can be understood within von Neumann's alternating projection framework. We report significant and consistent improvements in convergence speed over the conjugate gradient method when the approach is applied to recent machine learning problems.
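To illustrate the idea (not the paper's actual algorithm), the following is a minimal sketch of a block Gauss-Seidel / multiplicative-Schwarz domain decomposition solver for a dense, positive definite kernel system K x = b. Each sweep solves the subsystem on one index block exactly, which amounts to an alternating sequence of K-orthogonal projections in von Neumann's sense. The RBF kernel, ridge term, block partition, and sweep count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def block_gauss_seidel_solve(K, b, blocks, n_sweeps=100):
    # Domain decomposition sweep: for each index block, solve the local
    # subsystem exactly and update the global iterate in place. For SPD K,
    # each block solve is a K-orthogonal projection of the error, so the
    # energy norm of the error decreases monotonically.
    x = np.zeros_like(b)
    for _ in range(n_sweeps):
        for idx in blocks:
            r = b[idx] - K[idx, :] @ x              # local residual
            x[idx] += np.linalg.solve(K[np.ix_(idx, idx)], r)
    return x

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2))
K = rbf_kernel(X, X) + 1e-2 * np.eye(60)            # SPD, non-sparse
b = rng.standard_normal(60)

# Three disjoint index blocks of size 20 (an arbitrary partition choice).
blocks = [np.arange(i, min(i + 20, 60)) for i in range(0, 60, 20)]
x = block_gauss_seidel_solve(K, b, blocks)
print(np.linalg.norm(K @ x - b))                    # residual shrinks with sweeps
```

In practice each local solve would use a factorization cached per subdomain, and the paper's contribution concerns how such sweeps compare with conjugate gradient iterations on large kernel systems; this toy version only shows the mechanics.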