For Principal Component Analysis in Reproducing Kernel Hilbert Spaces (KPCA), optimization over sets containing only linear combinations of n-tuples of kernel functions is investigated, where n is a positive integer smaller than the number of data points. Upper bounds are derived on the accuracy of approximating the optimal solution that is achievable without restrictions on the number of kernel functions. The upper bounds decrease with the number n of kernel functions at a rate given by the sum of two terms, one proportional to n^{-1/2} and the other to n^{-1}, and they depend on the maximum eigenvalue of the Gram matrix of the kernel with respect to the data. Both primal and dual formulations of KPCA are considered. The estimates provide insight into the effectiveness of sparse KPCA techniques, which aim to reduce the computational cost of expansions in terms of kernel units.
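For reference, the unrestricted KPCA problem that the bounds above approximate can be sketched as follows: build the Gram matrix of the kernel on the data, center it in feature space, and take its leading eigenvectors as expansion coefficients over all kernel functions. This is a minimal numpy illustration, not the paper's method; the function name, the choice of a Gaussian (RBF) kernel, and the parameter `gamma` are assumptions made here for concreteness.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Minimal sketch of (non-sparse) KPCA with a Gaussian kernel.

    The Gaussian kernel and `gamma` are illustrative choices, not
    prescribed by the text. Returns the projections of the data onto
    the leading kernel principal components and the corresponding
    eigenvalues of the centered Gram matrix.
    """
    # Gram matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)

    # Center the Gram matrix in feature space:
    # K_c = K - 1_m K - K 1_m + 1_m K 1_m, with 1_m = (1/m) * ones
    m = K.shape[0]
    one = np.full((m, m), 1.0 / m)
    Kc = K - one @ K - K @ one + one @ K @ one

    # Eigendecomposition of the symmetric centered Gram matrix;
    # its maximum eigenvalue is the quantity the error bounds depend on.
    eigvals, eigvecs = np.linalg.eigh(Kc)          # ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]  # take the largest
    lam, alpha = eigvals[idx], eigvecs[:, idx]

    # Scale coefficients so each principal direction has unit norm
    # in the feature space (alpha_k / sqrt(lambda_k)).
    alpha = alpha / np.sqrt(np.maximum(lam, 1e-12))

    # Projections of the training points onto the components.
    return Kc @ alpha, lam
```

Sparse KPCA variants, as discussed in the abstract, would restrict the expansion to n of the kernel functions instead of using all of them; the full eigendecomposition here costs O(m^3) in the number m of data points, which is the cost those techniques aim to reduce.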