The Wiener filter (WF) is widely used for inverse problems. Among all linear operators, it produces, from an observed signal, the estimate that minimizes the squared error averaged over the original and observed signals. The kernel WF (KWF), a direct extension of the WF, has the drawback that additive noise must be represented by samples; since the computational complexity of kernel methods grows with the number of samples, this incurs a large computational cost. By using a first-order approximation of the kernel functions, we realize a KWF that handles such noise not through samples but as a random variable. We also propose an error-estimation method for kernel filters based on the same approximation. To demonstrate the advantages of the proposed methods, we conducted experiments on image denoising and error estimation. We further apply the KWF to classification, since the KWF approximates the maximum a posteriori classifier, which attains the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise, or as a new regularization that suppresses changes in the input space, whereas the ordinary regularization for kernel methods suppresses changes in the feature space. To demonstrate these advantages, we conducted experiments on binary and multiclass classification and on classification in the presence of noise.
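The paper's kernel construction is not reproduced here, but the baseline it extends can be sketched. The following is a minimal illustration, on hypothetical synthetic data, of the ordinary linear Wiener filter the abstract refers to: the linear operator W = C_xy C_yy^{-1} that minimizes the mean squared error between the estimate W y and the original signal x for zero-mean signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data (for illustration only): clean signals x with
# correlated components, observed as y = x + n with additive white noise.
dim, n_train, n_test = 8, 500, 1000
A = rng.standard_normal((dim, dim))           # mixing matrix -> correlated signal
X = rng.standard_normal((n_train, dim)) @ A   # clean training signals
Y = X + rng.standard_normal(X.shape)          # noisy observations (sigma = 1)

# Linear Wiener filter: W = C_xy C_yy^{-1}, the linear operator that
# minimizes E||W y - x||^2 over all linear operators (zero-mean case).
C_yy = Y.T @ Y / n_train
C_xy = X.T @ Y / n_train
W = C_xy @ np.linalg.inv(C_yy)

# Denoise held-out observations and compare squared errors.
X_new = rng.standard_normal((n_test, dim)) @ A
Y_new = X_new + rng.standard_normal(X_new.shape)
X_hat = Y_new @ W.T

mse_raw = np.mean(np.sum((Y_new - X_new) ** 2, axis=1))  # error of raw observation
mse_wf = np.mean(np.sum((X_hat - X_new) ** 2, axis=1))   # error after Wiener filtering
```

The KWF replaces this linear map with an operator in a reproducing kernel Hilbert space; the paper's contribution is handling the additive noise analytically, as a random variable under a first-order kernel approximation, rather than representing it by samples as above.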