A Theory of Networks for Approximation and Learning
Regularization on discrete spaces
PR'05 Proceedings of the 27th DAGM conference on Pattern Recognition
Editorial: Partially supervised learning for pattern recognition
Pattern Recognition Letters
Most of the emphasis in machine learning has been placed on parametric models, in which the purpose of the learning algorithm is to adjust weights according to appropriate optimization criteria. However, schemes based on direct data inference, such as K-nearest neighbor, have also become quite popular. Recently, a number of researchers have proposed classification and regression methods based on different forms of diffusion processes originating from the labelled examples. The aim of this paper is to motivate diffusion learning from the continuum setting by using Tikhonov's regularization framework. Diffusion learning is discussed in both the continuous and the discrete setting, and an intriguing link is established between the Green function of the regularization operators and the structure of the graph in the corresponding discrete structure. It is pointed out that an appropriate choice of the smoothing operators implements a regularization whose Green functions correspond to sparse matrices, which in turn imposes a corresponding structure on the graph associated with the training set. Finally, the choice of the smoothness operator is given a Bayesian interpretation in terms of a prior probability on the expected values of the function.
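The link described in the abstract between regularization and diffusion from labelled examples can be illustrated with a minimal sketch of discrete (graph) Tikhonov regularization. The graph, edge weights, seed labels, and the regularization parameter `lam` below are all illustrative assumptions, not taken from the paper; the sketch solves the standard problem min_f ||f - y||^2 + lam * f'Lf, whose solution involves the matrix (I + lam*L)^{-1}, playing the role of the Green function of the discrete regularization operator.

```python
import numpy as np

# Hypothetical small graph: 6 nodes in two clusters joined by a weak bridge.
# All node indices, weights, and labels are illustrative assumptions.
n = 6
W = np.zeros((n, n))
edges = [(0, 1), (1, 2), (0, 2),   # cluster A
         (3, 4), (4, 5), (3, 5),   # cluster B
         (2, 3)]                   # bridge between the clusters
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1            # the bridge is weakly weighted

D = np.diag(W.sum(axis=1))         # degree matrix
L = D - W                          # combinatorial graph Laplacian

# Partial labelling: +1 on node 0, -1 on node 5, 0 (unknown) elsewhere.
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0, -1.0])

# Discrete Tikhonov regularization: minimize ||f - y||^2 + lam * f' L f.
# Closed-form solution f = (I + lam*L)^{-1} y; the inverse matrix acts as
# the Green function of the regularization operator, diffusing the seed
# labels along the graph structure.
lam = 1.0
G = np.linalg.inv(np.eye(n) + lam * L)   # Green matrix (dense here)
f = G @ y

# Nodes in cluster A receive positive scores, cluster B negative ones.
print(np.round(f, 3))
```

Because I + lam*L is an M-matrix, its inverse G has nonnegative entries, so each unlabelled node's score is a positive mixture of the seed labels, weighted by graph proximity; this is the diffusion behaviour the abstract alludes to.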