Diffusion Learning and Regularization

  • Authors: Marco Gori
  • Affiliations: Dipartimento di Ingegneria dell'Informazione, University of Siena, Via Roma 56, 53100 Siena, Italy, marco@dii.unisi.it
  • Venue: Proceedings of the 2009 conference on New Directions in Neural Networks: 18th Italian Workshop on Neural Networks: WIRN 2008
  • Year: 2009


Abstract

Most of the emphasis in machine learning has been placed on parametric models, in which the learning algorithm adjusts weights according to appropriate optimization criteria. However, schemes based on direct data inference, such as K-nearest neighbor, have also become quite popular. Recently, several methods have been proposed that perform classification and regression through different forms of diffusion processes from the labelled examples. The aim of this paper is to motivate diffusion learning from the continuum setting by using Tikhonov's regularization framework. Diffusion learning is discussed in both the continuous and the discrete setting, and an intriguing link is established between the Green function of the regularization operators and the structure of the graph in the corresponding discrete structure. It is pointed out that an appropriate choice of the smoothing operators implements a regularization whose Green functions correspond to a sparse matrix, which in turn imposes a structure on the graph associated with the training set. Finally, the choice of the smoothness operator is given a Bayesian interpretation in terms of a prior probability on the expected values of the function.
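To make the discrete setting concrete, the following minimal sketch illustrates one standard form of diffusion learning from labelled examples: label propagation regularized by a graph Laplacian. The graph, edge weights, and regularization constant here are all illustrative assumptions, not taken from the paper; the closed-form solve plays the role of the discrete Green function (diffusion kernel) mentioned in the abstract.

```python
import numpy as np

# Toy graph of 6 nodes: two 3-node clusters joined by one weak bridge edge.
# W is a symmetric weight matrix; all values are illustrative.
W = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2),   # cluster A
         (3, 4), (4, 5), (3, 5),   # cluster B
         (2, 3)]                   # bridge between clusters
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1            # down-weight the bridge

# Combinatorial graph Laplacian L = D - W; its quadratic form
# f^T L f = 1/2 * sum_ij W_ij (f_i - f_j)^2 penalizes labelings
# that vary across strongly connected nodes (the smoothness term).
D = np.diag(W.sum(axis=1))
L = D - W

# One labelled example per cluster; unlabeled nodes get y = 0.
y = np.array([+1.0, 0, 0, 0, 0, -1.0])

# Regularized diffusion: minimize ||f - y||^2 + alpha * f^T L f,
# whose closed-form minimizer is f = (I + alpha L)^{-1} y.
# The matrix (I + alpha L)^{-1} acts as the discrete diffusion kernel.
alpha = 1.0
f = np.linalg.solve(np.eye(6) + alpha * L, y)

# Classify every node by the sign of the diffused label function.
labels = np.sign(f)
print(labels)  # → [ 1.  1.  1. -1. -1. -1.]
```

Because the bridge edge is weak, the diffusion from each labelled seed stays mostly within its own cluster, so all of cluster A is classified positive and all of cluster B negative. A sparse Laplacian here corresponds to the sparse-matrix structure the abstract associates with suitably chosen smoothing operators.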