On information regularization

  • Authors:
  • Adrian Corduneanu; Tommi Jaakkola

  • Affiliations:
  • Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA (both authors)

  • Venue:
  • UAI'03 Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 2002

Abstract

We formulate a principle for classification with knowledge of the marginal distribution over the data points (unlabeled data). The principle is cast in terms of Tikhonov-style regularization, where the regularization penalty articulates the way in which the marginal density should constrain otherwise unrestricted conditional distributions. Specifically, the penalty discourages any information introduced between the examples and labels beyond what is provided by the available labeled examples. The work extends (Szummer and Jaakkola, 2003) to multiple dimensions, providing a regularizer that is independent of the covering of the space used in the derivation. In addition, we lay out the learning-theoretic framework for classification with information regularization and provide a sample complexity bound. We illustrate the regularization principle in practice by restricting the class of conditional distributions to logistic regression models and constructing the regularization penalty from a finite set of unlabeled examples.
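
A minimal sketch of the idea, under illustrative assumptions: the conditional model is logistic regression, overlapping local regions are built as k-nearest-neighbor sets of the unlabeled points, regions are weighted uniformly, and the penalty sums over regions the mutual information between example identity and label under the current model. The region construction, the weighting, and all function names below are assumptions for demonstration, not the paper's exact derivation.

```python
# Illustrative sketch of information regularization with a logistic
# regression conditional model. Region construction and weighting are
# simplifying assumptions, not the paper's derivation.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

def knn_regions(X, k=5):
    """Overlapping local regions: the k nearest neighbors of each point."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return [np.argsort(row)[:k] for row in D]

def info_penalty(w, X_unl, regions):
    """Sum over regions of I_R(x; y): mutual information between example
    identity and label, with P(y|x) given by the current model."""
    total = 0.0
    for idx in regions:
        p = expit(X_unl[idx] @ w)            # P(y=1 | x) for x in region
        P = np.stack([1.0 - p, p], axis=1)   # per-example label distributions
        P_bar = P.mean(axis=0)               # region-averaged label marginal
        # I_R = average over x in R of KL( P(y|x) || P_R(y) )
        total += np.sum(P * (np.log(P + 1e-12) - np.log(P_bar + 1e-12))) / len(idx)
    return total

def objective(w, X_lab, y_lab, X_unl, regions, lam):
    """Labeled-data negative log-likelihood plus the information penalty."""
    p = expit(X_lab @ w)
    nll = -np.sum(y_lab * np.log(p + 1e-12)
                  + (1.0 - y_lab) * np.log(1.0 - p + 1e-12))
    return nll + lam * info_penalty(w, X_unl, regions)

# Toy usage: two labeled points, a cloud of unlabeled points.
rng = np.random.default_rng(0)
X_unl = rng.normal(size=(100, 2))
X_lab = np.array([[2.0, 2.0], [-2.0, -2.0]])
y_lab = np.array([1.0, 0.0])
regions = knn_regions(X_unl, k=5)
res = minimize(objective, x0=np.zeros(2),
               args=(X_lab, y_lab, X_unl, regions, 0.5))
print("learned weights:", res.x)
```

Minimizing this objective favors decision boundaries that cross low-density areas: wherever unlabeled points cluster into a region, any disagreement among the per-example label distributions shows up as mutual information and is penalized, pushing labels within dense regions toward agreement.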