Learning from partially supervised data using mixture models and belief functions

  • Authors:
  • E. Côme; L. Oukhellou; T. Denœux; P. Aknin

  • Affiliations:
  • Institut National de Recherche sur les Transports et leur Sécurité (INRETS) - LTN, 2 av. Malleret-Joinville, 94114 Arcueil Cedex, France and Centre de Recherches de Royallieu, Université ...
  • Institut National de Recherche sur les Transports et leur Sécurité (INRETS) - LTN, 2 av. Malleret-Joinville, 94114 Arcueil Cedex, France and Université Paris XII - CERTES, 61 av. du ...
  • Centre de Recherches de Royallieu, Université de Technologie de Compiègne - HEUDIASYC, B.P. 20529, 60205 Compiègne Cedex, France
  • Institut National de Recherche sur les Transports et leur Sécurité (INRETS) - LTN, 2 av. Malleret-Joinville, 94114 Arcueil Cedex, France

  • Venue:
  • Pattern Recognition
  • Year:
  • 2009

Abstract

This paper addresses classification problems in which the class membership of training data is only partially known. Each learning sample is assumed to consist of a feature vector x_i ∈ X and an imprecise and/or uncertain "soft" label m_i, defined as a Dempster-Shafer basic belief assignment over the set of classes. This framework thus generalizes many kinds of learning problems, including supervised, unsupervised and semi-supervised learning. Here, it is assumed that the feature vectors are generated from a mixture model. Using the generalized Bayesian theorem, an extension of Bayes' theorem in the belief function framework, we derive a criterion generalizing the likelihood function. A variant of the expectation-maximization (EM) algorithm, dedicated to the optimization of this criterion, is proposed, allowing estimates of the model parameters to be computed. Experimental results demonstrate the ability of this approach to exploit partial information about class labels.
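
As a concrete illustration of the kind of procedure the abstract describes, the sketch below implements a plausibility-weighted EM for a Gaussian mixture with soft labels. It is a minimal sketch, not the paper's exact derivation: it assumes each soft label m_i enters the criterion only through its class plausibilities pl_ik (the contour function of m_i), so that the E-step responsibilities are proportional to pl_ik * pi_k * f(x_i; theta_k). The function name evidential_em and all implementation details are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def evidential_em(X, pl, K, n_iter=100, tol=1e-6, seed=0):
    """EM for a Gaussian mixture learned from soft-labeled data.

    X  : (n, d) array of feature vectors.
    pl : (n, K) array of class plausibilities derived from each
         Dempster-Shafer soft label m_i (pl[i, k] is the sum of the
         masses m_i assigns to subsets of classes containing class k).
         All-ones rows encode unlabeled samples; one-hot rows encode
         precisely labeled samples.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)                        # mixing proportions
    mu = X[rng.choice(n, K, replace=False)]         # initial means
    Sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * K)
    prev = -np.inf
    for _ in range(n_iter):
        # E-step: responsibilities weighted by the class plausibilities.
        dens = np.column_stack([
            multivariate_normal.pdf(X, mu[k], Sigma[k]) for k in range(K)
        ])                                          # (n, K) densities
        w = pl * pi * dens                          # plausibility-weighted joint
        loglik = np.log(w.sum(axis=1)).sum()        # generalized log-likelihood
        t = w / w.sum(axis=1, keepdims=True)        # (n, K) responsibilities
        # M-step: standard Gaussian-mixture updates with weights t.
        nk = t.sum(axis=0)
        pi = nk / n
        mu = (t.T @ X) / nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (t[:, k, None] * diff).T @ diff / nk[k] \
                       + 1e-6 * np.eye(d)
        if loglik - prev < tol:                     # stop on convergence
            break
        prev = loglik
    return pi, mu, Sigma, t
```

With pl set to all ones the procedure reduces to ordinary unsupervised EM for a Gaussian mixture, and with one-hot rows it reduces to supervised maximum-likelihood estimation, which mirrors the abstract's claim that the framework subsumes supervised, unsupervised and semi-supervised learning as special cases.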