Parametric Embedding for Class Visualization

  • Authors:
  • Tomoharu Iwata, Kazumi Saito, Naonori Ueda, Sean Stromsten, Thomas L. Griffiths, Joshua B. Tenenbaum

  • Affiliations:
  • NTT Communication Science Laboratories, Kyoto 619-0237, Japan (iwata@cslab.kecl.ntt.co.jp, saito@cslab.kecl.ntt.co.jp, ueda@cslab.kecl.ntt.co.jp)
  • BAE Systems Advanced Information Technologies, Burlington, MA 01803, U.S.A. (sean.stromsten@baesystems.com)
  • Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, U.S.A. (tom_griffiths@brown.edu)
  • Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02142, U.S.A. (jbt@mit.edu)

  • Venue:
  • Neural Computation
  • Year:
  • 2007


  • Keywords:
  • Visualization

Abstract

We propose a new method, parametric embedding (PE), that embeds objects, together with their class structure, into a low-dimensional visualization space. PE takes as input a set of class-conditional probabilities for given data points and tries to preserve this structure in the embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a Gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into a classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations, since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations between words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
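The core idea described in the abstract can be sketched in a few lines of NumPy: given class posteriors p(c|n), place each object and each class center in a low-dimensional space, model the embedding-space posteriors q(c|x_n) with a unit-covariance Gaussian mixture, and run gradient descent on the sum of KL divergences KL(p(·|n) || q(·|x_n)). This is only an illustrative sketch, not the authors' implementation; the function name, fixed learning rate, and unit covariance are assumptions made here for brevity.

```python
import numpy as np

def parametric_embedding(P, dim=2, n_iter=500, lr=0.1, seed=0):
    """Illustrative sketch of parametric embedding (PE).

    P : (N, C) array of class posteriors p(c|n); each row sums to 1.
    Returns (N, dim) object coordinates X and (C, dim) class centers Phi.
    """
    rng = np.random.default_rng(seed)
    N, C = P.shape
    X = rng.normal(scale=1e-2, size=(N, dim))    # object coordinates
    Phi = rng.normal(scale=1e-2, size=(C, dim))  # Gaussian class centers

    for _ in range(n_iter):
        # q(c|x_n) from a unit-covariance Gaussian mixture in the embedding
        d2 = ((X[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)  # (N, C)
        logq = -0.5 * d2
        logq -= logq.max(axis=1, keepdims=True)  # numerical stability
        Q = np.exp(logq)
        Q /= Q.sum(axis=1, keepdims=True)

        # Gradient of E = sum_n KL(p(.|n) || q(.|x_n)) for this model:
        #   dE/dx_n  = sum_c (p_nc - q_nc) (x_n - phi_c)
        #   dE/dphi_c = sum_n (p_nc - q_nc) (phi_c - x_n)
        W = P - Q                                        # (N, C)
        gX = W.sum(axis=1, keepdims=True) * X - W @ Phi  # row sums are 0
        gPhi = W.sum(axis=0)[:, None] * Phi - W.T @ X
        X -= lr * gX
        Phi -= lr * gPhi
    return X, Phi
```

Note that the cost of each iteration is dominated by the (N, C) distance matrix, matching the abstract's point that the complexity scales with the number of objects times the number of classes rather than with all object pairs.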