Type 1 and 2 mixtures of Kullback-Leibler divergences as cost functions in dimensionality reduction based on similarity preservation

  • Authors:
John A. Lee; Emilie Renard; Guillaume Bernard; Pierre Dupont; Michel Verleysen

  • Affiliations:
Université catholique de Louvain, Molecular Imaging, Radiotherapy, and Oncology (IREC), Avenue Hippocrate 55, B-1200 Bruxelles, Belgium; Université catholique de Louvain, Machine Learning Group (ICTEAM), Place du Levant 3, B-1348 Louvain-la-Neuve, Belgium; Université catholique de Louvain, Molecular Imaging, Radiotherapy, and Oncology (IREC), Avenue Hippocrate 55, B-1200 Bruxelles, Belgium; Université catholique de Louvain, Machine Learning Group (ICTEAM), Place du Levant 3, B-1348 Louvain-la-Neuve, Belgium; Université catholique de Louvain, Machine Learning Group (ICTEAM), Place du Levant 3, B-1348 Louvain-la-Neuve, Belgium

  • Venue:
  • Neurocomputing
  • Year:
  • 2013


Abstract

Stochastic neighbor embedding (SNE) and its variants are methods of dimensionality reduction (DR) that involve normalized softmax similarities derived from pairwise distances. These methods try to reproduce in the low-dimensional embedding space the similarities observed in the high-dimensional data space. Their outstanding experimental results, compared to previous state-of-the-art methods, originate from their capability to foil the curse of dimensionality. Previous work has shown that this immunity stems partly from a property of shift invariance that allows appropriately normalized softmax similarities to mitigate the phenomenon of norm concentration. This paper investigates a complementary aspect, namely, the cost function that quantifies the mismatch between similarities computed in the high- and low-dimensional spaces. Stochastic neighbor embedding and its variant t-SNE rely on a single Kullback-Leibler divergence, whereas a weighted mixture of two dual KL divergences is used in neighborhood retrieval and visualization (NeRV). We propose in this paper a different mixture of KL divergences, which is a scaled version of the generalized Jensen-Shannon divergence. We show experimentally that this divergence produces embeddings that better preserve small K-ary neighborhoods, as compared to both the single KL divergence used in SNE and t-SNE and the mixture used in NeRV. These results allow us to conclude that future improvements in similarity-based DR will likely emerge from better definitions of the cost function.
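
For readers who want to experiment with the cost functions compared in the abstract, below is a minimal NumPy sketch of the three families of divergences it mentions: the single Kullback-Leibler divergence used by SNE and t-SNE, the NeRV-style weighted sum of the two dual KL divergences, and a generalized Jensen-Shannon mixture built from the blended distribution beta*P_i + (1-beta)*Q_i. The function names, the parameters lam and beta, and the omission of the paper's exact scaling factor are assumptions made for illustration; P and Q stand for row-normalized similarity matrices computed in the high- and low-dimensional spaces, and the code does not reproduce the authors' implementation.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D_KL(p || q) between two discrete
    probability vectors (e.g. row-normalized softmax similarities)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return np.sum(p * np.log(p / q))

def sne_cost(P, Q):
    """SNE / t-SNE style cost: sum over points i of D_KL(P_i || Q_i)."""
    return sum(kl(p_i, q_i) for p_i, q_i in zip(P, Q))

def nerv_cost(P, Q, lam=0.5):
    """NeRV-style cost: weighted mixture of the two dual KL divergences,
    lam * D_KL(P_i || Q_i) + (1 - lam) * D_KL(Q_i || P_i), summed over i."""
    return sum(lam * kl(p_i, q_i) + (1.0 - lam) * kl(q_i, p_i)
               for p_i, q_i in zip(P, Q))

def gen_js_cost(P, Q, beta=0.5):
    """Generalized Jensen-Shannon mixture (without the scaling factor the
    paper applies, which is an assumption here): with the blended
    distribution M_i = beta * P_i + (1 - beta) * Q_i, accumulate
    beta * D_KL(P_i || M_i) + (1 - beta) * D_KL(Q_i || M_i) over i."""
    total = 0.0
    for p_i, q_i in zip(P, Q):
        m_i = beta * np.asarray(p_i, float) + (1.0 - beta) * np.asarray(q_i, float)
        total += beta * kl(p_i, m_i) + (1.0 - beta) * kl(q_i, m_i)
    return total

# Toy usage with random row-normalized similarity matrices.
rng = np.random.default_rng(0)
P = rng.random((5, 5)); P /= P.sum(axis=1, keepdims=True)
Q = rng.random((5, 5)); Q /= Q.sum(axis=1, keepdims=True)
print(sne_cost(P, Q), nerv_cost(P, Q, lam=0.3), gen_js_cost(P, Q, beta=0.3))
```

Unlike the NeRV mixture, which blends two KL divergences, the generalized Jensen-Shannon form blends the distributions inside a KL divergence, so it stays finite even when some q_ij vanish where p_ij does not; this is one standard motivation for preferring it, though the abstract itself argues only from the empirical K-ary neighborhood preservation results.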