Information-theoretic enhancement learning and its application to visualization of self-organizing maps

  • Authors:
  • Ryotaro Kamimura

  • Affiliations:
  • IT Education Center, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa 259-1292, Japan

  • Venue:
  • Neurocomputing
  • Year:
  • 2010


  • Keywords:
  • Visualization

Abstract

In this paper, we propose a new information-theoretic method called "enhancement learning" to interpret the configuration of competitive networks. When applied to self-organizing maps, the method aims to make clusters in the data easier to see at different levels of detail. In enhancement learning, connection weights are actively modified to enhance competitive units for better interpretation, even at the expense of quantization errors in the extreme case, because error minimization is not the main objective of enhancement learning. After the connection weights have been modified, enhancement learning can generate many different network configurations simply by changing the enhancement parameter. A useful way to combine the information from these configurations is to extract the features that are common to all of them as well as those that are specific to only some. In addition, we propose relative information, namely mutual information that takes into account the corresponding errors between input patterns and connection weights. Relative information provides a guideline for deciding which network configuration, among the many possibilities, deserves particular attention. We applied the method to an artificial data problem, the well-known Iris problem, the Haberman data and a cancer data problem. In all the problems, the experimental results confirmed that, as the enhancement parameter was increased, multiple configurations were generated in which the number of boundaries visible in the U-matrices and component planes increased. In addition, relative information proved effective in suggesting how the appropriate number of clusters might be detected.
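
The abstract describes enhancement learning only at a high level. The following is a minimal, hypothetical sketch (in Python with NumPy) of one way the idea could be read: competitive activations of a trained SOM codebook are sharpened by an enhancement-style parameter, and a relative-information score combines mutual information with the quantization error between input patterns and connection weights. The function names, the softmax-style activation, and the particular scaling of mutual information by quantization error are assumptions made for illustration, not the paper's own equations.

    import numpy as np

    def unit_activations(X, W, beta=1.0):
        # Squared distances between every input pattern and every unit's weights.
        d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
        # Softmax-style competitive activations; a larger beta sharpens
        # ("enhances") the winning units.  Treating beta as the enhancement
        # parameter is an assumption, not the paper's exact definition.
        a = np.exp(-beta * d2)
        return a / a.sum(axis=1, keepdims=True)   # p(j | s)

    def mutual_information(P_js):
        # I = sum_s p(s) sum_j p(j|s) log[ p(j|s) / p(j) ], with p(s) uniform.
        p_s = 1.0 / P_js.shape[0]
        p_j = P_js.mean(axis=0)                    # p(j)
        ratio = np.where(P_js > 0, P_js / p_j, 1.0)
        return (p_s * P_js * np.log(ratio)).sum()

    def relative_information(X, W, beta=1.0):
        # Hypothetical "relative information": mutual information divided by
        # the average quantization error between inputs and weights, so that
        # sharper configurations are only favoured while they still fit the
        # data reasonably well.
        P_js = unit_activations(X, W, beta)
        d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
        qe = (P_js * d2).sum(axis=1).mean()
        return mutual_information(P_js) / (qe + 1e-12)

    # Sweep the enhancement-style parameter to obtain several "configurations"
    # of the same map and compare them via the relative-information score.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 4))      # stand-in for e.g. the Iris inputs
    W = rng.normal(size=(25, 4))       # stand-in for a trained 5x5 SOM codebook
    for beta in (0.5, 1.0, 2.0, 4.0):
        print(beta, relative_information(X, W, beta))

In this reading, sweeping beta plays the role of generating multiple network configurations, and the configuration with the highest relative-information score would be the one singled out for closer inspection of its U-matrix and component planes.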