Information-Theoretic Competitive Learning with Inverse Euclidean Distance Output Units

  • Authors:
  • Ryotaro Kamimura

  • Affiliations:
  • Information Science Laboratory and Future Science and Technology Joint Research Center, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa 259-1292, Japan. e-mail: ryo@cc.u-tokai.ac.jp

  • Venue:
  • Neural Processing Letters
  • Year:
  • 2003

Abstract

In this paper, we propose a new information-theoretic competitive learning method. We first construct the learning method in single-layered networks and then extend it to supervised multi-layered networks. Competitive unit outputs are computed as the inverse of the Euclidean distance between input patterns and connection weights: the smaller the distance, the stronger the competitive unit output. To realize competition, neither the winner-take-all algorithm nor lateral inhibition is used. Instead, the new method is based on maximizing the mutual information between input patterns and competitive units. In maximizing mutual information, the entropy of the competitive units is increased as much as possible, which means that all competitive units must be used equally; thus, no under-utilized or dead neurons are generated. With multi-layered networks, noise-tolerance performance can be improved by unifying information maximization and minimization. We applied the method with single-layered networks to a simple artificial data problem and an actual road classification problem. In both cases, the experimental results confirmed that the new method produces final solutions almost independently of the initial conditions and that classification performance is significantly improved. We then applied multi-layered networks to a character recognition problem and a political data analysis. In these problems, we showed that noise-tolerance performance was improved by decreasing the information content of the input patterns to a certain point.
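
The output rule and objective described in the abstract can be illustrated with a short sketch. The following NumPy code is a minimal, illustrative reading of the abstract only, not the paper's implementation; the array names X (input patterns), W (connection weights), the smoothing constant eps, and the assumption of equiprobable input patterns are ours.

```python
import numpy as np

def competitive_outputs(X, W, eps=1e-8):
    """Outputs are the inverse Euclidean distance between each input
    pattern (rows of X) and each weight vector (rows of W); the
    smaller the distance, the stronger the output."""
    # dists[s, j] = ||x_s - w_j||
    dists = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return 1.0 / (dists + eps)  # eps avoids division by zero

def mutual_information(X, W):
    """I(S; J) = H(J) - H(J|S), where p(j|s) is obtained by
    normalizing the inverse-distance outputs over units j, and the
    input patterns s are assumed equiprobable."""
    v = competitive_outputs(X, W)
    p_j_given_s = v / v.sum(axis=1, keepdims=True)  # p(j|s)
    p_j = p_j_given_s.mean(axis=0)                  # p(j)
    h_j = -np.sum(p_j * np.log(p_j + 1e-12))        # H(J)
    h_j_given_s = -np.mean(
        np.sum(p_j_given_s * np.log(p_j_given_s + 1e-12), axis=1)
    )                                               # H(J|S)
    return h_j - h_j_given_s
```

Under this reading, gradient ascent on mutual_information with respect to W raises the unit entropy H(J), pushing all competitive units toward equal use, which is how the method avoids under-utilized or dead neurons without winner-take-all competition.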