Forced information and information loss in information-theoretic competitive learning

  • Authors:
  • Ryotaro Kamimura

  • Affiliations:
  • Information Science Laboratory, Information Technology Center, Kanagawa, Japan

  • Venue:
  • AIAP'07: Proceedings of the 25th IASTED International Multi-Conference: Artificial Intelligence and Applications
  • Year:
  • 2007

Abstract

In this paper, we propose a new computational method to accelerate the process of information maximization, together with a new technique, based on the concept of information loss, for extracting important features from input patterns. Information-theoretic competitive learning has been proposed to solve fundamental problems of competitive learning, such as the dead neuron problem, and has many practical applications. However, one of its major problems is that information in competitive units can increase slowly, depending on the given problem. To overcome this shortcoming, we propose a new computational method in which maximum information is assumed to be achieved before learning begins. With this method, we force networks to converge much faster. In addition, we propose information loss, which measures the difference in information between the original network and a network with one input unit removed. If the information loss for an input unit is large, that unit plays a very important role. With forced information and information loss, information-theoretic competitive learning is expected to be applicable to large-scale practical problems.
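The information-loss idea can be illustrated with a small sketch. This is not the paper's exact formulation; it assumes a simple Gaussian competitive-unit activation, a uniform distribution over input patterns, and hypothetical random data, and it measures the drop in mutual information between competitive units and patterns when one input unit is removed:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))   # 20 input patterns, 4 input units (hypothetical data)
W = rng.normal(size=(3, 4))    # weight vectors of 3 competitive units

def unit_information(X, W, sigma=1.0):
    """Mutual information between competitive units and input patterns,
    using Gaussian activations normalized into firing probabilities."""
    # Squared distances between each pattern s and each weight vector j.
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    act = np.exp(-d2 / (2.0 * sigma ** 2))
    p_j_given_s = act / act.sum(axis=1, keepdims=True)   # p(j | s)
    p_j = p_j_given_s.mean(axis=0)                       # p(j), uniform p(s)
    # I = sum_s p(s) sum_j p(j|s) log[ p(j|s) / p(j) ]
    return np.mean(np.sum(p_j_given_s * np.log(p_j_given_s / p_j), axis=1))

full = unit_information(X, W)
for k in range(X.shape[1]):
    # Remove input unit k from both the patterns and the weights,
    # then compare the information of the reduced network with the original.
    reduced = unit_information(np.delete(X, k, axis=1),
                               np.delete(W, k, axis=1))
    print(f"information loss for input unit {k}: {full - reduced:.4f}")
```

Under this reading, input units whose removal causes a large drop in information are the important features; the sketch leaves out the learning rule itself and the paper's forced-information acceleration.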