Self-enhancement learning: self-supervised and target-creating learning

  • Authors:
  • Ryotaro Kamimura

  • Affiliations:
  • IT Education Center, Tokai University, Japan

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Quantified Score

Hi-index 0.00

Abstract

In this paper, we propose a new learning method called "self-enhancement learning." In this method, a network enhances its own state, and this enhanced state is then imitated by another state of the same network. The word "target" in our model means that a target is created spontaneously by the network, which must then try to attain it. Enhancement is realized by changing the Gaussian width, or enhancement parameter. With different enhancement parameters, we can set up different states of a network. In particular, we set up an enhanced state and a relaxed state, and the relaxed state tries to imitate the enhanced state as closely as possible. To demonstrate the effectiveness of this method, we apply self-enhancement learning to the SOM. For this purpose, we introduce collectiveness into the enhanced state, in which all neurons respond collectively to input patterns. This enhanced and collective state is then imitated by the other, non-enhanced and relaxed state. We applied the method to the Iris problem. Experimental results showed that the U-matrices obtained were very similar to those produced by the conventional SOM, while considerably better performance was obtained in terms of quantization and topographic errors. These results suggest that self-enhancement learning can be applied to many different neural network models.
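
The abstract describes enhancement as a change in the Gaussian width (the enhancement parameter), so that a sharp, enhanced state serves as a self-created target that a flatter, relaxed state tries to imitate. The sketch below illustrates one plausible reading of that idea for Gaussian competitive units; the function names, the use of KL divergence as the imitation measure, and the specific parameter values are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def neuron_activations(x, weights, sigma):
    # Gaussian response of each neuron to input x; a smaller sigma yields
    # sharper ("enhanced") responses, a larger sigma flatter ("relaxed") ones.
    d2 = np.sum((weights - x) ** 2, axis=1)
    act = np.exp(-d2 / (2.0 * sigma ** 2))
    return act / act.sum()  # normalize to a probability over neurons

def imitation_divergence(x, weights, sigma_enhanced, sigma_relaxed):
    # KL divergence from the enhanced state to the relaxed state: making it
    # small means the relaxed state imitates the enhanced (target) state.
    p = neuron_activations(x, weights, sigma_enhanced)   # enhanced state
    q = neuron_activations(x, weights, sigma_relaxed)    # relaxed state
    return np.sum(p * np.log(p / q))

# Toy usage: 3-dimensional inputs, 5 competitive neurons.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))
x = rng.normal(size=3)
print(imitation_divergence(x, W, sigma_enhanced=0.5, sigma_relaxed=2.0))
```

In an actual learning procedure, such a divergence (or a similar imitation measure) would be reduced with respect to the connection weights so that the relaxed state approaches the self-created target; how the paper concretely embeds this into the SOM's update rules is not specified in the abstract.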