Learning in the multiple class random neural network

  • Authors:
  • E. Gelenbe; K. F. Hussain

  • Affiliations:
  • Sch. of Electr. Eng. & Comput. Sci., Central Florida Univ., Orlando, FL

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2002


Abstract

Spiked recurrent neural networks with "multiple classes" of signals were recently introduced by Gelenbe and Fourneau (1999) as an extension of the recurrent spiked random neural network introduced by Gelenbe (1989). These networks can represent interconnected neurons that simultaneously process multiple streams of data, such as the color information of images, or networks that simultaneously process streams of data from multiple sensors. This paper introduces a learning algorithm that applies to both recurrent and feedforward multiple signal class random neural networks (MCRNNs). It is based on gradient descent optimization of a cost function. The algorithm exploits the analytical properties of the MCRNN and requires the solution of a system of nC linear and nC nonlinear equations (where C is the number of signal classes and n is the number of neurons) each time the network learns a new input-output pair. Thus, the algorithm is of O([nC]^3) complexity for the recurrent case and O([nC]^2) for a feedforward MCRNN. Finally, we apply this learning algorithm to color texture modeling, in which the weights of a recurrent network are learned directly from a color texture image. The same trained recurrent network is then used to generate a synthetic texture that imitates the original. The approach is illustrated on a variety of synthetic and natural textures.
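
To make the training procedure concrete, the sketch below shows a simplified single input-output learning step for a multiple-class network in Python. It is only an illustration of the general scheme described in the abstract: the fixed-point iteration for the stationary excitation probabilities q[i, c] is a plausible stand-in for the paper's nC nonlinear equations, and the gradient is estimated here by finite differences for readability, whereas the paper computes it analytically from a system of nC linear equations (which is what gives the O([nC]^3) and O([nC]^2) complexities). All names, array shapes, and the particular update rule (mcrnn_fixed_point, train_step, W_plus, W_minus, etc.) are assumptions made for this example, not the authors' notation.

```python
import numpy as np

def mcrnn_fixed_point(W_plus, W_minus, Lambda, lam, r, n_iter=500, tol=1e-10):
    """Solve a set of nonlinear signal-flow equations for the stationary
    excitation probabilities q[i, c] by fixed-point iteration (illustrative
    form, not the paper's exact equations).
    Assumed shapes: W_plus, W_minus are (n, C, n, C) rate tensors w[j, d, i, c];
    Lambda, lam, r are (n, C) exogenous excitatory rates, exogenous
    inhibitory rates, and firing rates."""
    n, C = Lambda.shape
    q = np.full((n, C), 0.1)
    for _ in range(n_iter):
        lam_plus = Lambda + np.einsum('jd,jdic->ic', q, W_plus)   # total excitatory input
        lam_minus = lam + np.einsum('jd,jdic->ic', q, W_minus)    # total inhibitory input
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0 - 1e-12)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

def train_step(W_plus, W_minus, Lambda, lam, r, target, out_idx, eta=0.05, eps=1e-5):
    """One gradient-descent update of the excitatory weights for a single
    input-output pair.  The gradient of the quadratic cost is estimated by
    finite differences purely for clarity; the paper instead obtains it
    analytically by solving nC linear equations per learning step."""
    def cost(Wp):
        q = mcrnn_fixed_point(Wp, W_minus, Lambda, lam, r)
        return 0.5 * np.sum((q[out_idx] - target) ** 2)

    base = cost(W_plus)
    grad = np.zeros_like(W_plus)
    it = np.nditer(W_plus, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        Wp = W_plus.copy()
        Wp[idx] += eps
        grad[idx] = (cost(Wp) - base) / eps
    W_new = np.maximum(W_plus - eta * grad, 0.0)   # keep rates non-negative
    return W_new, base

if __name__ == "__main__":
    # Toy run: n = 3 neurons, C = 2 signal classes (e.g. two color channels).
    rng = np.random.default_rng(0)
    n, C = 3, 2
    W_plus = rng.uniform(0.0, 0.3, size=(n, C, n, C))
    W_minus = rng.uniform(0.0, 0.3, size=(n, C, n, C))
    Lambda = rng.uniform(0.1, 0.5, size=(n, C))
    lam = rng.uniform(0.0, 0.2, size=(n, C))
    r = np.ones((n, C))
    target = np.array([[0.4, 0.6]])        # desired q values for neuron 2
    W_plus, err = train_step(W_plus, W_minus, Lambda, lam, r, target, out_idx=[2])
    print(f"cost before update: {err:.4f}")
```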