Learning and parameterization of recurrent neural network arrays for brain models and practical applications

  • Authors:
  • Robert Kozma; Roman Ilin

  • Affiliations:
  • Memphis State University; Memphis State University

  • Year:
  • 2008

Abstract

The field of Artificial Neural Networks is inspired by the remarkable ability of humans and animals to learn and to adapt their actions to the environment. Feed-forward architectures, although very powerful for statistical pattern recognition, are nevertheless unable to solve challenging problems posed by the fields of reinforcement learning and image processing. At the same time, there is growing evidence that the brain is a highly recurrent structure with dynamics at the edge of stability. This work addresses the issues of parameterization and learning in two novel types of artificial neural networks: K sets and Cellular Simultaneous Recurrent Networks. Their commonality lies in the cellular structure and recurrent connections; the differences lie in the internal dynamics and in the learning paradigms employed. Both network types address the problem of intelligent control and can also be used in other applications. The studies of K models focus on their stability properties, the mechanisms employed in maintaining their non-equilibrium dynamics, and the integration of information in large K models. The studies of Cellular Simultaneous Recurrent Networks focus on the issues of learning efficiency.
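
The non-equilibrium dynamics of K sets mentioned in the abstract can be made concrete with a small simulation. The sketch below integrates the second-order node equation commonly used for Freeman's K0 set together with his asymmetric sigmoid transfer function. It is a minimal illustration, not the parameterization studied in this paper: the rate constants a = 0.22 ms⁻¹ and b = 0.72 ms⁻¹ and the sigmoid parameter q = 5 are standard values from the K-set literature, and the names freeman_sigmoid and simulate_k0 are illustrative helpers introduced here.

```python
import numpy as np

# Commonly cited K0-set constants from the K-set literature (illustrative only).
A, B = 0.22, 0.72      # rate constants, 1/ms
Q_MAX = 5.0            # asymptotic maximum of the asymmetric sigmoid

def freeman_sigmoid(x, q=Q_MAX):
    """Freeman's asymmetric sigmoid; output is clamped to -1 for low activation."""
    x0 = np.log(1.0 - q * np.log(1.0 + 1.0 / q))   # activation at which output reaches -1
    return np.where(x > x0, q * (1.0 - np.exp(-(np.exp(x) - 1.0) / q)), -1.0)

def simulate_k0(external_input, dt=0.01):
    """Forward-Euler integration of the second-order K0 dynamics
         (1/(a*b)) * (x'' + (a + b) * x') + x = I(t),
    returning the activation trace x(t) sampled at every step."""
    x, v = 0.0, 0.0                          # activation and its first derivative
    trace = np.empty(len(external_input))
    for k, i_t in enumerate(external_input):
        acc = A * B * (i_t - x) - (A + B) * v   # x'' solved from the ODE
        v += dt * acc
        x += dt * v
        trace[k] = x
    return trace

if __name__ == "__main__":
    t = np.arange(0.0, 50.0, 0.01)                        # 50 ms at 0.01 ms resolution
    pulse = np.where((t >= 5.0) & (t < 10.0), 1.0, 0.0)   # brief square input pulse
    x = simulate_k0(pulse)
    y = freeman_sigmoid(x)
    print(f"peak activation: {x.max():.3f}, peak output: {y.max():.3f}")
```

A single K0 node behaves as a damped second-order system; the larger KII and KIII models couple many such nodes through excitatory and inhibitory connections, which is where the stability and information-integration questions discussed in the abstract arise.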