Recurrent network with large representational capacity

  • Authors: Drazen Domijan
  • Affiliations: Department of Psychology, Faculty of Philosophy, University of Rijeka, Trg Ivana Klobucarica 1, HR-51000 Rijeka, Croatia
  • Venue: Neural Computation
  • Year: 2004

Abstract

A recurrent network is proposed with the ability to bind image features into a unified surface representation within a single layer and without capacity limitations or border effects. A group of cells belonging to the same object or surface is labeled with the same activity amplitude, while cells in different groups are kept segregated by lateral inhibition. Labeling is achieved by activity spreading through local excitatory connections. To prevent uncontrolled spreading, a separate network computes the intensity difference between neighboring locations and signals the presence of a surface boundary, which constrains local excitation. The quality of the surface representation is not compromised by self-excitation. The model is also applied to gray-level images. To remove small, noisy regions, a feedforward network is proposed that computes the size of surfaces. Size estimation is based on the difference of dendritic inhibition in lateral excitatory and inhibitory pathways, which allows the network to selectively integrate signals only from cells with the same activity amplitude. When the output of the size-estimation network is combined with the recurrent network, good segmentation results are obtained. Both networks are based on biophysically realistic mechanisms such as dendritic inhibition and multiplicative integration among different dendritic branches.
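
The following Python/NumPy sketch illustrates only the functional effect of the two stages described in the abstract: labeling by local activity spreading that is blocked where a boundary signal (a large intensity difference between neighbors) is present, and removal of small regions based on a per-surface size estimate. It does not implement the paper's biophysical equations (dendritic inhibition, multiplicative integration); the function names, the 4-neighbor flood-fill scheme, and the threshold values are illustrative assumptions.

import numpy as np
from collections import deque

def label_surfaces(image, boundary_threshold=0.2):
    # Spread a label to 4-neighbors by flood fill; spreading is blocked
    # wherever the intensity step between two pixels exceeds
    # boundary_threshold (a stand-in for the boundary-signaling network).
    # Distinct integer labels stand in for distinct activity amplitudes.
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx]:
                continue
            next_label += 1
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and abs(float(image[ny, nx]) - float(image[y, x])) < boundary_threshold):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels

def remove_small_surfaces(labels, min_size=4):
    # Size-estimation stage: suppress (set to 0) any surface whose
    # region contains fewer than min_size cells.
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_size
    keep[0] = False
    return np.where(keep[labels], labels, 0)

# Example: two uniform surfaces separated by an intensity step, plus a
# single noisy pixel that the size stage removes.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
img[0, 0] = 0.5
print(remove_small_surfaces(label_surfaces(img)))

In the model itself, segregation of different surfaces is enforced by lateral inhibition among activity amplitudes rather than by integer labels; the sketch reproduces only the resulting grouping and size-based filtering.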