Generating Self-organized Saliency Map Based on Color and Motion

  • Authors:
  • Satoru Morita

  • Affiliations:
  • Faculty of Engineering, Yamaguchi University

  • Venue:
  • ICONIP '09 Proceedings of the 16th International Conference on Neural Information Processing: Part II
  • Year:
  • 2009

Abstract

The computational theory of generating saliency maps from feature maps produced by a bottom-up approach with general-purpose filters, such as the Fourier transform, has been discussed previously. We propose a new method that generates a saliency map using self-organized filters rather than such general filters. We extend ICA basis-function estimation to non-uniformly positioned photoreceptor cells, which receive the hue image, the saturation image, the current intensity image, and the previous intensity image, so as to capture color and motion information. The model is further extended so that filters with receptive fields are arranged non-uniformly, as in human foveated vision. This paper thus proposes an early-vision model in which photoreceptors receive both color and motion. We demonstrate the effectiveness of the model by applying it to real images.
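
The sketch below is not the authors' implementation; it is a minimal illustration of the general idea of learning "self-organized" filters with ICA from the four input channels named in the abstract (hue, saturation, current intensity, previous intensity) and summing absolute filter responses into a rough saliency map. It assumes OpenCV and scikit-learn's FastICA, samples patches on a regular grid rather than the paper's non-uniform foveated photoreceptor layout, and the function names `saliency_map` and `sample_patches` are hypothetical.

```python
import cv2                       # assumed available for color conversion / filtering
import numpy as np
from sklearn.decomposition import FastICA


def sample_patches(channels, patch=8, n_samples=5000, rng=None):
    """Draw random square patches from an (H, W, C) channel stack."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w, _ = channels.shape
    ys = rng.integers(0, h - patch, n_samples)
    xs = rng.integers(0, w - patch, n_samples)
    return np.stack([channels[y:y + patch, x:x + patch].ravel()
                     for y, x in zip(ys, xs)])


def saliency_map(frame_prev, frame_curr, patch=8, n_filters=16):
    """Coarse saliency map for frame_curr given the previous frame (illustrative only)."""
    hsv = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2HSV).astype(np.float32)
    prev_gray = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Four channels: hue, saturation, current intensity, previous intensity.
    stack = np.dstack([hsv[..., 0], hsv[..., 1], hsv[..., 2], prev_gray])
    stack = (stack - stack.mean()) / (stack.std() + 1e-6)

    # Learn ICA basis functions ("self-organized filters") from sampled patches.
    X = sample_patches(stack, patch)
    ica = FastICA(n_components=n_filters, random_state=0, max_iter=500)
    ica.fit(X)
    filters = ica.components_.reshape(n_filters, patch, patch, 4)

    # Saliency = sum of absolute responses of all learned filters over all channels.
    h, w, _ = stack.shape
    sal = np.zeros((h, w), dtype=np.float32)
    for f in filters:
        for c in range(4):
            resp = cv2.filter2D(stack[..., c], -1, f[..., c].astype(np.float32))
            sal += np.abs(resp)
    return sal / (sal.max() + 1e-6)
```

As a usage example, calling `saliency_map(prev_frame, curr_frame)` on two consecutive BGR video frames returns a map normalized to [0, 1]; regions whose color or frame-to-frame intensity change is poorly predicted by the learned filters respond strongly and appear salient.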