A novel robotic visual perception method using object-based attention

  • Authors:
  • Yuanlong Yu, George K. I. Mann, Raymond G. Gosine

  • Venue:
  • ROBIO'09: Proceedings of the 2009 International Conference on Robotics and Biomimetics
  • Year:
  • 2009

Abstract

Object-based attention theory has shown that perceptual processing selects only the task-relevant objects in the world, which are then represented for action. This paper therefore proposes a novel computational method for robotic visual perception based on the object-based attention mechanism. The method comprises three modules: pre-attentive processing, attentional selection, and perception learning. The visual scene is first pre-attentively segmented into discrete proto-objects, and the gist of the scene is identified as well. The attentional selection module simulates two types of modulation: bottom-up competition and top-down biasing. Bottom-up competition is evaluated by center-surround contrast; given the task or scene category, the task-relevant object and one of its task-relevant features are determined from perception control rules and then used to evaluate top-down biasing. Following attentional selection, the attended object is fed into the perception learning module to update the existing object representations and perception control rules in long-term memory. An object representation consisting of between-object and within-object codings is built using probabilistic neural networks, and an associative memory based on a Bayesian network is built to model the perception control rules. Two types of robotic tasks are used to test the proposed model: task-specific object detection and landmark detection.
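
As a rough illustration of the bottom-up competition step, the sketch below computes a center-surround contrast map via a difference of Gaussians and aggregates it over pre-segmented proto-object masks. The Gaussian scales, the use of a single intensity channel, and the mean-contrast aggregation are assumptions made for illustration, not the paper's exact formulation; in the full model these bottom-up scores would be combined with the top-down bias derived from the perception control rules.

```python
# Minimal sketch of center-surround contrast as a bottom-up competition cue.
# sigma_center, sigma_surround, and the mean-per-mask aggregation are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_contrast(intensity, sigma_center=2.0, sigma_surround=8.0):
    """Per-pixel contrast: |fine-scale (center) response - coarse-scale (surround) response|."""
    img = intensity.astype(float)
    center = gaussian_filter(img, sigma_center)      # local "center" estimate
    surround = gaussian_filter(img, sigma_surround)  # neighborhood "surround" estimate
    contrast = np.abs(center - surround)
    # Normalize to [0, 1] so the map can later be mixed with a top-down bias term.
    return contrast / (contrast.max() + 1e-8)

def proto_object_saliency(contrast_map, proto_object_masks):
    """Score each proto-object by the mean contrast inside its mask (assumed aggregation)."""
    return [float(contrast_map[mask].mean()) for mask in proto_object_masks]

if __name__ == "__main__":
    img = np.random.rand(120, 160)                   # stand-in for an intensity image
    masks = [np.zeros_like(img, dtype=bool) for _ in range(2)]
    masks[0][20:40, 30:60] = True                    # two hypothetical proto-object regions
    masks[1][70:100, 90:140] = True
    scores = proto_object_saliency(center_surround_contrast(img), masks)
    print("bottom-up scores per proto-object:", scores)
```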