Modeling of top-down influences on object-based visual attention for robots

  • Authors:
  • Yuanlong Yu;George K. I. Mann;Raymond G. Gosine

  • Affiliations:
  • Faculty of Engineering, Memorial University of Newfoundland, St. John's, NF, Canada (all authors)

  • Venue:
  • ROBIO'09: Proceedings of the 2009 International Conference on Robotics and Biomimetics
  • Year:
  • 2009

Abstract

The selectivity of the visual attention mechanism is influenced by both bottom-up competition and top-down biasing. This paper presents an object-based visual attention model that simulates top-down influences. Five components of top-down influence are modeled: learning of object representations stored in long-term memory (LTM), deduction of task-relevant feature(s), estimation of top-down biases, mediation between bottom-up and top-down processing, and object completion processing. The model has been applied to the robotic task of object detection. Experimental results in natural and cluttered scenes are presented to validate the model.
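
The abstract does not give the model's equations, but the general idea of mediating between bottom-up saliency and top-down feature biases can be illustrated with a minimal sketch. The functions, the Gaussian feature-tuning assumption, and the mixing coefficient `alpha` below are illustrative placeholders, not the authors' actual formulation.

```python
import numpy as np

def bottom_up_saliency(feature_maps):
    """Bottom-up conspicuity: normalize each feature map and average them.
    A stand-in for centre-surround contrast, not the paper's exact operator."""
    total = np.zeros_like(feature_maps[0], dtype=float)
    for fmap in feature_maps:
        rng = fmap.max() - fmap.min()
        total += (fmap - fmap.min()) / rng if rng > 0 else 0.0
    return total / len(feature_maps)

def top_down_biases(feature_maps, target_features, sigma=0.1):
    """Bias each feature map toward task-relevant target values (assumed to be
    deduced from an LTM object representation), using simple Gaussian tuning."""
    return [np.exp(-((fmap - mu) ** 2) / (2 * sigma ** 2))
            for fmap, mu in zip(feature_maps, target_features)]

def attention_map(feature_maps, target_features, alpha=0.5):
    """Mediate between bottom-up and top-down contributions with a mixing
    coefficient alpha (hypothetical; the paper's mediation rule may differ)."""
    bu = bottom_up_saliency(feature_maps)
    td = np.mean(top_down_biases(feature_maps, target_features), axis=0)
    return alpha * bu + (1 - alpha) * td

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy feature maps (e.g., hue and intensity) over a 64x64 image grid.
    features = [rng.random((64, 64)), rng.random((64, 64))]
    # Task-relevant feature values, assumed deduced from a stored object model.
    target = [0.8, 0.3]
    amap = attention_map(features, target, alpha=0.4)
    y, x = np.unravel_index(np.argmax(amap), amap.shape)
    print(f"Most attended location: ({y}, {x})")
```

In this toy setup, lowering `alpha` shifts the attention map toward locations matching the target features, which is the qualitative effect of top-down biasing described in the abstract.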