Familiarity based unified visual attention model for fast and robust object recognition

  • Authors:
  • Seungjin Lee, Kwanho Kim, Joo-Young Kim, Minsu Kim, Hoi-Jun Yoo

  • Affiliations:
  • Division of Electrical Engineering, School of Electrical Engineering & Computer Science, KAIST, 335 Gwahangno, Yuseong-gu, Daejeon 305-701, Republic of Korea (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2010

Abstract

Although visual attention models based on bottom-up saliency can speed up object recognition by predicting object locations, saliency alone cannot distinguish target objects from clutter when a scene contains multiple salient objects. We propose a top-down method that guides attention toward target objects using a metric called familiarity, in addition to bottom-up saliency. To demonstrate the effectiveness of familiarity, the unified visual attention model (UVAM), which combines top-down familiarity with bottom-up saliency, is applied to SIFT-based object recognition. The UVAM is tested on 3600 artificially generated images containing COIL-100 objects with varying amounts of clutter, and on 126 images of real scenes. Recognition times are reduced by 2.7x and 2x, respectively, with no reduction in recognition accuracy, demonstrating the effectiveness and robustness of the familiarity-based UVAM.
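
The sketch below is not the authors' implementation; it only illustrates, under assumed choices, the general idea described in the abstract: fusing a bottom-up saliency map with a top-down familiarity map into a single attention priority map, then visiting the highest-priority locations first so that an expensive recognizer (e.g., SIFT matching) is run on fewer regions. The weighted-sum fusion rule, the `alpha` weight, and the inhibition radius are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch of saliency + familiarity fusion for attention guidance.
# All fusion rules and parameter values here are assumptions for illustration.
import numpy as np


def attention_priority(saliency, familiarity, alpha=0.5):
    """Fuse normalized bottom-up saliency and top-down familiarity maps.

    `alpha` (assumed parameter) trades off familiarity against saliency.
    """
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    f = (familiarity - familiarity.min()) / (familiarity.max() - familiarity.min() + 1e-8)
    return (1.0 - alpha) * s + alpha * f


def next_fixations(priority, n_points=5, inhibit_radius=10):
    """Greedily pick the n highest-priority locations, suppressing a square
    neighborhood around each pick (a simple inhibition-of-return rule)."""
    p = priority.copy()
    picks = []
    for _ in range(n_points):
        y, x = np.unravel_index(np.argmax(p), p.shape)
        picks.append((int(y), int(x)))
        y0, y1 = max(0, y - inhibit_radius), y + inhibit_radius + 1
        x0, x1 = max(0, x - inhibit_radius), x + inhibit_radius + 1
        p[y0:y1, x0:x1] = -np.inf
    return picks


# Usage with random stand-in maps; real maps would come from a saliency model
# and a familiarity estimate over learned target objects.
rng = np.random.default_rng(0)
saliency = rng.random((120, 160))
familiarity = rng.random((120, 160))
for yx in next_fixations(attention_priority(saliency, familiarity)):
    print("attend at", yx)  # each region would then be passed to SIFT matching
```

In this reading, the speed-up reported in the abstract comes from ordering: when familiarity concentrates priority on likely target regions, the recognizer confirms the target after examining only a few fixations instead of scanning every salient region.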