Learning visual saliency based on object's relative relationship

  • Authors:
  • Senlin Wang;Qi Zhao;Mingli Song;Jiajun Bu;Chun Chen;Dacheng Tao

  • Affiliations:
  • Senlin Wang, Mingli Song, Jiajun Bu, Chun Chen: Zhejiang Provincial Key Laboratory of Service Robot, College of Computer Science, Zhejiang University, Hangzhou, China; Qi Zhao: Department of Electrical and Computer Engineering, NUS, Singapore; Dacheng Tao: Centre for Quantum Computation and Information Systems, UTS, Australia

  • Venue:
  • ICONIP'12: Proceedings of the 19th International Conference on Neural Information Processing - Volume Part V
  • Year:
  • 2012


Abstract

As a challenging issue in both computer vision and psychological research, visual attention has aroused a wide range of discussions and studies in recent years. However, conventional computational models focus mainly on low-level information, ignoring high-level information and the interrelationships among objects. In this paper, we stress the relative relationships between high-level objects and propose a saliency model based on both low-level and high-level analysis. First, more than 50 categories of objects are selected from nearly 800 images in the MIT data set [1], and their concrete quantitative relationships are learned through detailed analysis and computation. Second, using least squares regression with constraints, we derive an optimal saliency model that produces saliency maps. Experimental results indicate that our model outperforms several state-of-the-art methods and matches human eye-tracking data more closely.
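
The abstract does not spell out the constraint set or the feature channels used in the regression. The sketch below is a minimal illustration of one plausible reading: fusing per-channel feature maps into a single saliency map via least squares with non-negativity constraints on the channel weights. The variable names and synthetic data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical setup: each column of F is one flattened feature/object map
# (e.g., low-level contrast, a face-detector response, a text-detector
# response), and y is the flattened fixation-density map for the same image.
rng = np.random.default_rng(0)
n_pixels, n_channels = 10_000, 5
F = rng.random((n_pixels, n_channels))   # stand-in for real feature maps
y = rng.random(n_pixels)                 # stand-in for eye-tracking density

# Least-squares fit with non-negative channel weights, one common way to
# realize "least squares regression with constraints" for map fusion.
res = lsq_linear(F, y, bounds=(0.0, np.inf))
w = res.x

# The fused saliency map is the weighted sum of the channel maps.
saliency = (F @ w).reshape(100, 100)
print("learned weights:", np.round(w, 3))
```

With real data, F would hold the low-level and object-level maps the paper describes and y the recorded fixation densities, so the learned weights w would encode the relative importance of each channel.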