In this study, we developed a human attention model for smooth human-robot interaction. The model consists of a saliency map generation module and a manipulation map generation module. The manipulation map describes top-down factors in the input image, such as the human face, hands, and gaze. To evaluate the proposed model, we applied it to a magic video and measured human gaze points while participants watched the video. The experimental results show that the proposed model explains human attention better than the saliency map alone.
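The abstract describes fusing a bottom-up saliency map with a top-down manipulation map to predict gaze. A minimal sketch of such a fusion is below; the weighted-sum combination rule, the function name `combine_attention_maps`, and the weight parameter `w_top_down` are illustrative assumptions, since the abstract does not specify how the two maps are combined.

```python
import numpy as np

def combine_attention_maps(saliency_map, manipulation_map, w_top_down=0.5):
    """Fuse a bottom-up saliency map with a top-down manipulation map.

    The weighted-sum rule is an assumption for illustration; the paper's
    abstract does not state the exact combination scheme.
    """
    def normalize(m):
        # Rescale to [0, 1] so the two cues are on a comparable scale.
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    s = normalize(saliency_map)
    t = normalize(manipulation_map)
    # w_top_down controls the influence of top-down cues (e.g., regions
    # around the face, hands, and gaze encoded in the manipulation map).
    return (1 - w_top_down) * s + w_top_down * t

# Toy example: saliency peaks top-left, while the manipulation map
# (e.g., around the magician's hands) peaks bottom-right.
sal = np.zeros((4, 4)); sal[0, 0] = 1.0
manip = np.zeros((4, 4)); manip[3, 3] = 1.0
attn = combine_attention_maps(sal, manip, w_top_down=0.7)
print(np.unravel_index(attn.argmax(), attn.shape))  # → (3, 3)
```

With `w_top_down=0.7`, the top-down cue dominates and the predicted gaze location moves to the manipulation-map peak, illustrating how top-down factors can override pure saliency.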