Gaze movement inference for user adapted image annotation and retrieval

  • Authors:
  • S. Navid H. Haji Mirza, Michael Proulx, Ebroul Izquierdo

  • Affiliations:
  • Queen Mary University of London, London, United Kingdom (all authors)

  • Venue:
  • SBNMA '11: Proceedings of the 2011 ACM Workshop on Social and Behavioural Networked Media Access
  • Year:
  • 2011


Abstract

In media personalisation, the media provider needs to receive feedback from its users in order to adapt the media content used for interaction. At present, this feedback is limited to mouse clicks and keyboard entries. This paper explores possible ways to include a user's gaze movements as a form of feedback for media personalisation and adaptation. Features are extracted from the gaze trajectory of users while they search an image database for a Target Concept (TC). These features are used to measure a user's visual attention to every image that appears on the screen, called the user interest level (UIL). Because different people react differently to the same content, a new adapted processing interface is built automatically for every new user. On average, our interface detected 10% of the images belonging to the TC class with no error, and identified 40% of them with only 20% error. We show in this paper that gaze movement is a reliable form of feedback for measuring a user's interest in images, which helps to personalise image annotation and retrieval.
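
The abstract describes extracting features from a user's gaze trajectory, mapping them to a per-image user interest level (UIL), and adapting the decision boundary per user. The sketch below illustrates that general idea only; it is not the authors' implementation. Using dwell time as the sole gaze feature and adapting a threshold from a few explicitly labelled images are both assumptions made for illustration.

```python
from dataclasses import dataclass
from collections import defaultdict


@dataclass
class GazeSample:
    t_ms: float      # timestamp in milliseconds
    image_id: str    # image under the gaze point at time t_ms


def interest_levels(samples):
    """Aggregate dwell time per image and normalise to [0, 1] as a UIL score.

    Dwell time as a proxy for visual attention is an assumption;
    the paper extracts richer features from the gaze trajectory.
    """
    dwell = defaultdict(float)
    for prev, cur in zip(samples, samples[1:]):
        dwell[prev.image_id] += cur.t_ms - prev.t_ms
    total = sum(dwell.values()) or 1.0
    return {img: d / total for img, d in dwell.items()}


def adapt_threshold(uil, labelled):
    """Pick a per-user UIL threshold separating TC from non-TC images,
    given a few images the user has explicitly labelled.
    (A hypothetical stand-in for the paper's automatically adapted interface.)
    """
    pos = [uil[i] for i, is_tc in labelled.items() if is_tc and i in uil]
    neg = [uil[i] for i, is_tc in labelled.items() if not is_tc and i in uil]
    if not pos or not neg:
        return 0.5  # fall back to a neutral threshold
    return (min(pos) + max(neg)) / 2.0


# Usage: gaze dwells mostly on img_a, briefly on img_b, then classify.
samples = [GazeSample(0, "img_a"), GazeSample(800, "img_a"),
           GazeSample(1600, "img_b"), GazeSample(1900, "img_b")]
uil = interest_levels(samples)
thr = adapt_threshold(uil, {"img_a": True, "img_b": False})
predicted_tc = [img for img, score in uil.items() if score >= thr]
```

The per-user threshold mirrors the paper's observation that different users react differently to the same content: rather than one global decision rule, each new user gets a boundary fitted to their own gaze behaviour.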