Image annotation through gaming (TAG4FUN)

  • Authors:
  • L. Seneviratne; E. Izquierdo

  • Affiliations:
  • Multimedia and Vision Research Group, School of Electronic Engineering and Computer Science, Queen Mary, University of London; Multimedia and Vision Research Group, School of Electronic Engineering and Computer Science, Queen Mary, University of London

  • Venue:
  • DSP '09: Proceedings of the 16th International Conference on Digital Signal Processing
  • Year:
  • 2009

Abstract

This paper introduces a new technique for image annotation that exploits the social aspects of human-based computation. The proposed approach harnesses what millions of single, online and cooperative gamers, in some cases gaming enthusiasts, are already keen to do, and directs that effort towards the challenging image annotation task. It deviates from the conventional content-based image retrieval (CBIR) paradigm favoured by the research community for semantic annotation and tagging of multimedia content. Instead, it focuses on the social aspects of gaming and the use of humans in a widely distributed fashion through a process of human-based computation, motivating people to tag images while entertaining themselves. To address the key issue of label accuracy, a combination of computer vision techniques, machine learning and game strategies is used.
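
As an illustration of how tags collected from gamers might be validated, the sketch below shows a simple agreement-based aggregation scheme in Python. The function name, submission format and agreement threshold are assumptions made for illustration; the abstract does not specify how the paper's combination of computer vision, machine learning and game strategies actually filters labels.

```python
from collections import defaultdict

# Hypothetical sketch: accept a tag for an image only when enough
# independent players have proposed it (agreement-based validation,
# similar in spirit to games-with-a-purpose such as the ESP Game).
# This is NOT the paper's method, just an illustrative baseline.

def aggregate_tags(submissions, min_agreement=3):
    """submissions: iterable of (player_id, image_id, tag) tuples.
    Returns {image_id: [accepted tags]}, where a tag is accepted only
    if at least `min_agreement` distinct players proposed it."""
    votes = defaultdict(set)  # (image_id, normalized tag) -> set of players
    for player_id, image_id, tag in submissions:
        votes[(image_id, tag.strip().lower())].add(player_id)

    accepted = defaultdict(list)
    for (image_id, tag), players in votes.items():
        if len(players) >= min_agreement:
            accepted[image_id].append(tag)
    return dict(accepted)


if __name__ == "__main__":
    demo = [
        ("p1", "img42", "beach"), ("p2", "img42", "beach"),
        ("p3", "img42", "Beach"), ("p4", "img42", "sunset"),
    ]
    print(aggregate_tags(demo, min_agreement=3))
    # {'img42': ['beach']}  -- 'sunset' lacks sufficient agreement
```

In a full system such an agreement check would typically be complemented by automatic cues (e.g. visual similarity between images sharing a tag) to catch colluding or careless players, which is the kind of role the computer vision and machine learning components mentioned in the abstract would play.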