An exposure invariant video retrieval method for eyetap devices

  • Authors:
  • Lekha Chaisorn; Corey Manders

  • Affiliations:
  • Institute for Infocomm Research, Connexis, Singapore (both authors)

  • Venue:
  • VRCAI '08: Proceedings of the 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry
  • Year:
  • 2008


Abstract

In the field of mediated reality, significant research has been undertaken on the development and use of Eyetap devices. With such a device, it is possible to continuously record a portion of the image entering one or both eyes. However, when using such a device, storage becomes a problem. Fortunately, this may be addressed both by efficient video coding and by the fact that information storage is becoming increasingly large and inexpensive. A potentially greater problem is that, given such a large and ever-growing image and video collection, retrieval of images and videos becomes a demanding task. In addition, as image and video libraries grow, detection of modified copies increases in importance. In this paper, we present a framework for using an Eyetap device to retrieve images and videos that were themselves captured by Eyetap devices or by other means. We employ an ordinal-based method with an enhanced similarity measure to handle image and video indexing and retrieval. Because our Eyetap video collection is still quite limited, we tested our system on 50 videos obtained from TRECVID 2006. The experimental results demonstrate that our system is efficient and robust, and thus appropriate for the Eyetap applications we have specified.
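
The abstract does not spell out the ordinal signature or the enhanced similarity measure, but the general idea behind ordinal block measures can be sketched briefly. In the minimal sketch below, each frame is divided into a small grid of blocks, the blocks are ranked by mean intensity, and two frames are compared by a normalized rank-difference distance. The grid size, the footrule-style distance, and the function names `ordinal_signature` and `ordinal_distance` are illustrative assumptions, not the paper's exact formulation; the point is only that ranks are unchanged by gain/offset (exposure-like) intensity changes, which is the property an exposure-invariant retrieval scheme exploits.

```python
# Minimal sketch of an ordinal (rank-based) frame signature -- an assumed
# illustration of the general technique, not the authors' exact method.
import numpy as np


def ordinal_signature(frame: np.ndarray, grid=(3, 3)) -> np.ndarray:
    """Rank the mean intensities of a grid of blocks covering the frame.

    A gain/offset (exposure-like) change transforms all block means the same
    way, so their rank ordering -- the signature -- is unchanged.
    """
    rows, cols = grid
    gray = frame if frame.ndim == 2 else frame.mean(axis=2)
    h, w = gray.shape
    means = np.empty(rows * cols)
    for r in range(rows):
        for c in range(cols):
            block = gray[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            means[r * cols + c] = block.mean()
    # argsort of argsort gives each block's rank by mean intensity
    return np.argsort(np.argsort(means))


def ordinal_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Normalized Spearman-footrule distance between two rank vectors
    (0.0 = identical block ordering, 1.0 = fully reversed)."""
    n = sig_a.size
    return float(np.abs(sig_a - sig_b).sum()) / ((n * n) // 2)


# Example: a frame and a dimmed copy (gain/offset change) have distance 0.
rng = np.random.default_rng(0)
frame = rng.random((120, 160))
dimmed = 0.6 * frame + 0.05  # exposure-like intensity change
print(ordinal_distance(ordinal_signature(frame), ordinal_signature(dimmed)))  # 0.0
```

For video retrieval, one would presumably extend such per-frame signatures to clips, for example by sampling frames and aggregating frame-level distances; the aggregation strategy here is left unspecified, as the paper's similarity measure is its own contribution.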