Implicit visual concept modeling in image / video annotation

  • Authors:
  • Klimis Ntalianis;Anastasios Doulamis;Nicolas Tsapatsoulis

  • Affiliations:
  • Cyprus University of Technology, Limassol, Cyprus;Technical University of Crete, Chania 73100, Greece;Cyprus University of Technology, Limassol, Cyprus

  • Venue:
  • Proceedings of the First ACM International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams
  • Year:
  • 2010

Abstract

In this paper a novel approach for automatically annotating image databases is proposed. In contrast to most current approaches, which rely solely on spatial content analysis, the proposed method combines implicit feedback information with visual concept models to annotate images semantically. The method can be easily adopted by any multimedia search engine, providing an intelligent way to annotate even completely non-annotated content. The proposed approach currently yields promising results in limited-content environments, and it is expected to add significant value to the billions of non-annotated images on the Web. Furthermore, expert annotators can gain important knowledge about new user trends, language idioms and styles of searching.
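
As a rough illustration of the idea described in the abstract (not the paper's actual algorithm, which is not specified here), the sketch below fuses per-concept scores derived from implicit feedback, e.g. normalised click-through counts, with the confidences of visual concept detectors. The linear weighting, the score dictionaries and the acceptance threshold are all illustrative assumptions.

    """Hypothetical sketch: fuse implicit-feedback evidence with visual
    concept model scores to propose annotations for a single image.
    All names, weights and thresholds below are assumptions made for
    illustration, not the method of the paper."""

    def annotate_image(feedback_scores, concept_scores, alpha=0.5, threshold=0.6):
        """Combine two evidence sources per candidate concept.

        feedback_scores: concept -> score from implicit feedback
            (e.g. normalised click-through counts for queries that
            contain the concept term) -- assumed input format.
        concept_scores: concept -> confidence of a visual concept
            detector, assumed to lie in [0, 1].
        alpha: weight given to the implicit-feedback evidence.
        threshold: minimum fused score for a concept to be kept.
        """
        annotations = {}
        for concept in set(feedback_scores) | set(concept_scores):
            fused = (alpha * feedback_scores.get(concept, 0.0)
                     + (1.0 - alpha) * concept_scores.get(concept, 0.0))
            if fused >= threshold:
                annotations[concept] = round(fused, 3)
        # Return the surviving concepts, highest fused score first.
        return dict(sorted(annotations.items(), key=lambda kv: -kv[1]))

    if __name__ == "__main__":
        # Toy example for one non-annotated image.
        feedback = {"beach": 0.9, "sunset": 0.4}    # from user search behaviour
        visual = {"beach": 0.7, "palm tree": 0.8}   # from concept detectors
        print(annotate_image(feedback, visual))     # prints {'beach': 0.8}

In this toy run only "beach" is confirmed by both evidence sources and passes the threshold, which mirrors the abstract's point that implicit feedback and visual concept models are complementary rather than interchangeable.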