On cross-language image annotations

  • Authors:
  • Xiaoguang Rui; Nenghai Yu; Mingjing Li; Lei Wu

  • Affiliations:
  • MOE-Microsoft Key Laboratory of Multimedia Computing and Communication, Dept. of EEIS, University of Science and Technology of China, Hefei, China (all authors)

  • Venue:
  • ICME'09: Proceedings of the 2009 IEEE International Conference on Multimedia and Expo
  • Year:
  • 2009

Abstract

Automatic annotation of digital pictures is a key technology for managing and retrieving images from large collections. Typical algorithms address only monolingual image annotation. In this paper, we propose a framework for multilingual image annotation, which annotates images in multiple languages. The framework not only benefits users with different native languages but also produces more accurate annotations. Annotation proceeds in two stages: parallel monolingual image annotation, followed by fusion of the annotation results across languages. In the first stage, candidate annotations for each language are extracted by leveraging a large-scale multilingual web image database. Because the candidate annotations are incomplete and inaccurate, we propose a multilingual annotation fusion (MAF) algorithm. By modeling the candidate annotations of all languages as an n-partite graph, MAF refines and re-ranks the multilingual annotations. Finally, the annotations with the highest ranking values in each language are selected and translated as the result. Experimental results on English-Chinese image annotation demonstrate the effectiveness of the proposed framework.
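The fusion stage described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): candidate tags for each language form one partition of an n-partite graph, cross-language edges carry an assumed translation-similarity weight, and scores are propagated across partitions in a random-walk style before re-ranking. All function names, the `alpha` damping parameter, and the toy similarity function are assumptions made for the example.

```python
# Hypothetical sketch of n-partite annotation fusion (assumed design, not the
# paper's actual MAF algorithm). Each language's candidate tags are one node
# partition; edges across partitions are weighted by a translation similarity.

def fuse_annotations(candidates, similarity, iterations=10, alpha=0.85):
    """candidates: {lang: {tag: initial_score}}
    similarity: (lang_a, tag_a, lang_b, tag_b) -> weight in [0, 1]
    Returns re-ranked, per-language-normalized scores."""
    scores = {lang: dict(tags) for lang, tags in candidates.items()}
    for _ in range(iterations):
        new_scores = {}
        for lang, tags in candidates.items():
            new_scores[lang] = {}
            for tag, init in tags.items():
                # Cross-language support: weighted sum of scores of tags
                # in every *other* language partition.
                support = sum(
                    similarity(lang, tag, other, t) * scores[other][t]
                    for other in candidates if other != lang
                    for t in candidates[other]
                )
                # Blend the monolingual score with cross-language support.
                new_scores[lang][tag] = (1 - alpha) * init + alpha * support
        scores = new_scores
    # Normalize per language so the final rankings are comparable.
    for lang, tags in scores.items():
        total = sum(tags.values()) or 1.0
        scores[lang] = {t: s / total for t, s in tags.items()}
    return scores


# Toy English-Chinese example with a hypothetical dictionary-based similarity:
# tags confirmed across languages should rise in the ranking.
candidates = {
    "en": {"tiger": 0.6, "grass": 0.4},
    "zh": {"老虎": 0.7, "天空": 0.3},
}

def sim(lang_a, tag_a, lang_b, tag_b):
    translation_pairs = {("tiger", "老虎"), ("老虎", "tiger")}
    return 1.0 if (tag_a, tag_b) in translation_pairs else 0.0

ranked = fuse_annotations(candidates, sim)
```

In this toy run, "tiger" and "老虎" mutually reinforce each other through their cross-language edge, so both outrank the unsupported candidates in their respective languages, which mirrors the intuition that agreement across languages signals a correct annotation.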