Multi-Concept Multi-Modality Active Learning for Interactive Video Annotation

  • Authors:
  • Meng Wang; Xian-Sheng Hua; Yan Song; Jinhui Tang; Li-Rong Dai

  • Affiliations:
  • University of Science and Technology of China; Microsoft Research Asia, China; University of Science and Technology of China; University of Science and Technology of China; University of Science and Technology of China

  • Venue:
  • ICSC '07 Proceedings of the International Conference on Semantic Computing
  • Year:
  • 2007

Abstract

Active learning methods have been widely applied to reduce human labeling effort in multimedia annotation tasks. However, traditional methods usually annotate multiple concepts sequentially, i.e., each concept is exhaustively annotated before proceeding to the next, without taking the learnabilities of different concepts into consideration. Furthermore, most of these methods apply only a single modality. This paper presents a novel multi-concept multi-modality active learning method that annotates multiple concepts interchangeably in a multi-modality context. It iteratively selects a concept and a batch of unlabeled samples, which are then annotated with the selected concept. After that, graph-based semi-supervised learning is conducted on each modality for the selected concept. The proposed method takes into account both the learnabilities of different concepts and the potentials of different modalities. Experimental results on the TRECVID 2005 benchmark demonstrate its effectiveness and efficiency.
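
The abstract only outlines the procedure, so the following Python sketch is a loose reconstruction under stated assumptions, not the authors' algorithm: label_propagation implements Zhou et al.-style graph-based semi-supervised learning as a stand-in for the paper's learner, and the concept/sample selection uses a simple uncertainty heuristic in place of the paper's learnability and modality-potential criteria. All names (mcmm_active_learning, label_propagation, oracle) and parameters (sigma, alpha, batch_size, rounds) are illustrative.

```python
import numpy as np

def label_propagation(X, y, sigma=1.0, alpha=0.99, n_iter=50):
    """Graph-based semi-supervised learning on one modality
    (Zhou et al.-style label propagation, used here as a stand-in).
    y holds +1/-1 for labeled samples and 0 for unlabeled ones."""
    # Gaussian-kernel affinity graph with a zeroed diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalized smoothing operator S = D^{-1/2} W D^{-1/2}.
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D))
    F = y.astype(float)
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * y
    return F  # soft relevance scores, roughly in [-1, 1]

def mcmm_active_learning(modalities, labels, oracle, batch_size=4, rounds=5):
    """Hypothetical multi-concept multi-modality loop: each round picks
    one concept and one batch of unlabeled samples, queries the oracle,
    then re-runs graph-based SSL on every modality for that concept."""
    n, n_concepts = modalities[0].shape[0], labels.shape[1]
    for _ in range(rounds):
        # Score every sample for every concept, fusing modalities by
        # simple averaging (the paper learns fusion instead).
        scores = np.zeros((n, n_concepts))
        for X in modalities:
            for c in range(n_concepts):
                scores[:, c] += label_propagation(X, labels[:, c])
        scores /= len(modalities)
        # Pick the concept whose unlabeled pool is most uncertain --
        # a crude stand-in for the paper's concept-learnability criterion.
        unlabeled = labels == 0
        uncertainty = np.where(unlabeled, 1.0 - np.abs(scores), -np.inf)
        concept = int(uncertainty.max(axis=0).argmax())
        # Query the batch_size most uncertain samples for that concept.
        batch = np.argsort(-uncertainty[:, concept])[:batch_size]
        for i in batch:
            labels[i, concept] = oracle(i, concept)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.choice([-1, 1], size=(40, 3))   # hypothetical ground truth
    visual = rng.normal(size=(40, 8))           # stand-in visual features
    audio = rng.normal(size=(40, 5))            # stand-in audio features
    labels = np.zeros((40, 3))
    labels[:4] = truth[:4]                      # a few seed annotations
    labels = mcmm_active_learning([visual, audio], labels,
                                  oracle=lambda i, c: truth[i, c])
    print(f"{int((labels != 0).sum())} labels acquired")
```

The key design point the sketch preserves is the interleaving: rather than exhausting one concept before starting the next, each iteration re-decides which concept is currently most worth annotating, so annotation effort follows where it helps most across concepts and modalities.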