A framework for moderate vocabulary semantic visual concept detection

  • Authors:
  • M. R. Naphade;Ching-Yung Lin;A. Natsev;B. L. Tseng;J. R. Smith

  • Affiliations:
  • Pervasive Media Management Group, IBM Thomas J. Watson Research Center, Hawthorne, NY, USA (all authors)

  • Venue:
  • ICME '03 Proceedings of the 2003 International Conference on Multimedia and Expo - Volume 2
  • Year:
  • 2003

Abstract

Extraction of semantic features from visual content is essential for meaningful content management in terms of filtering, searching, and retrieval. Recently, machine learning techniques have been shown to provide a computational framework for mapping low-level features to high-level semantics. In this paper, we expose these techniques to the challenge of supporting a moderately large lexicon of semantic concepts. Using the TREC 2002 benchmark corpus for training and validation, we investigate a support vector machine based learning system for modeling 34 visual concepts. The detection results show excellent performance for the set of concepts with moderately large training samples. Promising performance is also observed for concepts with few training samples.
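The framework the abstract describes, one binary SVM detector per concept in the lexicon, trained on low-level visual features, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the feature vectors, concept names, and labelings below are synthetic stand-ins, and scikit-learn's `SVC` is assumed as the SVM implementation.

```python
# Hypothetical sketch: one binary SVM per semantic concept, trained
# on low-level feature vectors, in the spirit of the paper's
# SVM-based concept-detection framework. Not the authors' code.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for low-level visual features (e.g. color,
# texture moments) of 200 video shots, 24-dimensional each.
X = rng.normal(size=(200, 24))

# Two illustrative concept labelings; a real system would carry one
# binary labeling per concept in the lexicon (34 in the paper).
concepts = {
    "outdoors": (X[:, 0] > 0).astype(int),
    "face":     (X[:, 1] + X[:, 2] > 0).astype(int),
}

# Train one SVM detector per concept (one-vs-rest style).
detectors = {
    name: SVC(kernel="rbf", probability=True).fit(X, y)
    for name, y in concepts.items()
}

# Score a new shot: each detector emits a confidence for its concept,
# which downstream filtering/search can rank or threshold.
shot = rng.normal(size=(1, 24))
scores = {name: float(clf.predict_proba(shot)[0, 1])
          for name, clf in detectors.items()}
```

Training an independent detector per concept keeps the lexicon extensible: adding a 35th concept only requires labeled examples for that concept, not retraining the others.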