iLike: integrating visual and textual features for vertical search

  • Authors:
  • Yuxin Chen (University of Kansas, Lawrence, KS, USA); Nenghai Yu (University of Science and Technology of China, Hefei, China); Bo Luo (University of Kansas, Lawrence, KS, USA); Xue-wen Chen (University of Kansas, Lawrence, KS, USA)

  • Venue:
  • Proceedings of the international conference on Multimedia
  • Year:
  • 2010

Abstract

Content-based image search on the Internet is a challenging problem, mostly due to the semantic gap between low-level visual features and high-level content, as well as the heavy computation incurred by the huge number of images and their high-dimensional features. In this paper, we present iLike, a new approach that combines textual features from web pages with visual features from image content for better image search in a vertical search engine. We tackle the first problem by capturing the meaning of each text term in the visual feature space and re-weighting visual features according to their significance to the query content. Our experimental results in product search for apparel and accessories demonstrate the effectiveness of iLike and its capability of bridging the semantic gap between visual features and abstract concepts.
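The abstract does not specify how iLike computes per-term feature weights, but the core idea of capturing a text term's meaning in visual feature space and re-weighting distances accordingly can be sketched. The snippet below is a minimal illustration under our own assumptions: the function names (`term_feature_weights`, `weighted_distance`) and the variance-ratio heuristic (a term is "significant" for a visual dimension when images tagged with that term cluster tightly on it relative to the whole collection) are illustrative, not the paper's actual method.

```python
import numpy as np

def term_feature_weights(features, contains_term):
    """Estimate per-dimension significance of a text term in visual feature space.

    features      : (n_images, n_dims) array of visual features.
    contains_term : boolean mask marking images whose surrounding text
                    contains the term.

    Heuristic (our assumption): a dimension matters for the term when the
    term's images have low variance on it compared to the global variance.
    """
    term_var = features[contains_term].var(axis=0) + 1e-9
    global_var = features.var(axis=0) + 1e-9
    weights = global_var / term_var          # >1 means the term constrains this dimension
    return weights / weights.sum()           # normalize to a weight distribution

def weighted_distance(query_feat, image_feat, weights):
    """Distance between two images, re-weighted by the query term's weights."""
    return np.sqrt(np.sum(weights * (query_feat - image_feat) ** 2))
```

In a search pipeline, weights would be derived from the query's text terms and then applied when ranking candidate images by visual similarity, so that dimensions irrelevant to the query contribute little to the distance.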