Actions in still web images: visualization, detection and retrieval

  • Authors:
  • Piji Li; Jun Ma; Shuai Gao

  • Affiliations:
  • School of Computer Science & Technology, Shandong University, Jinan, China (all authors)

  • Venue:
  • WAIM'11: Proceedings of the 12th International Conference on Web-Age Information Management
  • Year:
  • 2011

Abstract

We describe a framework for retrieving human actions in still web images by verb queries, for instance "phoning". First, we build a group of visually discriminative instances for each action class, called "Exemplarlets". We then employ Multiple Kernel Learning (MKL) to learn an optimal combination of histogram intersection kernels, each of which captures a state-of-the-art feature channel; our features include the distribution of edges, dense visual words, and feature descriptors at different levels of a spatial pyramid. For a new image, we detect a hot-region using a sliding-window detector learned via MKL; the hot-region can imply latent actions in the image. Once the hot-region has been detected, we build an inverted index along the visual search path, which we call the Visual Inverted Index (VII). Finally, by fusing the visual and text search paths, we obtain accurate results relevant to either textual or visual information. We report both detection and retrieval results on our newly collected dataset of six actions and demonstrate improved performance over existing methods.
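To make the kernel machinery concrete: the per-channel similarity is a histogram intersection kernel, and MKL learns a weighted combination across channels. The sketch below is a minimal illustration, not the authors' implementation; the channel contents and the fixed weights betas are hypothetical stand-ins for the weights MKL would learn.

    import numpy as np

    def histogram_intersection(h1, h2):
        # Histogram intersection kernel: the sum of element-wise
        # minima of two nonnegative feature histograms.
        return float(np.minimum(h1, h2).sum())

    def combined_kernel(channels_x, channels_y, betas):
        # MKL-style combination: a weighted sum of per-channel
        # histogram intersection kernels. The weights are fixed
        # here; in the paper they are learned via MKL.
        return sum(b * histogram_intersection(hx, hy)
                   for b, hx, hy in zip(betas, channels_x, channels_y))

    # Toy usage with two hypothetical channels per image
    # (an edge histogram and a visual-word histogram):
    x = [np.array([0.2, 0.5, 0.3]), np.array([3.0, 1.0, 0.0, 2.0])]
    y = [np.array([0.1, 0.6, 0.3]), np.array([2.0, 2.0, 1.0, 1.0])]
    print(combined_kernel(x, y, betas=[0.7, 0.3]))

The Visual Inverted Index can likewise be pictured as an ordinary inverted index keyed by quantized visual words rather than text terms. A minimal sketch, assuming each detected hot-region is summarized by a set of visual-word IDs (all names here are hypothetical):

    from collections import defaultdict

    def build_vii(region_words):
        # region_words: dict mapping image_id -> iterable of
        # visual-word IDs extracted from that image's hot-region.
        vii = defaultdict(set)
        for image_id, words in region_words.items():
            for w in words:
                vii[w].add(image_id)
        return vii

    def query_vii(vii, query_words):
        # Return images sharing at least one visual word with the query.
        hits = set()
        for w in query_words:
            hits |= vii.get(w, set())
        return hits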