Efficient large-scale image data set exploration: visual concept network and image summarization

  • Authors:
  • Chunlei Yang; Xiaoyi Feng; Jinye Peng; Jianping Fan

  • Affiliations:
  • School of Electronics and Information, Northwestern Polytechnical University, Xi'an, P.R.C. and Dept. of Computer Science, UNC-Charlotte, Charlotte, NC; School of Electronics and Information, Northwestern Polytechnical University, Xi'an, P.R.C.; School of Electronics and Information, Northwestern Polytechnical University, Xi'an, P.R.C.; School of Electronics and Information, Northwestern Polytechnical University, Xi'an, P.R.C. and Dept. of Computer Science, UNC-Charlotte, Charlotte, NC

  • Venue:
  • MMM'11 Proceedings of the 17th international conference on Advances in multimedia modeling - Volume Part II
  • Year:
  • 2011


Abstract

As large-scale online image collections become available, it is important to construct a framework for efficient data exploration. In this paper, we build exploration models based on two considerations: inter-concept visual correlation and intra-concept image summarization. For inter-concept visual correlation, we have developed an automatic algorithm to generate a visual concept network that is characterized by the visual correlations between image concept pairs. To incorporate reliable inter-concept correlation contexts, multiple kernels are combined and a kernel canonical correlation analysis (KCCA) algorithm is used to characterize the diverse visual similarity contexts between the image concepts. For intra-concept image summarization, we propose a greedy algorithm that sequentially picks the images that best represent a concept's image set. The quality score for each candidate summary is computed from the clustering result and jointly considers relevancy, orthogonality, and uniformity terms. Visualization techniques are developed to assist users in assessing the coherence between concept pairs and investigating the visual properties within each concept. We have conducted experiments and user studies to evaluate both algorithms, observed good results, and received positive feedback.
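
To make the inter-concept step concrete, the sketch below combines several base kernels into one mixture kernel and computes a leading canonical correlation with a standard regularized KCCA reduction. This is a minimal illustration, not the paper's implementation: the kernel weights, the regularization constant, and the way concept pairs are mapped onto the two KCCA views are assumptions, since the abstract does not specify them.

```python
import numpy as np

def combine_kernels(kernels, weights=None):
    """Combine multiple base kernel matrices (e.g., from different visual
    features) into one mixture kernel. Uniform weights are a placeholder,
    not the paper's learned combination."""
    weights = weights if weights is not None else [1.0 / len(kernels)] * len(kernels)
    return sum(w * K for w, K in zip(weights, kernels))

def kcca_correlation(Kx, Ky, reg=1e-3):
    """Leading canonical correlation between two (centered) kernel matrices
    over the same n paired samples, using a common regularized KCCA
    reduction: (Kx + reg*I)^-1 Ky (Ky + reg*I)^-1 Kx a = rho^2 a.
    The returned value in [0, 1] could serve as an edge weight in a
    visual concept network."""
    n = Kx.shape[0]
    I = np.eye(n)
    M = np.linalg.solve(Kx + reg * I, Ky) @ np.linalg.solve(Ky + reg * I, Kx)
    rho_sq = np.max(np.linalg.eigvals(M).real)
    return float(np.sqrt(np.clip(rho_sq, 0.0, 1.0)))
```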
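For the intra-concept step, the following sketch shows one way a greedy summarization loop with relevancy, orthogonality, and uniformity terms could look. The three terms are named in the abstract, but their exact definitions and weights are not; the distance-based stand-ins below (and the helper `greedy_summarize`) are illustrative assumptions only.

```python
import numpy as np

def greedy_summarize(images, labels, centers, k,
                     w_rel=1.0, w_orth=1.0, w_uni=1.0):
    """Greedily pick k summary images for one concept.

    images  : (n, d) feature vectors of the concept's images
    labels  : (n,)   cluster label of each image (from a prior clustering step)
    centers : (c, d) cluster centers
    The relevancy / orthogonality / uniformity terms are plausible stand-ins
    for the paper's quality score, not its exact formulation."""
    selected = []                              # indices of chosen summary images
    picks_per_cluster = np.zeros(len(centers))

    for _ in range(k):
        best_idx, best_score = None, -np.inf
        for i in range(len(images)):
            if i in selected:
                continue
            x, c = images[i], labels[i]
            # Relevancy: closeness to the image's own cluster center.
            rel = 1.0 / (1.0 + np.linalg.norm(x - centers[c]))
            # Orthogonality: distance to the nearest already-selected image.
            orth = min((np.linalg.norm(x - images[j]) for j in selected),
                       default=1.0)
            # Uniformity: penalize clusters that are already well represented.
            uni = 1.0 / (1.0 + picks_per_cluster[c])
            score = w_rel * rel + w_orth * orth + w_uni * uni
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
        picks_per_cluster[labels[best_idx]] += 1

    return selected
```

In practice, `labels` and `centers` could come from any clustering of the concept's images (e.g., k-means on the visual features), matching the abstract's statement that the quality score is computed based on the clustering result.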