Generating visual concept network from large-scale weakly-tagged images

  • Authors:
  • Chunlei Yang, Hangzai Luo, Jianping Fan

  • Affiliations:
  • Department of Computer Science, UNC-Charlotte, Charlotte, NC; Software Engineering Institute, East China Normal University, Shanghai, China; Department of Computer Science, UNC-Charlotte, Charlotte, NC

  • Venue:
  • MMM'10: Proceedings of the 16th International Conference on Advances in Multimedia Modeling
  • Year:
  • 2010

Abstract

As large-scale collections of online images become available, it is attractive to use a visual concept network for image summarization, organization, and exploration. In this paper, we develop an automatic algorithm for generating a visual concept network by determining the diverse visual similarity contexts between image concepts. To learn the inter-concept visual similarity contexts more reliably, images with diverse visual properties are crawled from multiple sources, and multiple kernels are combined both to characterize the diverse visual similarity contexts between images and to handle sparse image distributions more effectively in the high-dimensional multi-modal feature space. Kernel canonical correlation analysis (KCCA) is then used to characterize the diverse inter-concept visual similarity contexts more accurately, so that the resulting visual concept network coheres better with human perception. A similarity-preserving visual concept network visualization technique is developed to help users assess the coherence between their perceptions and the inter-concept visual similarity contexts determined by our algorithm. Experiments on large-scale image collections show very good results.
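
The abstract names two building blocks: combining multiple kernels over multi-modal image features, and using KCCA to score visual similarity between two kernel views. The sketch below is a minimal illustration of both ideas under stated assumptions, not the paper's actual formulation: the feature modalities, the fixed kernel weights, the RBF kernel choice, and the use of the first canonical correlation as the similarity score are all assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) kernel matrix over the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def combined_kernel(modalities, gammas, weights):
    """Weighted sum of per-modality RBF kernels: a simple form of
    multiple-kernel combination. Weights are fixed here; the paper's
    combination scheme may differ."""
    n = modalities[0].shape[0]
    K = np.zeros((n, n))
    for X, g, w in zip(modalities, gammas, weights):
        K += w * rbf_kernel(X, g)
    return K

def center_kernel(K):
    """Double-center a kernel matrix (zero mean in feature space)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca_similarity(Kx, Ky, reg=1e-2):
    """First canonical correlation between two centered kernel views
    of the same n samples, via the standard regularized KCCA
    generalized eigenproblem. Returns a value in [0, 1]."""
    n = Kx.shape[0]
    I = np.eye(n)
    A = np.linalg.solve(Kx @ Kx + reg * I, Kx @ Ky)
    B = np.linalg.solve(Ky @ Ky + reg * I, Ky @ Kx)
    eigvals = np.linalg.eigvals(A @ B).real  # spectrum of rho^2
    return float(np.sqrt(np.clip(eigvals.max(), 0.0, 1.0)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    # Two hypothetical feature modalities (say, color and texture)
    # for the same n images; the shared latent factor stands in for
    # the visual regularity that KCCA is meant to pick up.
    latent = rng.normal(size=(n, 4))
    color = np.hstack([latent, rng.normal(size=(n, 8))])
    texture = np.hstack([latent @ rng.normal(size=(4, 4)),
                         rng.normal(size=(n, 12))])
    Kx = center_kernel(combined_kernel([color], [0.05], [1.0]))
    Ky = center_kernel(combined_kernel([texture], [0.05], [1.0]))
    print("first canonical correlation:", kcca_similarity(Kx, Ky))
```

Because the two views share a latent factor, the printed correlation is well above what unrelated features would give; in the same spirit, a high first canonical correlation can be read as a strong visual similarity context, with the regularizer `reg` guarding against the trivially perfect correlations that unregularized KCCA yields on invertible kernel matrices.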