Crowdsourced Learning to Photograph via Mobile Devices

  • Authors:
  • Wenyuan Yin;Tao Mei;Chang Wen Chen


  • Venue:
  • ICME '12 Proceedings of the 2012 IEEE International Conference on Multimedia and Expo
  • Year:
  • 2012

Abstract

Capturing a professional photo with high visual quality is a challenging task for mobile users. This paper presents a crowdsourced learning-to-photograph approach that assists mobile users in composing high-quality photos with their mobile devices. The proposed approach leverages the camera and scene context to search for related images with similar context and content from social media communities, and then mines composition knowledge from them to guide photographing on mobile devices. We develop a patch-based feature generation and selection process to discover the salient patches and positions that dominate the compositional aesthetics of the input scene. We then build a regression model to map the composition of salient patches to photo-aesthetic scores. Finally, we develop an efficient hierarchical approach to search for the optimal view enclosure for photograph suggestion. We conducted extensive simulations and subjective evaluations to verify the proposed approach.
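To make the pipeline concrete, below is a minimal sketch (not the authors' implementation) of its last two stages: a ridge-regression model that maps crop-composition features to an aesthetic score, and a coarse-to-fine search over candidate view rectangles. The feature extractor here is a simple stand-in; the paper instead uses salient-patch positions mined from context-related community photos, and all function names are illustrative.

```python
import numpy as np

def crop_features(image, rect):
    """Stand-in feature extractor: simple crop statistics.
    rect = (x, y, w, h). The paper encodes salient-patch composition instead."""
    x, y, w, h = rect
    crop = image[y:y + h, x:x + w]
    return np.array([crop.mean(), crop.std(), w / h, (w * h) / image.size])

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: weights = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def aesthetic_score(image, rect, weights):
    return float(crop_features(image, rect) @ weights)

def hierarchical_search(image, weights, levels=3, grid=5):
    """Coarse-to-fine search for the highest-scoring view enclosure."""
    H, W = image.shape[:2]
    best = (0, 0, W, H)
    best_s = aesthetic_score(image, best, weights)
    cx, cy = 0, 0                      # current search center (top-left)
    span_x, span_y = W // 2, H // 2    # search span, halved at each level
    for _ in range(levels):
        for dx in np.linspace(-span_x, span_x, grid).astype(int):
            for dy in np.linspace(-span_y, span_y, grid).astype(int):
                for s in (0.6, 0.8, 1.0):          # candidate zoom scales
                    w_, h_ = int(W * s), int(H * s)
                    x = int(np.clip(cx + dx, 0, W - w_))
                    y = int(np.clip(cy + dy, 0, H - h_))
                    val = aesthetic_score(image, (x, y, w_, h_), weights)
                    if val > best_s:
                        best, best_s = (x, y, w_, h_), val
        cx, cy = best[0], best[1]      # refine around the current best crop
        span_x, span_y = max(1, span_x // 2), max(1, span_y // 2)
    return best, best_s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((480, 640))
    # Toy training data: random crops with synthetic aesthetic scores.
    rects = [(int(rng.integers(0, 200)), int(rng.integers(0, 150)), 320, 240)
             for _ in range(50)]
    X = np.stack([crop_features(img, r) for r in rects])
    y = rng.random(50)
    weights = fit_ridge(X, y)
    print(hierarchical_search(img, weights))
```

The coarse-to-fine loop mirrors the hierarchical idea in the abstract: rather than scoring every possible crop, it evaluates a sparse grid of candidates and repeatedly narrows the search window around the best one found so far.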