Predicting occupation via human clothing and contexts

  • Authors:
  • Zheng Song; Meng Wang; Xian-sheng Hua; Shuicheng Yan

  • Affiliations:
  • Department of Electrical and Computer Engineering, National University of Singapore, Singapore; School of Computing, National University of Singapore, Singapore; Microsoft Research Asia, China; Department of Electrical and Computer Engineering, National University of Singapore, Singapore

  • Venue:
  • ICCV '11 Proceedings of the 2011 International Conference on Computer Vision
  • Year:
  • 2011

Abstract

Predicting human occupations in photos has great application potential in intelligent services and systems. However, traditional classification methods cannot reliably distinguish different occupations because of the complex relations between occupations and low-level image features. In this paper, we investigate the human occupation prediction problem by modeling the appearance of human clothing as well as the surrounding context. Human clothing, with its complex details and varied appearance, is described via part-based modeling on automatically aligned patches of human body parts. The image patches are represented with semantic-level patterns such as clothing and haircut styles using sparse-coding-based methods, yielding informative and noise-tolerant representations. This description of human clothing is shown to be more effective than traditional methods. Different kinds of surrounding context are also investigated as a complement to the clothing features in cases where background information is available. Experiments are conducted on a well-labeled image database containing more than 5,000 images from 20 representative occupation categories. This preliminary study shows that human occupation is reasonably predictable using the proposed clothing features and, where available, contextual information.
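
The abstract describes representing aligned body-part patches through sparse coding over a learned dictionary. The sketch below illustrates that general idea only; it is not the authors' pipeline. The descriptor dimensionality, dictionary size, sparsity level, and the use of scikit-learn's dictionary-learning utilities are all assumptions made for the example.

```python
# Hedged sketch (not the paper's implementation): sparse-code low-level
# descriptors of a body-part patch against a learned dictionary, so the
# sparse activation vector serves as a higher-level, noise-tolerant feature.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(0)

# Placeholder data standing in for low-level descriptors (e.g., dense
# SIFT-like vectors) extracted from aligned torso patches; the shapes
# (500 descriptors, 128 dimensions) are illustrative only.
patch_descriptors = rng.normal(size=(500, 128))

# Learn an over-complete dictionary whose atoms play the role of
# "clothing pattern" prototypes.
dict_learner = MiniBatchDictionaryLearning(
    n_components=256, alpha=1.0, batch_size=64, random_state=0
)
dictionary = dict_learner.fit(patch_descriptors).components_

# Encode a new patch descriptor with orthogonal matching pursuit; its
# sparse coefficient vector is the semantic-level patch representation.
coder = SparseCoder(
    dictionary=dictionary,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=5,
)
new_patch = rng.normal(size=(1, 128))
sparse_code = coder.transform(new_patch)

print("code length:", sparse_code.shape[1])
print("non-zero coefficients:", np.count_nonzero(sparse_code))
```

In a full occupation classifier, such sparse codes from each aligned body part would typically be pooled into a per-person feature and combined with context features before classification; the pooling and classification stages are omitted here.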