Sense beauty via face, dressing, and/or voice

  • Authors:
  • Tam V. Nguyen; Si Liu; Bingbing Ni; Jun Tan; Yong Rui; Shuicheng Yan

  • Affiliations:
  • National University of Singapore, Singapore; National Laboratory of Pattern Recognition, Beijing, China; Advanced Digital Sciences Center, Singapore; National University of Defense Technology, Hunan, China; Microsoft Research Asia, Beijing, China; National University of Singapore, Singapore

  • Venue:
  • Proceedings of the 20th ACM international conference on Multimedia
  • Year:
  • 2012

Abstract

Discovering the secret of beauty has been the pursuit of artists and philosophers for centuries. Nowadays, computational models for beauty estimation are being actively explored in the computer science community, yet with the focus mainly on facial features. In this work, we perform a comprehensive study of female attractiveness conveyed by single or multiple modalities of cues, i.e., face, dressing, and/or voice, and aim to uncover how the different modalities individually and collectively affect the human sense of beauty. To this end, we collect the Multi-Modality Beauty (M2B) dataset, the first of its kind for the study of female attractiveness, which is thoroughly annotated with attractiveness levels converted from manual k-wise ratings as well as with semantic attributes for each modality. A novel Dual-supervised Feature-Attribute-Task (DFAT) network is proposed to jointly learn the beauty estimation models for single and multiple modalities along with the attribute estimation models. The DFAT network is distinguished by its supervision in both the attribute and task layers. Several interesting observations about the sense of beauty across single and multiple modalities are reported, and extensive experimental evaluations on the collected M2B dataset demonstrate the effectiveness of the proposed DFAT network for female attractiveness estimation.
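
The abstract mentions that attractiveness levels are converted from manual k-wise ratings but does not spell out the conversion. Below is a minimal sketch of one plausible scheme, a Borda-style aggregation followed by quantization into discrete levels; the function name, the aggregation rule, and the equal-frequency binning are all assumptions for illustration, not the paper's actual procedure.

```python
from collections import defaultdict

def kwise_to_levels(comparisons, n_levels=5):
    """Convert k-wise rankings into discrete attractiveness levels.

    comparisons: list of rankings; each ranking is a list of item ids
    ordered from most to least attractive. The Borda-style aggregation
    here is an assumption; the paper's conversion may differ.
    """
    points, counts = defaultdict(float), defaultdict(int)
    for ranking in comparisons:
        k = len(ranking)
        for rank, item in enumerate(ranking):
            points[item] += k - 1 - rank  # Borda points: better rank, more points
            counts[item] += 1
    scores = {item: points[item] / counts[item] for item in points}
    # Quantize the averaged scores into n_levels equal-frequency bins.
    ordered = sorted(scores, key=scores.get)
    return {item: i * n_levels // len(ordered) for i, item in enumerate(ordered)}
```

For example, kwise_to_levels([["a", "b", "c"], ["b", "a", "d"]], n_levels=2) assigns level 1 to items a and b and level 0 to items c and d.

Likewise, the dual supervision that distinguishes the DFAT network, with labels applied at both the attribute layer and the task layer, can be illustrated structurally. The following PyTorch sketch assumes a single modality, hypothetical layer sizes, and common loss choices (a multi-label attribute loss plus a level-classification loss); it is not the paper's exact formulation, which also covers multi-modality fusion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DFATSketch(nn.Module):
    """Structural sketch: shared features -> attribute layer -> task layer,
    with supervision attached to both heads (all dimensions hypothetical)."""

    def __init__(self, in_dim=512, feat_dim=128, n_attrs=10, n_levels=5):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.attr_head = nn.Linear(feat_dim, n_attrs)             # attribute supervision
        self.task_head = nn.Linear(feat_dim + n_attrs, n_levels)  # task supervision

    def forward(self, x):
        h = self.features(x)
        attr_logits = self.attr_head(h)
        # The task layer sees both the shared features and the attributes.
        task_logits = self.task_head(torch.cat([h, attr_logits], dim=1))
        return attr_logits, task_logits

def dual_loss(attr_logits, task_logits, attr_targets, level_targets, lam=0.5):
    """Joint objective: task loss plus a weighted attribute loss (lam is assumed)."""
    attr_loss = F.binary_cross_entropy_with_logits(attr_logits, attr_targets)
    task_loss = F.cross_entropy(task_logits, level_targets)
    return task_loss + lam * attr_loss
```

Supervising the intermediate attribute layer in addition to the final task layer is what "dual-supervised" refers to in the abstract; one simple way to extend the sketch to multiple modalities would be to concatenate per-modality features before the two heads.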