Using 3D computer graphics for perception: the role of local and global information in face processing

  • Authors:
  • Adrian Schwaninger, University of Zürich, Switzerland
  • Sandra Schumacher, University of Zürich, Switzerland
  • Heinrich Bülthoff, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
  • Christian Wallraven, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

  • Venue:
  • Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization
  • Year:
  • 2007

Abstract

Everyday life requires us to recognize faces under transient changes in pose, expression, and lighting conditions. Despite these variations, humans are adept at recognizing familiar faces. In this study, we focused on determining the types of information human observers use to recognize faces across variations in viewpoint. Of specific interest was whether holistic information is used exclusively, or whether the local information contained in facial parts (featural or component information), as well as their spatial relationships (configural information), is also encoded. A rigorous study investigating this question has not previously been possible, as generating a suitable set of stimuli using standard image manipulation techniques was not feasible. A 3D database of faces that have been processed to extract morphable models (Blanz & Vetter, 1999) allows us to generate such stimuli efficiently and with a high degree of control over display parameters. Three experiments were conducted, modeled after the inter-extra-ortho experiments of Bülthoff & Edelman (1992). The first experiment served as a baseline for the subsequent two. Ten face stimuli were presented from a frontal view and from a 45° side view; at test, they had to be recognized among ten distractor faces shown from different viewpoints. We found systematic effects of viewpoint: recognition performance increased as the angle between the learned view and the tested view decreased. This finding is consistent with face processing models based on 2D-view interpolation. Experiments 2 and 3 were identical to Experiment 1 except that, in the testing phase, the faces were presented scrambled or blurred. Scrambling was used to isolate featural from configural information; blurring was used to provide stimuli in which local featural information was reduced. The results demonstrate that human observers are capable of recognizing faces across different viewpoints on the sole basis of isolated featural information and of isolated configural information.
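
The scrambling and blurring manipulations described in the abstract can be illustrated with a brief image-processing sketch. The code below is not the authors' pipeline (their stimuli were rendered from the 3D morphable model of Blanz & Vetter, 1999); it is a minimal Python/Pillow illustration, assuming a hypothetical stimulus file face_frontal.png and arbitrary grid and blur-radius parameters. Scrambling shuffles image tiles so that featural (part-based) information survives while configural (spatial-relation) information is destroyed; blurring low-pass filters the image so that local featural detail is reduced while the overall configuration remains visible.

```python
# Illustrative sketch only: the paper does not specify an image-processing
# pipeline, so the grid size, blur radius, and file names are hypothetical.
import random
from PIL import Image, ImageFilter

def scramble(img: Image.Image, rows: int = 3, cols: int = 3) -> Image.Image:
    """Cut the image into a rows x cols grid and shuffle the tiles,
    preserving featural information while destroying configural information."""
    w, h = img.size
    tw, th = w // cols, h // rows
    tiles = [img.crop((c * tw, r * th, (c + 1) * tw, (r + 1) * th))
             for r in range(rows) for c in range(cols)]
    random.shuffle(tiles)
    out = Image.new(img.mode, (tw * cols, th * rows))
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)
        out.paste(tile, (c * tw, r * th))
    return out

def blur(img: Image.Image, radius: float = 8.0) -> Image.Image:
    """Low-pass filter the image so that local featural detail is reduced
    while the face's overall configuration stays recognizable."""
    return img.filter(ImageFilter.GaussianBlur(radius))

if __name__ == "__main__":
    face = Image.open("face_frontal.png")        # hypothetical stimulus image
    scramble(face).save("face_scrambled.png")    # featural-only test stimulus
    blur(face).save("face_blurred.png")          # configural-only test stimulus
```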