Examining Virtual Busts: Are Photogrammetrically Generated Head Models Effective for Person Identification?

  • Authors:
  • Jeremy N. Bailenson; Andrew C. Beall; Jim Blascovich; Christopher Rex

  • Affiliations:
  • Virtual Human Interaction Lab, Department of Communication, Stanford University, Stanford, CA 94305-2050, bailenson@stanford.edu; Research Center for Virtual Environments and Behavior, Department of Psychology, University of California at Santa Barbara, Santa Barbara, CA 93106, beall@psych.ucsb.edu; Research Center for Virtual Environments and Behavior, Department of Psychology, University of California at Santa Barbara, Santa Barbara, CA 93106, blascovich@psych.ucsb.edu; Research Center for Virtual Environments and Behavior, Department of Psychology, University of California at Santa Barbara, Santa Barbara, CA 93106, crex@uic.edu

  • Venue:
  • Presence: Teleoperators and Virtual Environments
  • Year:
  • 2004

Abstract

We examined the effectiveness of using 3D, visual, digital representations of human heads and faces (i.e., virtual busts) for person identification. In a series of 11 studies, participants learned a number of human faces from analog photographs. We then crafted virtual busts from those analog photographs and compared recognition of photographs of the virtual busts to recognition of the original analog photographs. We demonstrated that the accuracy of person identification using photographs of virtual busts is high in an absolute sense, but not as high as with the original analog photographs. We present a paradigm for comparing the similarity of virtual busts to analog photographs, both structural (objective similarity in shape) and subjective (similarity as perceived by a viewer), with the goal of opening a discussion toward a uniform standard for assessing the fidelity of digital models of human faces.