Functional similarities in spatial representations between real and virtual environments

  • Authors:
  • Betsy Williams, Gayathri Narasimham, Claire Westerman, John Rieser, Bobby Bodenheimer

  • Affiliation:
  • Vanderbilt University, Nashville, TN (all authors)

  • Venue:
  • ACM Transactions on Applied Perception (TAP)
  • Year:
  • 2007

Abstract

This paper presents results that demonstrate functional similarities in subjects' access to spatial knowledge (or spatial representations) between real and virtual environments. Such representations are important components of the transfer of reasoning ability and knowledge between these two environments. In particular, we present two experiments aimed at investigating similarities in spatial knowledge derived from exploring on foot both physical environments and virtual environments presented through a head-mounted display. In the first experiment, subjects were asked to learn the locations of target objects in the real or virtual environment and then change their perspective by either physically locomoting to a new facing direction or imagining the movement. Latencies and errors were generally worse after imagined locomotion than after physical locomotion, and worse for greater degrees of rotation in perspective; they did not differ significantly between knowledge derived from exploring the physical environment and knowledge derived from exploring the virtual one. In the second experiment, subjects were asked to imagine simple rotations versus simple translations in perspective. The errors and latencies were linearly related to the magnitude of the to-be-imagined disparity in perspective, and this relationship held whether the environment had been learned physically or virtually. These results demonstrate functional similarities in access to knowledge of new perspectives when that knowledge is gained by exploring physical environments and by exploring virtual renderings of the same environments.
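
The abstract reports that latencies and errors grow roughly linearly with the magnitude of the to-be-imagined change in perspective in both learning conditions. The sketch below is only an illustration of how such a linear relationship could be checked; the disparity values and latencies are placeholder numbers, not data from the paper.

    # Minimal sketch (hypothetical data): fit response latency as a linear
    # function of imagined perspective disparity for two learning conditions.
    import numpy as np

    # Placeholder per-condition data: disparity in degrees, latency in seconds.
    disparity = np.array([0.0, 45.0, 90.0, 135.0, 180.0])
    latency_real = np.array([1.1, 1.4, 1.7, 2.0, 2.3])     # hypothetical values
    latency_virtual = np.array([1.2, 1.5, 1.8, 2.1, 2.4])  # hypothetical values

    for label, latency in [("real", latency_real), ("virtual", latency_virtual)]:
        slope, intercept = np.polyfit(disparity, latency, 1)  # least-squares line
        r = np.corrcoef(disparity, latency)[0, 1]             # Pearson correlation
        print(f"{label}: latency ~ {slope:.4f} * disparity + {intercept:.2f} (r = {r:.2f})")

A slope of similar size and a correlation near 1 in both conditions would correspond to the paper's claim of functionally similar access to spatial knowledge across real and virtual learning.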