Torso versus gaze direction to navigate a VE by walking in place

  • Authors: Betsy Williams, Matthew McCaleb, Courtney Strachan, Ye Zheng
  • Affiliation: Rhodes College
  • Venue: Proceedings of the ACM Symposium on Applied Perception
  • Year: 2013


Abstract

In this work, we present a simple method of "walking in place" (WIP) using the Microsoft Kinect to explore a virtual environment (VE) with a head-mounted display (HMD). Prior studies have shown that exploring a VE by WIP is equivalent to normal walking in terms of spatial orientation, which suggests that WIP is a promising way to explore a large VE. The Microsoft Kinect sensor is well suited to implementing WIP because it provides real-time skeletal tracking and is relatively inexpensive (150 USD). However, the skeletal data obtained from Kinect sensors can be noisy. Thus, in this work, we discuss how we combined the data from two Kinects to implement a robust WIP algorithm. As part of our analysis of how best to implement WIP with the Kinect, we compare gaze-directed locomotion to torso-directed locomotion. We report that participants' spatial orientation was better when they translated forward in the VE in the direction they were looking.
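The abstract describes two technical ingredients: fusing noisy skeletal data from two Kinects, and translating the viewpoint along either the gaze or torso heading while a step in place is detected. The paper does not give its algorithm here, so the sketch below is a hypothetical illustration of that pipeline, assuming both sensors have already been calibrated into a common coordinate frame and that a step is flagged when the fused knee height rises above its standing baseline (the function names, threshold, and speed are illustrative, not the authors'):

```python
import math

def fuse_joint(p1, p2):
    """Average one joint's 3-D position as reported by two Kinects
    (assumes both sensors are registered to a common frame)."""
    return tuple((a + b) / 2.0 for a, b in zip(p1, p2))

def step_detected(knee_y, baseline_y, threshold=0.05):
    """Flag a walking-in-place step when the fused knee height rises
    more than `threshold` metres above its standing baseline."""
    return knee_y - baseline_y > threshold

def advance(position, heading, speed, dt):
    """Translate the viewpoint along a 2-D heading vector, which could
    come from either the HMD gaze direction or the torso orientation."""
    norm = math.hypot(heading[0], heading[1])
    ux, uy = heading[0] / norm, heading[1] / norm
    return (position[0] + ux * speed * dt,
            position[1] + uy * speed * dt)

# Illustrative frame update (hypothetical values):
pos = (0.0, 0.0)
knee = fuse_joint((0.00, 0.55, 1.9), (0.02, 0.57, 2.1))
if step_detected(knee[1], baseline_y=0.50):
    pos = advance(pos, heading=(0.0, 1.0), speed=1.2, dt=1 / 30)
```

Swapping the `heading` argument between the gaze vector and the torso vector is the experimental manipulation the paper compares.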