Exploring a virtual environment by walking in place using the Microsoft Kinect

  • Authors:
  • Ye Zheng, Matthew McCaleb, Courtney Strachan, Betsy Williams

  • Affiliations:
  • Rhodes College

  • Venue:
  • Proceedings of the ACM Symposium on Applied Perception
  • Year:
  • 2012


Abstract

When using a head-mounted display (HMD) to explore a virtual environment (VE), it is useful to navigate on foot. This aids spatial awareness because it provides the inertial cues associated with physical locomotion. However, the virtual environment that can be physically explored on foot can be no larger than the limits of the tracking system. One way to permit free exploration of a virtual environment of any size, while still providing some of the inertial cues of walking, is to have users "walk in place" (WIP) [Slater et al. 1995; Feasel et al. 2008; Williams et al. 2011]. With WIP, each step is treated as a translation of a fixed distance even though the participant remains in the same location. In our prior work [Williams et al. 2011], we successfully implemented a WIP method using an inexpensive Nintendo Wii Balance Board and showed that participants' spatial orientation was the same as with normal walking and superior to joystick navigation. That WIP algorithm had two major drawbacks. First, the step detection algorithm had a half-step lag. Second, participants found it slightly annoying to walk in place on the small board. The current work seeks to overcome these limitations by presenting a WIP algorithm that uses the Microsoft Kinect sensor. This technology is readily available to the public for around 150 USD.
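To make the WIP idea concrete, the sketch below shows one plausible way to detect steps from Kinect-style skeletal data. The joint representation, the `rise_threshold` value, and the rising-edge detection rule are illustrative assumptions, not the paper's actual algorithm; they merely show how a step can be registered as it begins (avoiding a half-step lag) and mapped to a translation.

```python
# Hypothetical walking-in-place (WIP) step detector operating on knee
# heights as a Kinect-style skeleton tracker might report them.
# All names and thresholds here are illustrative assumptions.

def detect_steps(knee_heights, rise_threshold=0.08):
    """Count steps in a time series of (left_knee_y, right_knee_y)
    heights, in meters relative to each knee's resting position.

    A step is registered on the rising edge -- the moment a knee
    lifts above the threshold -- so the virtual translation can be
    triggered at the start of the step rather than after it completes.
    """
    steps = 0
    raised = [False, False]  # lift state of left and right knee
    for left_y, right_y in knee_heights:
        for i, y in enumerate((left_y, right_y)):
            if y > rise_threshold and not raised[i]:
                raised[i] = True   # knee just lifted: count a step now
                steps += 1
            elif y <= rise_threshold:
                raised[i] = False  # knee back down: ready for next step
    return steps

# Synthetic frames: one left-knee lift, then one right-knee lift.
frames = [(0.0, 0.0), (0.12, 0.0), (0.0, 0.0), (0.0, 0.11), (0.0, 0.0)]
print(detect_steps(frames))  # 2
```

Each counted step would then be translated into a fixed forward displacement in the VE, per the WIP scheme described above.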