Gaze-directed Adaptive Rendering for Interacting with Virtual Space

  • Authors:
  • Toshikazu Ohshima, Hiroyuki Yamamoto, Hideyuki Tamura

  • Venue:
  • VRAIS '96: Proceedings of the 1996 Virtual Reality Annual International Symposium
  • Year:
  • 1996

Abstract

This paper presents a new rendering method for interacting with a 3D virtual space using gaze-detection devices. In this method, hierarchical geometric models of the graphic objects are constructed prior to rendering. The rendering process first calculates the visual acuity, which represents the importance of a graphic object to the human operator, from the operator's gaze position. Second, the process selects a level from the set of hierarchical geometric models according to the value of the visual acuity: a simpler level of detail is selected where the visual acuity is lower, and a more detailed level where it is higher. The selected models are then rendered on the display. The paper examines three visual characteristics for calculating the visual acuity: central/peripheral vision, kinetic vision, and fusional vision. The actual implementation and our testbed system are described, along with the details of the visual acuity model.
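
The acuity-driven level-of-detail selection described above can be sketched in code. The following Python sketch is illustrative only: the reciprocal falloff functions, the decay constants `k_ecc`, `k_vel`, and `k_fus`, the multiplicative combination of the three factors, and the linear acuity-to-level mapping are all assumptions made for this example, not the calibrated acuity model given in the paper.

```python
import math

NUM_LOD_LEVELS = 4  # index 0 = coarsest geometric model, 3 = finest


def eccentricity_deg(gaze_dir, obj_dir):
    """Angle in degrees between the gaze ray and the direction to an object.

    Both arguments are assumed to be unit vectors in view space.
    """
    dot = sum(g * o for g, o in zip(gaze_dir, obj_dir))
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point drift
    return math.degrees(math.acos(dot))


def visual_acuity(ecc_deg, angular_speed=0.0, depth_offset_deg=0.0,
                  k_ecc=0.05, k_vel=0.01, k_fus=0.02):
    """Combine the three visual characteristics into one acuity value in (0, 1].

    ecc_deg          -- angular distance from the gaze point (central/peripheral vision)
    angular_speed    -- retinal angular velocity in deg/s (kinetic vision)
    depth_offset_deg -- disparity offset from the fixation depth (fusional vision)

    The reciprocal falloffs and the k_* constants are illustrative assumptions.
    """
    a_static = 1.0 / (1.0 + k_ecc * ecc_deg)
    a_kinetic = 1.0 / (1.0 + k_vel * angular_speed)
    a_fusional = 1.0 / (1.0 + k_fus * abs(depth_offset_deg))
    return a_static * a_kinetic * a_fusional


def select_lod(acuity):
    """Map acuity to a level index: lower acuity selects a simpler model."""
    return min(int(acuity * NUM_LOD_LEVELS), NUM_LOD_LEVELS - 1)


# Example: an object 30 degrees off the gaze direction receives a coarser
# level than one at the gaze point.
gaze = (0.0, 0.0, 1.0)
obj = (math.sin(math.radians(30.0)), 0.0, math.cos(math.radians(30.0)))
print(select_lod(visual_acuity(eccentricity_deg(gaze, obj))))  # coarser level
print(select_lod(visual_acuity(0.0)))                          # finest level
```

Presumably, in the authors' scheme the per-object acuity is reevaluated every frame from the measured gaze position, so that geometric detail follows the operator's fixation as it moves through the scene.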