Presence and interaction in mixed reality environments

  • Authors:
  • Arjan Egges;George Papagiannakis;Nadia Magnenat-Thalmann

  • Affiliations:
Utrecht University, Center for Advanced Gaming and Simulation, Department of Information and Computing Sciences, PO Box 80.089, 3508TB, Utrecht, The Netherlands; University of Geneva, MIRALab, Geneva, Switzerland; University of Geneva, MIRALab, Geneva, Switzerland

  • Venue:
  • The Visual Computer: International Journal of Computer Graphics
  • Year:
  • 2007

Abstract

In this paper, we present a simple and robust mixed reality (MR) framework that allows real-time interaction with virtual humans in mixed reality environments under consistent illumination. We look at three crucial parts of this system: interaction, animation, and global illumination of virtual humans for an integrated and enhanced presence. The interaction system comprises a dialogue module interfaced with a speech recognition and synthesis system. In addition to speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motion, such as normal key-frame animations, or motions generated on the fly by adapting previously recorded clips; real-time idle motions are an example of the latter category. All these different motions are generated and blended on-line, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the animation layer and is based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, combined under a unified framework for presence and interaction in MR.
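The on-line blending of key-frame and generated motions described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pose representation (a dictionary mapping joint names to quaternions) and the use of normalized linear interpolation (nlerp, a common real-time approximation to slerp) are assumptions for the example.

```python
import math

def nlerp(q0, q1, t):
    """Normalized linear interpolation between two quaternions (w, x, y, z)."""
    # Flip one quaternion's sign if needed so we blend along the shorter arc.
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
    q = tuple((1.0 - t) * a + t * b for a, b in zip(q0, q1))
    norm = math.sqrt(sum(c * c for c in q))
    return tuple(c / norm for c in q)

def blend_poses(pose_a, pose_b, weight_b):
    """Blend two poses joint by joint; weight_b in [0, 1] favors pose_b."""
    return {joint: nlerp(pose_a[joint], pose_b[joint], weight_b)
            for joint in pose_a}

# Example: blend a key-frame pose with an on-the-fly idle-motion pose.
keyframe = {"l_shoulder": (1.0, 0.0, 0.0, 0.0)}            # identity rotation
idle     = {"l_shoulder": (0.70711, 0.70711, 0.0, 0.0)}    # 90 deg about x
blended  = blend_poses(keyframe, idle, 0.5)                # ~45 deg about x
```

In a real-time engine this blend would run per frame, with the weight driven by the animation layer (e.g. fading idle motions in and out around scripted clips).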