Immersive multiplayer tennis with Microsoft Kinect and body sensor networks

  • Authors:
  • Suraj Raghuraman;Karthik Venkatraman;Zhanyu Wang;Jian Wu;Jacob Clements;Reza Lotfian;Balakrishnan Prabhakaran;Xiaohu Guo;Roozbeh Jafari;Klara Nahrstedt

  • Affiliations:
University of Texas at Dallas, Richardson, TX, USA;University of Texas at Dallas, Richardson, TX, USA;University of Texas at Dallas, Richardson, TX, USA;University of Texas at Dallas, Richardson, TX, USA;University of Texas at Dallas, Richardson, TX, USA;University of Texas at Dallas, Richardson, TX, USA;University of Texas at Dallas, Richardson, TX, USA;University of Texas at Dallas, Richardson, TX, USA;University of Texas at Dallas, Richardson, TX, USA;University of Illinois at Urbana-Champaign, Urbana-Champaign, IL, USA

  • Venue:
Proceedings of the 20th ACM International Conference on Multimedia
  • Year:
  • 2012

Abstract

We present an immersive gaming demonstration that uses a minimal number of wearable sensors. The game demonstrated is two-player tennis. We combine a virtual environment with real 3D representations of physical objects such as the players and the tennis racquet (if available). The main objective of the game is to provide as realistic an experience of tennis as possible while remaining as unintrusive as possible. The game is played across a network, which opens up the possibility of two remote players playing a game together on a single virtual tennis court. Microsoft Kinect sensors are used to obtain a 3D point cloud and a skeletal map representation of each player. This 3D point cloud is mapped onto the virtual tennis court. We also use a wireless wearable Attitude and Heading Reference System (AHRS) mote, which is strapped onto the wrist of each player. This mote gives us precise information about the movement (swing, rotation, etc.) of the playing arm. This information, along with the skeletal map, is used to implement the physics of the game. Using this game, we demonstrate our solutions for simultaneous data acquisition, 3D point-cloud mapping into a virtual space, calibration of real and virtual objects using the Kinect and AHRS sensors, and interaction of virtual objects with a 3D point cloud.
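The abstract describes mapping each player's Kinect point cloud onto the shared virtual court. A minimal sketch of such a step is a rigid transform (rotation plus translation) from the Kinect's camera frame into the court frame; all function names, the calibration values, and the baseline offset below are illustrative assumptions, not the authors' implementation:

```python
import math

def transform_point(p, R, t):
    """Apply a rigid transform (3x3 rotation R, translation t) to a 3D point."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

def map_cloud_to_court(cloud, R, t):
    """Map Kinect-frame points into the shared virtual-court frame."""
    return [transform_point(p, R, t) for p in cloud]

# Illustrative calibration: rotate 90 degrees about the vertical (y) axis
# and translate the player's cloud toward one baseline of the virtual court.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c, 0.0, s],
     [0.0, 1.0, 0.0],
     [-s, 0.0, c]]
t = (0.0, 0.0, -11.0)  # offset toward the baseline; value is illustrative

cloud = [(0.5, 1.2, 2.0)]  # one Kinect point, in metres
mapped = map_cloud_to_court(cloud, R, t)
```

In practice each remote player would have their own calibration transform, so both clouds land on opposite sides of the same court.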
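The AHRS mote supplies the orientation of the playing arm, which drives the racquet physics. One common way to use such data (a sketch under assumptions — the mote's actual output format, the racquet model, and the swing threshold are hypothetical, not taken from the paper) is to rotate a reference racquet axis by the reported unit quaternion, and to flag a swing when the angular rate exceeds a threshold:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def quat_rotate(q, v):
    """Rotate vector v by a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = (x, y, z)
    t = tuple(2.0 * c for c in cross(u, v))
    ut = cross(u, t)
    return tuple(v[i] + w * t[i] + ut[i] for i in range(3))

def is_swing(gyro_dps, threshold_dps=250.0):
    """Flag a swing when angular speed exceeds a (hypothetical) threshold."""
    return math.sqrt(sum(g * g for g in gyro_dps)) > threshold_dps

# Orientation sample: a 90-degree wrist rotation about the vertical axis
# turns the reference racquet axis (1, 0, 0) into roughly (0, 1, 0).
half = math.pi / 4
q = (math.cos(half), 0.0, 0.0, math.sin(half))
racquet_axis = quat_rotate(q, (1.0, 0.0, 0.0))

print(is_swing((300.0, 50.0, 20.0)))  # fast wrist motion, prints True
```

The rotated racquet axis would orient the virtual racquet each frame, while the swing flag could gate collision handling between the racquet and the ball.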