A video-based personalized face model generation approach for network 3D games

  • Authors:
  • Xiangyong Zeng;Jian Yao;Mandun Zhang;Yangsheng Wang

  • Affiliations:
Institute of Automation, Chinese Academy of Sciences, Beijing, P.R. China (all authors)

  • Venue:
  • ICEC'05 Proceedings of the 4th international conference on Entertainment Computing
  • Year:
  • 2005

Abstract

We have developed a fast generation system for personalized 3D face models and plan to apply it in network 3D games. The system uses a single video camera to capture the player’s frontal face image for 3D modeling and requires neither camera calibration nor extensive manual tuning. The 3D face model in games is represented by a 3D geometry mesh and a 2D texture image. The personalized geometry mesh is obtained by deforming a generic mesh according to the relative positions of the player’s facial features, which are automatically detected from the frontal image. The corresponding texture image is extracted from the same image. To save storage space and network bandwidth, only the feature data and texture data from each player are sent to the game server and then forwarded to the other clients. As a result, players can see their own faces in multiplayer games.
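The deformation step described above, fitting a generic mesh to detected feature points, can be sketched as follows. This is a minimal illustration only: the function name, the 2D vertex representation, and the inverse-distance weighting scheme are assumptions for clarity, not the authors' actual deformation algorithm.

```python
# Hedged sketch: deform a generic face mesh so its feature points match the
# features detected in the player's frontal image. The inverse-distance
# weighting used here is an illustrative choice, not the paper's method.

def deform_mesh(vertices, generic_feats, detected_feats, power=2.0):
    """Move each mesh vertex by an inverse-distance-weighted blend of the
    feature-point displacements (generic position -> detected position)."""
    # Displacement of each feature point from the generic mesh to the image.
    offsets = [(dx - gx, dy - gy)
               for (gx, gy), (dx, dy) in zip(generic_feats, detected_feats)]
    deformed = []
    for (vx, vy) in vertices:
        wsum, ox, oy = 0.0, 0.0, 0.0
        for (gx, gy), (fx, fy) in zip(generic_feats, offsets):
            d2 = (vx - gx) ** 2 + (vy - gy) ** 2
            if d2 < 1e-12:  # vertex coincides with a feature point
                ox, oy, wsum = fx, fy, 1.0
                break
            w = 1.0 / d2 ** (power / 2.0)
            wsum += w
            ox += w * fx
            oy += w * fy
        deformed.append((vx + ox / wsum, vy + oy / wsum))
    return deformed
```

With a single feature point this reduces to a uniform shift of the whole mesh; with several, vertices near each feature follow that feature most strongly, which is the intuition behind feature-driven mesh deformation.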