Virtual fashion show using real-time markerless motion capture

  • Authors:
  • Ryuzo Okada; Björn Stenger; Tsukasa Ike; Nobuhiro Kondoh

  • Affiliations:
  • Corporate Research & Development Center, Toshiba Corporation (Okada, Stenger, Ike); Semiconductor Company, Toshiba Corporation (Kondoh)

  • Venue:
  • ACCV'06: Proceedings of the 7th Asian Conference on Computer Vision, Part II
  • Year:
  • 2006

Abstract

This paper presents a motion capture system using two cameras that estimates a constrained set of human postures in real time. We first obtain a 3D shape model of the person to be tracked and create a posture dictionary consisting of many posture examples. The posture is estimated by hierarchically matching the silhouette observed in the current image against silhouettes generated by projecting the 3D shape model, deformed into the dictionary poses, onto the image plane. Based on this method, we have developed a virtual fashion show system that renders a computer graphics model moving synchronously with a real fashion model, but wearing different clothes.
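
The abstract only sketches the matching procedure, so the following is a minimal illustration of the general idea rather than the authors' implementation: it assumes binary silhouette images, uses pixel-wise mismatch as the dissimilarity measure, and models the hierarchy as a dictionary pre-clustered into groups with one representative silhouette per cluster. All function names and the data layout are hypothetical.

```python
import numpy as np

def silhouette_distance(observed, candidate):
    """Dissimilarity between two binary silhouettes: fraction of mismatched pixels."""
    mismatch = np.logical_xor(observed, candidate)
    return np.count_nonzero(mismatch) / observed.size

def hierarchical_match(observed, clusters):
    """Coarse-to-fine search: compare against each cluster's representative
    silhouette first, then search only within the best-matching cluster."""
    best_cluster = min(
        clusters,
        key=lambda c: silhouette_distance(observed, c["representative"]),
    )
    best_idx, _ = min(
        enumerate(best_cluster["silhouettes"]),
        key=lambda item: silhouette_distance(observed, item[1]),
    )
    return best_cluster["pose_ids"][best_idx]

# Toy usage with random binary images standing in for silhouettes that would
# normally be rendered by projecting the deformed 3D shape model.
rng = np.random.default_rng(0)
dictionary = [
    {
        "representative": rng.random((64, 64)) > 0.5,
        "silhouettes": [rng.random((64, 64)) > 0.5 for _ in range(10)],
        "pose_ids": list(range(k * 10, (k + 1) * 10)),
    }
    for k in range(5)
]
observed = rng.random((64, 64)) > 0.5
print("Estimated pose id:", hierarchical_match(observed, dictionary))
```

In a real system the dictionary silhouettes would be precomputed offline from the person-specific 3D shape model, and the coarse level of the hierarchy keeps the per-frame search cost low enough for real-time operation.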