Hi4D-ADSIP 3-D dynamic facial articulation database
Image and Vision Computing
Basic research toward a virtual face-to-face communication environment between a human operator and a machine is presented. In this system, a natural human face appears on the machine's display and can talk to the operator with a natural voice and natural facial expressions. A facial expression synthesis scheme driven by natural voice is presented. Voice carries not only linguistic information but also emotional features, and an expression control scheme driven by both is proposed. The human head is represented by a 3-D wire-frame model, and its surface is generated by texture-mapping a 2-D real image onto the model. All motions and expressions are synthesized and controlled automatically through the movement of a small set of feature points on the model.
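The abstract does not specify how feature-point motion propagates to the rest of the wire-frame model; a common approach for this style of control is to interpolate the feature-point displacements over all mesh vertices. The sketch below is a minimal, hypothetical illustration using inverse-distance weighting (the function name `deform_mesh` and all coordinates are assumptions, not from the paper):

```python
import numpy as np

def deform_mesh(vertices, feature_pts, feature_disp, power=2.0, eps=1e-8):
    """Displace mesh vertices by inverse-distance-weighted interpolation
    of the displacements applied at a few control feature points.
    (Illustrative scheme only; the paper does not state its exact method.)"""
    vertices = np.asarray(vertices, dtype=float)
    feature_pts = np.asarray(feature_pts, dtype=float)
    feature_disp = np.asarray(feature_disp, dtype=float)
    # distance of every vertex to every feature point
    d = np.linalg.norm(vertices[:, None, :] - feature_pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # closer feature points dominate
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per vertex
    return vertices + w @ feature_disp    # blended displacement per vertex

# toy example: pulling one "mouth corner" feature point upward
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
feats = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
disp  = np.array([[0.0, 0.2, 0.0], [0.0, 0.0, 0.0]])
moved = deform_mesh(verts, feats, disp)
```

A vertex coinciding with a feature point follows it almost exactly, while vertices between feature points receive a smooth blend of their displacements, which is the qualitative behavior the abstract describes for automatic expression control.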