We present a novel visual creativity tool that recognizes facial expressions and tracks facial muscle movements in real time to produce sounds. The facial expression recognition module detects and tracks a face and outputs a feature vector describing the motions of specific facial locations. This feature vector is fed to a Bayesian network that classifies the facial expression into one of several categories (e.g., angry, disgusted, happy). The classification result, together with the feature vector, drives a combination of sounds that change in real time with the person's facial expressions. We explain the artistic motivation behind the work, describe the basic components of our tool, and outline possible applications in the arts (performance, installation) and in the medical domain. Finally, we report on the experiences of approximately 25 users who tried the system at a conference demonstration session and of nine participants in a pilot study assessing the system's usability, and we discuss our experience installing the work at a major digital arts festival (RE-NEW 2009).
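The pipeline described above can be sketched in miniature. The following is an illustrative sketch only, not the authors' implementation: the feature values, class names, nearest-mean classifier (a stand-in for the paper's Bayesian network), and pitch/amplitude mapping are all hypothetical choices made for this example.

```python
# Hypothetical sketch of the pipeline: a facial feature vector (motions of
# tracked face locations) is classified into an expression category, and the
# category plus the raw features drive sound-synthesis parameters.

EXPRESSIONS = ["angry", "disgusted", "happy", "neutral"]

# Toy per-class mean feature vectors; stand-ins for a trained Bayesian network.
CLASS_MEANS = {
    "angry":     [0.8, 0.1, 0.2],
    "disgusted": [0.6, 0.7, 0.1],
    "happy":     [0.1, 0.2, 0.9],
    "neutral":   [0.0, 0.0, 0.0],
}

def classify(features):
    """Nearest-mean classifier: a simple stand-in for the Bayesian network."""
    def sq_dist(mean):
        return sum((f - m) ** 2 for f, m in zip(features, mean))
    return min(CLASS_MEANS, key=lambda c: sq_dist(CLASS_MEANS[c]))

def sound_parameters(features, expression):
    """Map the expression class to a base pitch and derive amplitude from
    overall facial motion; both mappings are invented for this sketch."""
    base_pitch = {"angry": 110.0, "disgusted": 146.8,
                  "happy": 440.0, "neutral": 220.0}[expression]
    amplitude = min(1.0, sum(abs(f) for f in features))
    return {"pitch_hz": base_pitch, "amplitude": amplitude}

# Example frame: strong motion in the third tracked feature
# (e.g., raised mouth corners).
features = [0.2, 0.1, 0.85]
expr = classify(features)                    # -> "happy"
params = sound_parameters(features, expr)    # pitch 440 Hz, amplitude capped at 1.0
```

In a real system the features would arrive continuously from the face tracker, so both the classification and the synthesis parameters would be updated every frame, which is what makes the sound change in real time with the performer's expression.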