Exploring social and temporal dimensions of emotion induction using an adaptive affective mirror
CHI '09 Extended Abstracts on Human Factors in Computing Systems
In this paper, we present a multimodal affective mirror that senses and elicits laughter. The mirror currently comprises a vocal and a facial affect-sensing module, a fusion component that combines the output of these two modules into a user-state assessment, a user-state transition model, and a component that presents audiovisual affective feedback intended to keep or bring the user into the intended state. Interaction with this intelligent interface is a full cyclic process of sensing, interpreting, reacting, sensing (of the reaction's effects), interpreting, and so on. The mirror is intended to evoke positive emotions: to make people laugh and to intensify their laughter. First user-experience tests showed that users behave cooperatively, resulting in mutual user-mirror action-reaction cycles. Most users enjoyed the interaction with the mirror and were fully immersed in the experience.
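The sense-interpret-react cycle described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the state names, the 50/50 decision-level fusion weights, and the score thresholds are all assumptions made for the example.

```python
# Hypothetical sketch of one affective-mirror cycle: fuse the vocal and
# facial module outputs, assess the user state, and choose feedback that
# should keep or bring the user into the target (laughing) state.

# Assumed discrete user states, ordered by laughter intensity.
STATES = ["neutral", "smile", "laugh"]

def fuse(vocal_score: float, facial_score: float) -> float:
    """Decision-level fusion: equal-weight average of the two modules
    (the weights here are an assumption, not the paper's values)."""
    return 0.5 * vocal_score + 0.5 * facial_score

def assess_state(score: float) -> str:
    """Map a fused affect score in [0, 1] onto a discrete user state
    using illustrative thresholds."""
    if score < 0.3:
        return "neutral"
    if score < 0.7:
        return "smile"
    return "laugh"

def select_feedback(current: str, target: str = "laugh") -> str:
    """Pick audiovisual feedback intended to move the user toward the
    target state, or to sustain it once reached."""
    if current == target:
        return "sustain"   # keep the user laughing
    return "amplify"       # escalate stimuli to elicit more laughter

def mirror_cycle(vocal_score: float, facial_score: float) -> tuple[str, str]:
    """One sensing -> interpreting -> reacting iteration of the mirror."""
    state = assess_state(fuse(vocal_score, facial_score))
    return state, select_feedback(state)
```

Running repeated `mirror_cycle` calls on fresh sensor scores yields the mutual user-mirror action-reaction loop the abstract describes: the feedback changes the user's behavior, which changes the next cycle's scores.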