Gaze-X: adaptive, affective, multimodal interface for single-user office scenarios
ICMI'06/IJCAI'07 Proceedings of the ICMI 2006 and IJCAI 2007 international conference on Artificial intelligence for human computing
This paper describes an intelligent system, named Gaze-X, that we developed to support affective multimodal human-computer interaction (AMM-HCI), in which the user's actions and emotions are modeled and then used to adapt the interface and support the user in his or her activity. Gaze-X is based on sensing and interpreting the human part of the computer's context, known as W5+ (who, where, what, when, why, how). It integrates natural human communicative modalities, including speech, eye-gaze direction, face, and facial expression, with standard HCI modalities such as keystrokes, mouse movements, and active-software identification; these inputs feed decision-making processes that adapt the interface to the user's activity and preferences. To attain a system that can be educated, one that improves its knowledge and decision making through experience, we use case-based reasoning as the inference engine of Gaze-X. The case base is a dynamic, incrementally self-organizing, event-content-addressable memory that supports fact retrieval and evaluation of encountered events based on the user's preferences and on generalizations formed from prior input. To support concurrency, modularity, scalability, persistency, and mobility, Gaze-X is built as an agent-based system in which different agents are responsible for different parts of the processing. A usability study conducted with a number of users in an office scenario indicates that Gaze-X is perceived as effective, easy to use, useful, and of high affective quality.
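The case-based-reasoning cycle described above (retrieve a similar past case, reuse its adaptation, retain the new case) can be illustrated with a minimal sketch. This is not the Gaze-X implementation; the feature names, the `CaseBase` class, and the overlap-based similarity measure are hypothetical simplifications of the paper's event-content-addressable memory:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    # Observed user context (hypothetical W5+-style features) and the
    # interface adaptation that suited it.
    features: dict   # e.g. {"gaze": "screen", "expression": "frown"}
    adaptation: str  # e.g. "offer_help"

@dataclass
class CaseBase:
    cases: list = field(default_factory=list)

    def similarity(self, a: dict, b: dict) -> float:
        # Fraction of feature keys whose values agree in both contexts.
        keys = set(a) | set(b)
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 0.0

    def retrieve(self, query: dict):
        # RETRIEVE: return the stored case most similar to the query context.
        return max(self.cases,
                   key=lambda c: self.similarity(c.features, query),
                   default=None)

    def retain(self, case: Case) -> None:
        # RETAIN: grow the case base incrementally with a newly solved case.
        self.cases.append(case)

cb = CaseBase()
cb.retain(Case({"gaze": "screen", "expression": "frown"}, "offer_help"))
cb.retain(Case({"gaze": "away", "expression": "neutral"}, "pause_notifications"))

# REUSE: apply the adaptation of the nearest past case to a new observation.
best = cb.retrieve({"gaze": "screen", "expression": "frown", "speech": "none"})
print(best.adaptation)  # -> offer_help
```

In the paper's system the memory is additionally self-organizing and evaluates events against learned user preferences; the sketch keeps only the retrieve/reuse/retain skeleton that lets such a system improve through experience.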