Toward multimodal fusion of affective cues
Proceedings of the 1st ACM international workshop on Human-centered multimedia
This paper concerns the multimodal inference of complex mental states in the intelligent tutoring domain. The aim is to provide intervention strategies in response to a detected mental state, with the goal of keeping the student in a positive affective state to maximize learning potential. The research follows an ethnographic approach to determine the affective states that naturally occur in interactions between students and computers. The multimodal inference component will be evaluated on video and audio recordings taken during classroom sessions, and further experiments will assess the affect component and the educational impact of the intelligent tutor.