The human voice encodes a wealth of information about emotion, mood, stress, and mental state. With mobile phones (among the most widely used devices in body area networks), this information is potentially available to a host of applications and can enable richer, more appropriate, and more satisfying human-computer interaction. In this paper we describe the AMMON (Affective and Mental health MONitor) library, a low-footprint C library designed for widely available phones as an enabler of these applications. The library incorporates both core features for emotion recognition (from the Interspeech 2009 Emotion Recognition Challenge) and the most important features for mental-health analysis (glottal timing features). To run the library comfortably on feature phones (the most widely used class of phones today), we implemented the routines in fixed-point arithmetic and minimized the computational and memory footprint. On identical test data, emotion and stress classification accuracy was indistinguishable from that of a state-of-the-art reference system running on a PC: 75% accuracy on two-class emotion classification tasks and 84% accuracy on binary classification of stressed versus neutral situations. The library uses 30% of real time on a 1 GHz processor during emotion recognition and 70% during stress and mental-health analysis.