StressSense: detecting stress in unconstrained acoustic environments using smartphones

  • Authors:
  • Hong Lu; Denise Frauendorfer; Mashfiqui Rabbi; Marianne Schmid Mast; Gokul T. Chittaranjan; Andrew T. Campbell; Daniel Gatica-Perez; Tanzeem Choudhury

  • Affiliations:
  • Intel Lab; University of Neuchâtel; Cornell University; University of Neuchâtel; EPFL; Dartmouth College; Idiap and EPFL; Cornell University

  • Venue:
  • Proceedings of the 2012 ACM Conference on Ubiquitous Computing
  • Year:
  • 2012

Abstract

Stress can have long-term adverse effects on individuals' physical and mental well-being. Changes in the speech production process are among the many physiological changes that occur during stress. Microphones, embedded in mobile phones and carried ubiquitously by people, provide the opportunity to continuously and non-invasively monitor stress in real-life situations. We propose StressSense for unobtrusively recognizing stress from the human voice using smartphones. We investigate methods for adapting a one-size-fits-all stress model to individual speakers and scenarios. We demonstrate that the StressSense classifier can robustly identify stress across multiple individuals in diverse acoustic environments: using model adaptation, StressSense achieves 81% and 76% accuracy in indoor and outdoor environments, respectively. We show that StressSense can be implemented on commodity Android phones and run in real time. To the best of our knowledge, StressSense is the first system to consider voice-based stress detection and model adaptation in diverse real-life conversational situations using smartphones.