AnalyzeThis: unobtrusive mental health monitoring by voice

  • Authors:
  • Keng-hao Chang;Matthew K. Chan;John Canny

  • Affiliations:
  • University of California, Berkeley, Berkeley, CA, USA;University of California, Berkeley, Berkeley, CA, USA;University of California, Berkeley, Berkeley, CA, USA

  • Venue:
  • CHI '11 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2011


Abstract

Mental illness is one of the most undertreated health problems worldwide. Previous work has shown that there are remarkably strong cues to mental illness in short samples of the voice. These cues are evident in severe forms of illness, but it would be most valuable to make earlier diagnoses from a richer feature set. Furthermore, there is an abstraction gap between these voice cues and the diagnostic cues used by practitioners. We believe that by closing this gap, we can build more effective early diagnostic systems for mental illness. In order to develop improved monitoring, we need to translate the high-level cues used by practitioners into features that can be analyzed using signal processing and machine learning techniques. In this paper we describe the elicitation process that we used to tap the practitioners' knowledge. We borrow from both AI (expert systems) and HCI (contextual inquiry) in order to perform this knowledge transfer. The paper highlights an unusual and promising role for HCI: the analysis of interaction data for health diagnosis.
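
The abstract does not specify the paper's feature set or tooling. As a purely illustrative sketch of the kind of low-level acoustic features that signal processing pipelines typically extract from short voice samples (pitch and energy statistics), the following assumes the librosa library; the function name, feature choices, and file path are hypothetical, not the authors' method.

```python
# Illustrative sketch only: simple pitch/energy summary statistics for one
# recording. librosa and the specific features below are assumptions for
# demonstration, not the feature set used in the paper.
import librosa
import numpy as np

def voice_features(path):
    """Return basic pitch and energy statistics for a short voice sample."""
    y, sr = librosa.load(path, sr=None)  # keep the file's native sample rate

    # Fundamental frequency (F0) via probabilistic YIN; NaN on unvoiced frames.
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sr,
    )

    # Frame-level energy (root-mean-square amplitude).
    rms = librosa.feature.rms(y=y)[0]

    return {
        "f0_mean": float(np.nanmean(f0)),             # average pitch
        "f0_std": float(np.nanstd(f0)),               # pitch variability
        "voiced_ratio": float(np.mean(voiced_flag)),  # fraction of voiced frames
        "rms_mean": float(np.mean(rms)),              # average loudness
        "rms_std": float(np.std(rms)),                # loudness variability
    }

# Example usage with a hypothetical file:
# print(voice_features("sample_utterance.wav"))
```

Summary statistics like these could then serve as inputs to a machine learning classifier; mapping them onto the higher-level diagnostic cues that practitioners use is the abstraction gap the paper discusses.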