Affective speech interface in serious games for supporting therapy of mental disorders

  • Authors:
  • Theodoros Kostoulas, Iosif Mporas, Otilia Kocsis, Todor Ganchev, Nikos Katsaounos, Juan J. Santamaria, Susana Jimenez-Murcia, Fernando Fernandez-Aranda, Nikos Fakotakis

  • Affiliations:
  • Wire Communications Laboratory, Department of Electrical and Computer Engineering, University of Patras, 26500 Rion-Patras, Greece
  • Department of Psychiatry, University Hospital of Bellvitge-IDIBELL and Ciber Fisiopatologia Obesidad y Nutricion (CIBEROBN), 08907 Barcelona, Spain

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2012

Abstract

We describe the novel design, implementation and evaluation of a speech interface that is part of a platform for developing serious games supporting the cognitive-based treatment of patients with mental disorders. The speech interface comprises two components: speech recognition and emotion recognition from speech. Its implementation is based on the Olympus/RavenClaw framework, which was extended for the needs of the specific serious games and the respective application domain by integrating new components, such as emotion recognition from speech. The interface was evaluated on a purposely collected, domain-specific dataset. The speech recognition experiments show that emotional speech only moderately affects the performance of the speech interface. Furthermore, the emotion detectors demonstrated satisfactory performance for the emotional states of interest, Anger and Boredom, and contributed to successful modelling of the patient's emotional state. Overall, the performance achieved for both speech recognition and the detection of the target emotional states was satisfactory. A recent evaluation of the serious games showed that patients began to adopt new strategies for coping with negative emotions in normal, stressful life situations.
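
As a rough illustration of the detector-per-emotion design described in the abstract, the sketch below trains independent binary detectors for Anger and Boredom over utterance-level acoustic features. It is a minimal, hypothetical example rather than the authors' implementation: the feature extraction, the choice of classifier, and the integration with the Olympus/RavenClaw dialogue manager are abstracted away, and the features here are random placeholders.

# Minimal sketch (not the authors' implementation): one binary detector per
# target emotional state (Anger, Boredom), operating on utterance-level
# acoustic features. Features are random placeholders standing in for
# prosodic/spectral statistics extracted from speech.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def utterance_features(n_utterances: int, n_dims: int = 384) -> np.ndarray:
    """Placeholder for utterance-level acoustic feature vectors."""
    return rng.normal(size=(n_utterances, n_dims))

def train_detector(pos: np.ndarray, neg: np.ndarray):
    """Train a one-vs-rest detector for a single emotional state."""
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    clf = make_pipeline(StandardScaler(), SVC(probability=True))
    return clf.fit(X, y)

# One detector per emotional state of interest.
detectors = {
    "anger": train_detector(utterance_features(50), utterance_features(200)),
    "boredom": train_detector(utterance_features(50), utterance_features(200)),
}

def detect_emotions(features: np.ndarray, threshold: float = 0.5) -> dict:
    """Return, per state, whether it is detected for one utterance."""
    return {
        name: bool(clf.predict_proba(features[None, :])[0, 1] >= threshold)
        for name, clf in detectors.items()
    }

print(detect_emotions(utterance_features(1)[0]))

In a dialogue-system setting such as the one described, the output of detectors like these would be passed alongside the recognized text to the dialogue manager, so that the game can adapt its behaviour to the patient's estimated emotional state.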