Multimodal and mobile conversational Health and Fitness Companions

  • Authors:
  • Markku Turunen; Jaakko Hakulinen; Olov Ståhl; Björn Gambäck; Preben Hansen; Mari C. Rodríguez Gancedo; Raúl Santos de la Cámara; Cameron Smith; Daniel Charlton; Marc Cavazza

  • Affiliations:
  • Department of Computer Sciences, University of Tampere, Kanslerinrinne 1, 33014 Tampere, Finland
  • SICS, Swedish Institute for Computer Science AB, Box 1263, 164 29 Kista, Sweden
  • Department of Computer and Information Science, Norwegian University of Science and Technology, Sem Sælands ...
  • Telefonica I+D, C/Emilio Vargas 6, 28043 Madrid, Spain
  • School of Computing, University of Teesside, Middlesbrough TS1 3BA, United Kingdom

  • Venue:
  • Computer Speech and Language
  • Year:
  • 2011

Abstract

Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. This paper describes how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. We present concrete system architectures; virtual, physical and mobile multimodal interfaces; and interaction management techniques for such Companions. In particular, we show how knowledge representation and the separation of low-level interaction modelling from high-level reasoning at the domain level make it possible to implement distributed, yet coherent, interaction with Companions. The distribution is enabled by a dialogue plan that communicates information from the domain-level planner to the dialogue manager, and from there to a separate mobile interface. This model enables each part of the system to handle the same information from its own perspective without containing overlapping logic, and makes it possible to separate task-specific from conversational dialogue management. In addition to the technical descriptions, results from the first evaluations of the Companion interfaces are presented.
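The dialogue-plan hand-off described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's actual implementation: a domain-level planner emits a plan, the dialogue manager realises plan steps as spoken prompts, and a separate mobile interface renders the same steps as screen items, with neither component duplicating the other's logic. All class, field and method names here are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    topic: str    # domain topic, e.g. "exercise"
    goal: str     # communicative goal, e.g. "ask" or "inform"
    content: str  # domain-level payload to convey or request

@dataclass
class DialoguePlan:
    steps: list = field(default_factory=list)

class DomainPlanner:
    """High-level reasoning: decides WHAT to communicate."""
    def plan(self) -> DialoguePlan:
        return DialoguePlan(steps=[
            PlanStep("exercise", "ask", "minutes walked today"),
            PlanStep("nutrition", "inform", "daily calorie target reached"),
        ])

class DialogueManager:
    """Low-level interaction: decides HOW to say it; no domain logic."""
    def realise(self, plan: DialoguePlan) -> list:
        prompts = []
        for s in plan.steps:
            if s.goal == "ask":
                prompts.append(f"Could you tell me about {s.content}?")
            else:
                prompts.append(f"Good news: {s.content}.")
        return prompts

class MobileInterface:
    """Renders the same plan from its own perspective: compact screen items."""
    def render(self, plan: DialoguePlan) -> list:
        return [f"[{s.topic}] {s.content}" for s in plan.steps]

# The same plan flows planner -> dialogue manager -> mobile interface.
plan = DomainPlanner().plan()
prompts = DialogueManager().realise(plan)
screen = MobileInterface().render(plan)
```

The point of the sketch is the separation of concerns: the planner never produces surface language, and the interfaces never reason about the domain, so the plan can be distributed to several front-ends without duplicating logic.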