Identification of relevant multimodal cues to enhance context-aware hearing instruments

  • Authors:
  • Bernd Tessendorf; Andreas Kettner; Daniel Roggen; Thomas Stiefmeier; Gerhard Tröster; Peter Derleth; Manuela Feilner

  • Affiliations:
  • Wearable Computing Lab., ETH Zurich, Zurich, Switzerland (Tessendorf, Kettner, Roggen, Stiefmeier, Tröster); Phonak AG, Stäfa, Switzerland (Derleth, Feilner)

  • Venue:
  • Proceedings of the 6th International Conference on Body Area Networks
  • Year:
  • 2011


Abstract

Today's state-of-the-art hearing instruments (HIs) adapt their sound processing only to the user's acoustic surroundings. Acoustic ambiguities limit the set of daily-life situations in which HIs can support the user adequately. State-of-the-art HIs feature body-area networking capabilities; thus, body-worn sensors could be used to recognize complex user contexts and enhance next-generation HIs. In this work, we identify, in a rich real-world data set, the mapping between the user's context (which can be recognized from body-worn sensors) and the user's current hearing wish. This mapping is the foundation for implementing recognition systems for the specific cues in next-generation HIs based on on-body sensor data. We discuss how the identified mapping allows selecting a priori distributions for hearing wishes and HI parameters such as the switching sensitivity. We conclude by deducing the sensory requirements for realizing the next generation of networked HIs.
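The abstract's central idea, using a recognized user context to supply an a priori distribution over hearing wishes that is then combined with acoustic evidence, can be sketched as follows. This is a minimal illustration, not the authors' method: the context labels, program names, and probability values are all hypothetical placeholders.

```python
# Hypothetical sketch: combining a context-derived prior over hearing
# wishes (HI programs) with acoustic classifier scores. All contexts,
# program names, and probabilities below are illustrative assumptions.

# A priori probabilities of hearing wishes, per context recognized
# from body-worn sensors.
CONTEXT_PRIORS = {
    "conversation": {"speech_in_noise": 0.7, "music": 0.1, "quiet": 0.2},
    "jogging":      {"speech_in_noise": 0.1, "music": 0.5, "quiet": 0.4},
    "office_work":  {"speech_in_noise": 0.3, "music": 0.1, "quiet": 0.6},
}

def select_program(context: str, acoustic_scores: dict) -> str:
    """Weight acoustic classifier scores by the context prior and
    return the most likely HI program."""
    prior = CONTEXT_PRIORS.get(context)
    if prior is None:
        # Unknown context: fall back to acoustic evidence alone,
        # mirroring a purely acoustic state-of-the-art HI.
        return max(acoustic_scores, key=acoustic_scores.get)
    # Unnormalized posterior; normalization does not change the argmax.
    posterior = {p: prior.get(p, 0.0) * acoustic_scores.get(p, 0.0)
                 for p in acoustic_scores}
    return max(posterior, key=posterior.get)
```

For example, when the acoustic classifier is undecided between two programs, the context prior breaks the tie: in a "conversation" context, ambiguous scores would resolve to a speech-in-noise program. A switching-sensitivity parameter, as mentioned in the abstract, could additionally require the winning posterior to exceed the current program's by some margin before switching.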