A context-aware audio presentation method in wearable computing

  • Authors:
  • Shinichi Yataka;Kohei Tanaka;Tsutomu Terada;Masahiko Tsukamoto

  • Affiliations:
  • Kobe University, Nada, Kobe, Hyogo, Japan;Mitsubishi Electric Corporation, Tsukaguchi Honmachi, Amagasaki, Hyogo, Japan;Kobe University and PRESTO, Nada, Kobe, Hyogo, Japan;Kobe University, Nada, Kobe, Hyogo, Japan

  • Venue:
  • Proceedings of the 2011 ACM Symposium on Applied Computing
  • Year:
  • 2011

Abstract

Audio is one of the most widely applicable methods of information presentation in wearable computing environments: it can be used hands-free, requires only small devices such as earphones, and interferes with most tasks less than other methods such as visual displays. However, because the presented sound is often drowned out by ambient noise or conversation, the user is forced to turn up the volume to catch the audio information. We therefore propose an audio information presentation method that takes the user's context into account. In the proposed method, the system estimates the user's context from wearable sensors and a microphone, and then controls the presentation of audio information so that it remains clearly audible by adjusting the volume and the timing of presentation. The evaluation results confirmed the effectiveness of the proposed method.
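
The abstract describes a control loop that estimates ambient conditions from a microphone and then either boosts or defers audio playback. The sketch below is only an illustration of that general idea, not the authors' implementation: the class name, thresholds, RMS-based noise estimate, and the `play(message, gain)` callback are all assumptions made for the example.

```python
# Minimal sketch (assumed design, not the paper's method): adapt playback
# volume and timing to an ambient-noise estimate taken from a microphone.
import time
from collections import deque


class ContextAwareAudioPresenter:
    """Boost or defer audio notifications based on ambient noise level.

    All thresholds and the microphone-sample format are illustrative.
    """

    def __init__(self, quiet_rms=0.02, max_gain=4.0, defer_seconds=5.0):
        self.quiet_rms = quiet_rms        # ambient level considered "quiet"
        self.max_gain = max_gain          # cap on how much the volume is raised
        self.defer_seconds = defer_seconds
        self.pending = deque()            # messages waiting for a quieter moment

    def ambient_rms(self, samples):
        """Root-mean-square of recent microphone samples (floats in [-1, 1])."""
        if not samples:
            return 0.0
        return (sum(s * s for s in samples) / len(samples)) ** 0.5

    def present(self, message, mic_samples, play):
        """Play now with adjusted gain, or defer if the environment is too loud."""
        noise = self.ambient_rms(mic_samples)
        gain = min(self.max_gain, max(1.0, noise / self.quiet_rms))
        if gain < self.max_gain:
            play(message, gain)           # audible with a moderate volume boost
        else:
            self.pending.append((message, time.time()))  # too loud: wait

    def flush(self, mic_samples, play):
        """Replay deferred messages once it is quiet again or they expire."""
        noise = self.ambient_rms(mic_samples)
        now = time.time()
        while self.pending:
            message, queued_at = self.pending[0]
            if noise <= self.quiet_rms or now - queued_at > self.defer_seconds:
                self.pending.popleft()
                play(message, 1.0)
            else:
                break
```

In this sketch, `present` is called when new audio information arrives and `flush` is called periodically with fresh microphone samples; the paper's actual context estimation additionally uses wearable sensor data, which is not modeled here.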