Ultrasound-based movement sensing, gesture-, and context-recognition

  • Authors:
  • Hiroki Watanabe; Tsutomu Terada; Masahiko Tsukamoto

  • Affiliations:
  • Kobe University, Kobe, Hyogo, Japan; Kobe University/Japan Science and Technology Agency, Kobe, Hyogo, Japan; Kobe University, Kobe, Hyogo, Japan

  • Venue:
  • Proceedings of the 2013 International Symposium on Wearable Computers
  • Year:
  • 2013

Abstract

We propose an activity and context recognition method in which the user carries a neck-worn receiver comprising a microphone and wears small speakers on the wrists that emit ultrasound. The system recognizes gestures on the basis of the volume of the received sound and the Doppler effect: the former indicates the distance between the neck and the wrists, and the latter indicates the speed of the motions. Our approach thus substitutes ultrasound for the wired or wireless communication typically required in body-area motion sensing networks. The system also recognizes the room the user is in and the people nearby from ID signals generated by speakers placed in rooms and worn by people. A strength of the approach is that, for offline recognition, a simple audio recorder can serve as the receiver. We evaluate the approach in one scenario covering nine gestures/activities with 10 users. When there was no environmental sound generated by other people, the recognition rate was 87% on average. When there was environmental sound generated by other people, we compared the proposed ultrasound-based recognition, which uses only ultrasound features, against a standard approach that uses features of both the ultrasound and the environmental sound; the proposed approach achieved 65% accuracy versus 57% for the standard approach.
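
The paper itself does not include code; the following sketch only illustrates the sensing principle described in the abstract, under assumed parameters (a hypothetical 20 kHz carrier emitted from a wrist speaker and a 48 kHz microphone sampling rate). It estimates the received volume of the tone (a proxy for neck-to-wrist distance) and the Doppler shift of its spectral peak (a proxy for wrist speed) from one short audio frame using an FFT; it is not the authors' implementation.

```python
import numpy as np

def volume_and_doppler(frame, fs=48000, carrier_hz=20000, band_hz=400):
    """Estimate received tone volume and Doppler shift for one audio frame.

    frame      : 1-D array of microphone samples
    fs         : sampling rate in Hz (assumed 48 kHz)
    carrier_hz : ultrasound tone of the wrist speaker (assumed 20 kHz)
    band_hz    : half-width of the band searched around the carrier
    """
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)

    # Restrict to a narrow band around the carrier so that audible
    # environmental sound is largely ignored.
    band = (freqs >= carrier_hz - band_hz) & (freqs <= carrier_hz + band_hz)
    band_mag = spectrum[band]
    band_freqs = freqs[band]

    # Received volume (energy in the band) relates to neck-to-wrist distance.
    volume = float(np.sum(band_mag ** 2))

    # Offset of the spectral peak from the carrier is the Doppler shift,
    # which relates to the speed of the wrist motion.
    peak_freq = band_freqs[np.argmax(band_mag)]
    doppler_shift = float(peak_freq - carrier_hz)

    return volume, doppler_shift

# Usage example: a synthetic frame whose tone is shifted by +50 Hz,
# as a moving wrist might produce, yields a clearly positive Doppler estimate.
fs = 48000
t = np.arange(0, 0.05, 1.0 / fs)
frame = 0.3 * np.sin(2 * np.pi * 20050 * t)
print(volume_and_doppler(frame, fs=fs))
```

In the same spirit, the place/people recognition mentioned in the abstract could be sketched as detecting which assumed ID frequency band carries the most energy, but the actual ID signal design is specified in the paper, not here.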