Multi modal gesture identification for HCI using surface EMG

  • Authors:
  • Ganesh R. Naik; Dinesh K. Kumar; Sridhar P. Arjunan

  • Affiliations:
  • RMIT University, Melbourne, Australia (all authors)

  • Venue:
  • Proceedings of the 12th International Conference on Entertainment and Media in the Ubiquitous Era
  • Year:
  • 2008

Abstract

Gesture and speech are two of the most important modalities of human interaction, and there has been considerable research into incorporating them into natural HCI. Doing so involves challenges ranging from low-level signal processing of the multi-modal input to high-level interpretation of natural speech and gesture. This paper proposes novel methods to recognize hand gestures and unvoiced utterances from surface electromyogram (sEMG) signals originating from different muscles. The focus of this work is to establish a simple yet robust system that can be integrated to identify subtle, complex hand gestures and unvoiced speech commands for the control of prostheses and other computer-assisted devices. The proposed multi-modal system identifies hand gestures and silent utterances using Independent Component Analysis (ICA) and the integrated RMS (IRMS) of sEMG, respectively. The sEMG features were used to train a purpose-designed artificial neural network (ANN), and the system achieved an overall recognition accuracy of 90.33%.
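
The abstract gives no implementation details, but the pipeline it names (ICA-based separation of multi-channel sEMG, IRMS feature extraction, ANN classification) can be sketched minimally as below. This is an illustrative reconstruction, not the authors' code: the windowing in compute_irms, the network shape, and the placeholder data are assumptions, and scikit-learn's FastICA and MLPClassifier stand in for whatever ICA and ANN variants the paper actually used.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

def separate_sources(emg: np.ndarray, n_components: int = 4) -> np.ndarray:
    """Unmix multi-channel sEMG into independent muscle sources via FastICA.

    `emg` has shape (n_samples, n_channels); the number of components is an
    assumption, since the paper does not state it.
    """
    ica = FastICA(n_components=n_components, random_state=0)
    return ica.fit_transform(emg)  # shape (n_samples, n_components)

def compute_irms(window: np.ndarray, n_segments: int = 8) -> np.ndarray:
    """Integrated RMS feature: RMS of consecutive sub-segments of a window.

    The segmentation scheme is hypothetical; the abstract only says that the
    IRMS of sEMG is used for the unvoiced-utterance channel.
    """
    segments = np.array_split(window, n_segments, axis=0)
    rms = [np.sqrt(np.mean(seg ** 2, axis=0)) for seg in segments]
    return np.concatenate(rms)  # shape (n_segments * n_channels,)

# Placeholder training data: one IRMS feature vector per recorded window.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 32))   # (n_windows, n_features), synthetic
y = rng.integers(0, 4, size=120)     # 4 gesture/utterance classes, synthetic

# A small feed-forward ANN classifier trained on the sEMG features.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real setup, X would hold IRMS features computed from windows of the separated (or raw) sEMG channels, and y the corresponding gesture or utterance labels; the 90.33% figure reported in the paper refers to the authors' own data and architecture, not to this sketch.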