A Dual Mode Human-Robot Teleoperation Interface Based on Airflow in the Aural Cavity

  • Authors (Affiliations):
  • Ravi Vaidyanathan (University of Southampton, Southampton, UK; Naval Postgraduate School, Monterey, CA, USA; Case Western Reserve University, OH, USA)
  • Monique P. Fargues (Naval Postgraduate School, Monterey, CA, USA)
  • R. Serdar Kurcan (Naval Postgraduate School, Monterey, CA, USA)
  • Lalit Gupta (Southern Illinois University, Carbondale, IL, USA)
  • Srinivas Kota (Southern Illinois University, Carbondale, IL, USA)
  • Roger D. Quinn (Case Western Reserve University, OH, USA)
  • Dong Lin (Think-A-Move, Ltd., Beachwood, OH, USA)

  • Venue:
  • International Journal of Robotics Research
  • Year:
  • 2007


Abstract

Robot teleoperation systems have been limited in their utility by the need for operator motion, a lack of portability, and restriction to a single input modality. This article presents the design and construction of a dual-mode human-machine interface system for robot teleoperation that addresses all of these issues. The interface can direct robotic devices in response to tongue movement and/or speech without inserting any device in or near the oral cavity. The interface is centered on the unique properties of the human ear as an acoustic output device. Specifically, we present: (1) an analysis of the sensitivity of human ear canals as acoustic output devices; (2) the design of a new sensor for monitoring airflow in the aural canal; (3) pattern-recognition procedures for recognizing both speech and tongue movement from monitored aural flow, evaluated across several human test subjects; and (4) a conceptual design and simulation of the machine interface system. We believe this work will lay the foundation for a new generation of human-machine interface systems for all manner of robotic applications.
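
The abstract does not detail the pattern-recognition procedures of point (3). As a rough, illustrative sketch only, and not the authors' actual method, the snippet below shows one simple way such a classifier could be structured: band-pooled spectral features computed from a short window of aural-flow signal, classified with a nearest-centroid rule. The sampling rate, window length, command classes, and all function names are hypothetical assumptions introduced here for illustration.

```python
import numpy as np

# Hypothetical parameters (not from the paper): 8 kHz sampling,
# 0.5 s command windows, four tongue-movement command classes.
FS = 8000
WINDOW = FS // 2
CLASSES = ["left", "right", "up", "down"]

def spectral_features(signal, n_bands=32):
    """Log-magnitude spectrum of one aural-flow window, pooled into bands."""
    spectrum = np.abs(np.fft.rfft(signal, n=WINDOW))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.mean() for b in bands]))

def train_centroids(windows, labels):
    """Nearest-centroid model: mean feature vector per command class."""
    feats = np.array([spectral_features(w) for w in windows])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in CLASSES}

def classify(window, centroids):
    """Assign a new window to the class with the closest centroid."""
    f = spectral_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in data: noise plus a class-dependent spectral emphasis.
    train_x, train_y = [], []
    t = np.arange(WINDOW) / FS
    for i, c in enumerate(CLASSES):
        for _ in range(20):
            sig = rng.normal(size=WINDOW) + 2.0 * np.sin(2 * np.pi * (200 + 100 * i) * t)
            train_x.append(sig)
            train_y.append(c)
    model = train_centroids(train_x, train_y)
    test = rng.normal(size=WINDOW) + 2.0 * np.sin(2 * np.pi * 300 * t)
    print(classify(test, model))  # expected: "right" (300 Hz emphasis)
```

A real system of the kind described would train and validate per subject (or across the several test subjects mentioned in the abstract); this toy example uses synthetic signals and makes no claim about the features or classifiers actually used in the paper.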