Using vision, acoustics, and natural language for disambiguation

  • Authors and affiliations:
  • Benjamin Fransen (Naval Research Laboratory, Washington, DC)
  • Vlad Morariu (Naval Research Laboratory, Washington, DC; University of Maryland, College Park, MD)
  • Eric Martinson (Naval Research Laboratory, Washington, DC; Georgia Institute of Technology, Atlanta, GA)
  • Samuel Blisard (Naval Research Laboratory, Washington, DC; University of Missouri-Columbia, Columbia, MO)
  • Matthew Marge (Naval Research Laboratory, Washington, DC; University of Edinburgh, Edinburgh, Scotland)
  • Scott Thomas (Naval Research Laboratory, Washington, DC; University of Maryland, College Park, MD)
  • Alan Schultz (Naval Research Laboratory, Washington, DC)
  • Dennis Perzanowski (Naval Research Laboratory, Washington, DC)

  • Venue:
  • Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction
  • Year:
  • 2007


Abstract

Creating a human-robot interface is a daunting task. The capabilities and functionality of the interface depend on the robustness of many different sensor and input modalities. For example, object recognition poses problems for state-of-the-art vision systems. Speech recognition in noisy environments remains problematic for acoustic systems. Natural language understanding and dialog are often limited to specific domains and baffled by ambiguous or novel utterances. Plans based on domain-specific tasks limit the applicability of dialog managers. The types of sensors used limit spatial knowledge and understanding, and constrain cognitive issues such as perspective-taking.

In this research, we integrate several modalities, including vision, audition, and natural language understanding, to leverage the existing strengths of each modality and overcome individual weaknesses. We use visual, acoustic, and linguistic inputs in various combinations to solve such problems as the disambiguation of referents (objects in the environment), localization of human speakers, and determination of the source of utterances and the appropriateness of responses when humans and robots interact. For this research, we limit our consideration to the interaction of two humans and one robot in a retrieval scenario. This paper describes the system and the integration of the various modules prior to future testing.
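
As a rough illustration of the kind of multimodal fusion the abstract describes, the sketch below combines per-object confidence scores from vision, gesture, and language to choose the most likely referent of an ambiguous utterance. This is not the authors' implementation: the Candidate fields, the fixed weights, and the simple weighted-sum fusion are assumptions chosen only to make the idea concrete.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Candidate:
        """One candidate referent (an object in the environment)."""
        name: str
        vision_score: float    # how well the object matches the visual description
        gesture_score: float   # consistency with pointing/gaze direction
        language_score: float  # match between spoken description and object attributes

    def disambiguate(candidates: List[Candidate],
                     weights: Dict[str, float]) -> Candidate:
        """Return the candidate with the highest weighted evidence across modalities.

        Each modality contributes a score in [0, 1]; the weights encode how much
        each modality is trusted (e.g., vision might be down-weighted in clutter).
        """
        def fused(c: Candidate) -> float:
            return (weights["vision"] * c.vision_score
                    + weights["gesture"] * c.gesture_score
                    + weights["language"] * c.language_score)
        return max(candidates, key=fused)

    if __name__ == "__main__":
        # Two objects that both loosely match the utterance "the box on the left".
        candidates = [
            Candidate("red_box", vision_score=0.7, gesture_score=0.2, language_score=0.9),
            Candidate("blue_box", vision_score=0.6, gesture_score=0.8, language_score=0.9),
        ]
        weights = {"vision": 0.4, "gesture": 0.3, "language": 0.3}
        print("Resolved referent:", disambiguate(candidates, weights).name)

A fuller system along the lines sketched in the abstract would presumably replace the fixed weights with context-dependent confidences (for example, trusting acoustic speaker localization more when the visual field is occluded), but the weighted combination shown here captures the basic idea of letting one modality compensate for another's weakness.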