Integrating vision and audition within a cognitive architecture to track conversations

  • Authors:
  • J. Gregory Trafton; Magda D. Bugajska; Benjamin R. Fransen; Raj M. Ratwani

  • Affiliations:
  • NRL, Washington, DC, USA; NRL, Washington, DC, USA; NRL, Washington, DC, USA; NRL, Washington, DC, USA

  • Venue:
  • Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction
  • Year:
  • 2008


Abstract

We describe ACT-R/E (ACT-R/Embodied), a computational cognitive architecture for robots. ACT-R/E is based on ACT-R [1, 2] but uses different visual, auditory, and movement modules. We describe an ACT-R/E model that integrates visual and auditory information to track conversations in a dynamic environment, and we report an empirical evaluation study showing that people see the conversation-tracking system as extremely natural.
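
The abstract does not spell out how the model combines the two modalities, so the sketch below is only a hypothetical illustration of the general idea: fuse a visual estimate of where people are with an auditory estimate of where speech is coming from, then attribute the utterance to the nearest tracked person. The actual model runs inside ACT-R/E's architecture; the Python code, the `Percept` type, the confidence-weighted fusion, and the 15° tolerance are assumptions of ours, not the paper's mechanism.

```python
from dataclasses import dataclass


@dataclass
class Percept:
    """A single-modality estimate of a source's direction (hypothetical type)."""
    bearing_deg: float   # direction of the source relative to the robot, in degrees
    confidence: float    # reliability of this modality's estimate, in [0, 1]


def fuse_bearings(visual: Percept, auditory: Percept) -> float:
    """Confidence-weighted average of the visual and auditory bearings.
    This weighting scheme is an assumption, not the paper's mechanism."""
    total = visual.confidence + auditory.confidence
    return (visual.bearing_deg * visual.confidence
            + auditory.bearing_deg * auditory.confidence) / total


def attribute_speaker(fused_bearing, people, tolerance_deg=15.0):
    """Attribute the speech to the tracked person whose visual bearing lies
    closest to the fused estimate, if it falls within the tolerance."""
    name, bearing = min(people.items(), key=lambda kv: abs(kv[1] - fused_bearing))
    return name if abs(bearing - fused_bearing) <= tolerance_deg else None


# Example: two people in view; sound localization points roughly toward Alice.
people = {"Alice": -20.0, "Bob": 35.0}               # bearings from a face tracker
vision = Percept(bearing_deg=-20.0, confidence=0.9)  # sharp visual fix on a face
audio = Percept(bearing_deg=-12.0, confidence=0.6)   # coarser sound-source estimate
print(attribute_speaker(fuse_bearings(vision, audio), people))  # -> Alice
```

In a deployed system the confidences would come from the face tracker and the sound-source localizer themselves, and the fusion would run continuously as people move and take turns speaking.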