Tracking of facial features to support human-robot interaction

  • Authors:
  • Maria Pateraki; Haris Baltzakis; Polychronis Kondaxakis; Panos Trahanias

  • Affiliations:
  • Institute of Computer Science, Foundation for Research and Technology-Hellas, Crete, Greece (all authors)

  • Venue:
  • ICRA '09: Proceedings of the 2009 IEEE International Conference on Robotics and Automation
  • Year:
  • 2009


Abstract

In this paper we present a novel methodology for the detection and tracking of facial features such as eyes, nose, and mouth in image sequences. The proposed methodology is intended to support natural interaction with autonomously navigating robots that guide visitors in museums and exhibition centers and, more specifically, to provide input for the analysis of the facial expressions that humans utilize while engaged in various conversational states. For face and facial-feature region detection and tracking, we propose a methodology that combines appearance-based methods for recognition with feature-based methods for tracking. The face-tracking stage is based on Least Squares Matching (LSM), a matching technique able to effectively model both radiometric and geometric differences between image patches in different images. Compared with previous research, the LSM approach can thus overcome the problems of variable scene illumination and in-plane head rotation. Another significant characteristic of the proposed approach is that tracking on the image plane is performed only where laser range information suggests it is necessary. The resulting computational efficiency meets the real-time demands of human-robot interaction applications and hence facilitates the development of such systems.
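To illustrate the core idea behind LSM, the sketch below estimates a geometric shift together with a linear radiometric model (gain and offset) between a template and a search patch via Gauss-Newton iterations. This is a minimal, translation-only sketch for illustration; the paper's actual formulation (a full LSM would typically use an affine geometric model) and all function and parameter names here are assumptions, not the authors' code.

```python
import numpy as np

def lsm_track(template, patch, iters=20):
    """Minimal Least Squares Matching sketch (illustrative assumption):
    estimates a translation (dx, dy) plus a linear radiometric model
    (gain, offset) mapping `patch` onto `template`, via Gauss-Newton."""
    dx = dy = 0.0
    gain, offset = 1.0, 0.0
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    for _ in range(iters):
        # Resample the patch at the current geometric estimate (bilinear).
        sx = np.clip(xs + dx, 0, patch.shape[1] - 1.001)
        sy = np.clip(ys + dy, 0, patch.shape[0] - 1.001)
        x0, y0 = sx.astype(int), sy.astype(int)
        fx, fy = sx - x0, sy - y0
        p = (patch[y0, x0] * (1 - fx) * (1 - fy)
             + patch[y0, x0 + 1] * fx * (1 - fy)
             + patch[y0 + 1, x0] * (1 - fx) * fy
             + patch[y0 + 1, x0 + 1] * fx * fy)
        # Residuals of the radiometrically corrected patch vs. the template.
        r = (gain * p + offset - template).ravel()
        # Design matrix: partial derivatives w.r.t. dx, dy, gain, offset.
        gx = np.gradient(p, axis=1).ravel() * gain
        gy = np.gradient(p, axis=0).ravel() * gain
        A = np.stack([gx, gy, p.ravel(), np.ones(p.size)], axis=1)
        upd, *_ = np.linalg.lstsq(A, -r, rcond=None)
        dx += upd[0]; dy += upd[1]; gain += upd[2]; offset += upd[3]
        if np.abs(upd[:2]).max() < 1e-3:  # converged on the shift
            break
    return dx, dy, gain, offset
```

Because both geometric (shift) and radiometric (gain, offset) parameters are solved jointly in one least-squares system, the match stays stable under illumination changes that would defeat plain intensity correlation, which is the property the abstract attributes to LSM.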