A robust agent-based gesture tracking system

  • Authors:
  • Francis Quek; Robert Bryll

  • Year:
  • 2004


Abstract

Visual analysis of human motion, including areas such as hand gesture and face recognition, whole-body tracking, and activity recognition, has been a domain of very intensive research in recent years. The main general goal of this research is to improve man-machine interaction, but its possible uses go far beyond HCI. Specific applications of vision-based analysis of human motion include advanced user interfaces (e.g. gesture-driven control), motion analysis in sports and medicine (e.g. content-based indexing of video footage, clinical studies of orthopedic patients), psycholinguistic research, smart surveillance systems, virtual reality and entertainment (e.g. games, character animation, special effects in movies), and very low bit-rate video compression. Two additional applications studied in our research are improving speech recognition algorithms by incorporating gesture information and vision-based assessment of the effect of therapy on the motor performance of Parkinson's disease patients.

This thesis describes a novel agent-based gesture tracking system (called AgenTrac) that I developed in the Vision Interfaces and Systems Laboratory. The system is one of the key elements of a broader NSF-funded research project spanning multiple institutions and carried out in collaboration with psycholinguists and speech recognition researchers; the focus of this project is multimodal human interaction in conversational environments. My agent-based approach to the visual tracking of human hands and head represents a very useful "middle ground" between simple model-free tracking of human body parts and sophisticated model-based solutions. It combines the simplicity, speed, and flexibility of tracking without explicit shape models with the ability to utilize domain knowledge and to apply constraints characteristic of more complex model-based tracking approaches. (Abstract shortened by UMI.)
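To illustrate the "middle ground" idea in the abstract, here is a minimal sketch of an agent-based tracker: each agent follows one body part with a simple model-free update (snap to the nearest measurement), while a coordinator applies a domain constraint (hands must stay within arm's reach of the head). This is an illustrative assumption of how such a design could look, not the actual AgenTrac implementation; all names, coordinates, and the `reach` threshold are hypothetical.

```python
from dataclasses import dataclass


def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


@dataclass
class Agent:
    """One agent per tracked body part (hypothetical sketch)."""
    name: str
    pos: tuple  # current (x, y) estimate

    def update(self, candidates):
        # Model-free step: move to the nearest candidate measurement.
        self.pos = min(candidates, key=lambda c: dist(c, self.pos))


def track_frame(head, hands, measurements, reach=150.0):
    """Update all agents for one frame, applying a domain constraint:
    hand candidates must lie within `reach` pixels of the head."""
    head.update(measurements)
    for hand in hands:
        feasible = [m for m in measurements if dist(m, head.pos) <= reach]
        # Fall back to all measurements if the constraint empties the set.
        hand.update(feasible or measurements)


head = Agent("head", (100, 50))
left_hand = Agent("left_hand", (60, 120))
measurements = [(102, 48), (65, 118), (400, 400)]  # last one is clutter
track_frame(head, [left_hand], measurements)
print(head.pos, left_hand.pos)  # → (102, 48) (65, 118)
```

The clutter point at (400, 400) is rejected by the reach constraint, which is the kind of domain knowledge a purely model-free tracker could not express, yet no explicit shape model is needed.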