A belief-based sequential fusion approach for fusing manual signs and non-manual signals

  • Authors:
  • Oya Aran;Thomas Burger;Alice Caplier;Lale Akarun

  • Affiliations:
  • Department of Computer Engineering, Bogaziçi University, 34342 Istanbul, Turkey;France Telecom R&D, 28 ch. Vieux Chêne, 38240 Meylan, France;GIPSA-lab, 46 avenue Félix Viallet, 38031 Grenoble cedex 1, France;Department of Computer Engineering, Bogaziçi University, 34342 Istanbul, Turkey

  • Venue:
  • Pattern Recognition
  • Year:
  • 2009


Abstract

Most research on sign language recognition concentrates on recognizing only manual signs (hand gestures and shapes), discarding a very important component: the non-manual signals (facial expressions and head/shoulder motion). We address the recognition of signs with both manual and non-manual components using a sequential belief-based fusion technique. The manual components, which carry information of primary importance, are utilized in the first stage. The second stage, which makes use of non-manual components, is only employed if there is hesitation in the decision of the first stage. We employ belief formalism both to model the hesitation and to determine the sign clusters within which the discrimination takes place in the second stage. We have implemented this technique in a sign tutor application. Our results on the eNTERFACE'06 ASL database show an improvement over the baseline system, which uses parallel or feature-level fusion of manual and non-manual features: we achieve an accuracy of 81.6%.
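The two-stage decision scheme described in the abstract can be sketched as follows. This is an illustrative simplification, not the paper's actual belief-function formulation: the hesitation test here is a simple score margin, the cluster list is a hypothetical input, and the function and parameter names are assumptions for the sketch.

```python
import numpy as np

def sequential_fusion(manual_scores, nonmanual_scores, clusters,
                      hesitation_threshold=0.2):
    """Illustrative two-stage sequential fusion.

    manual_scores, nonmanual_scores: per-class confidence vectors
        (hypothetical stand-ins for the belief masses in the paper).
    clusters: list of sets of class indices that are easily confused
        by the manual-only classifier.
    """
    order = np.argsort(manual_scores)[::-1]
    top, runner_up = order[0], order[1]

    # Stage 1: accept the manual-only decision when it is confident,
    # i.e. when there is no "hesitation" between the top candidates.
    if manual_scores[top] - manual_scores[runner_up] >= hesitation_threshold:
        return int(top)

    # Stage 2: hesitation detected -- restrict the decision to the
    # cluster containing the top candidate and discriminate among its
    # members using the non-manual signals.
    cluster = next((c for c in clusters if top in c), {top})
    return int(max(cluster, key=lambda c: nonmanual_scores[c]))

# Example: the manual classifier hesitates between classes 0 and 1,
# so the non-manual scores decide within that cluster.
manual = np.array([0.40, 0.35, 0.25])
nonmanual = np.array([0.20, 0.70, 0.10])
decision = sequential_fusion(manual, nonmanual, clusters=[{0, 1}, {2}])
```

In this example the stage-1 margin (0.05) falls below the threshold, so the decision falls through to stage 2 and class 1 wins on its non-manual score; with a confident manual vector such as `[0.7, 0.2, 0.1]`, stage 1 would return class 0 directly and the non-manual signals would never be consulted.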