Constraints on the use of language, gesture and speech for multimodal dialogues

  • Authors:
  • Bertrand Gaiffe;Laurent Romary

  • Affiliations:
  • CRIN-CNRS & INRIA Lorraine, Vandœuvre-lès-Nancy; CRIN-CNRS & INRIA Lorraine, Vandœuvre-lès-Nancy

  • Venue:
  • Referring Phenomena '97: Referring Phenomena in a Multimedia Context and their Computational Treatment
  • Year:
  • 1997

Abstract

In the domain of natural language understanding, and more precisely man-machine dialogue design, there are two lines of research which have usually remained rather separate. On the one hand, many studies have tackled the problem of interpreting spatial references expressed in verbal utterances, focusing in particular on the different geometric or functional constraints bound to the existence of a 'source' (or site) element in relation to which a 'target' is situated. Such studies are usually based upon fine-grained linguistic descriptions for different languages (Vandeloise, 1986). On the other hand, the problem raised by the integration of a gestural mode into classical NL interfaces has yielded specific research on the association of demonstrative or deictic NPs with designation gestures, as initiated by Bolt some two decades ago (cf. Thorisson et al., 1992; Bellalem and Romary, 1995). Our aim in this paper is to show that the different phenomena described in the context of spatial reference or multimodal interaction should not necessarily be considered as two independent issues, but should rather be analysed in a unified way to account for the fact that they are both based on linguistic and perceptual data.