Modeling context for referring in multimodal dialogue systems

  • Authors: Frédéric Landragin
  • Affiliation: Thales Research and Technology, Orsay Cedex, France
  • Venue: CONTEXT'05 Proceedings of the 5th International Conference on Modeling and Using Context
  • Year: 2005


Abstract

The way we see the objects around us determines the speech and gestures we use to refer to them. The gestures we produce structure our visual perception, and the words we use influence the way we see. Visual perception, language, and gesture thus interact in multiple ways. The problem is global and must be tackled as a whole in order to understand the complexity of reference phenomena and to derive a formal model. Such a model can be useful for any man-machine dialogue system that aims at deep comprehension. We show how a referring act takes place within a contextual subset of objects, called a 'reference domain,' and we present the 'multimodal reference domain' model, which a dialogue system can exploit during interpretation.
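To illustrate the idea of a 'reference domain' as a contextual subset of objects, the following is a minimal sketch, not taken from the paper: all class names, the `salience` attribute, and the resolution strategy (restrict the domain by a linguistic category, then pick the most salient object, e.g. one highlighted by a pointing gesture) are hypothetical simplifications of the model described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    category: str       # linguistic category, e.g. "triangle"
    salience: float     # hypothetical score, raised by a pointing gesture

@dataclass
class ReferenceDomain:
    """A contextual subset of objects within which a referring act is interpreted."""
    objects: list

    def restrict(self, category):
        # Narrow the domain to objects matching the category expressed in speech.
        return ReferenceDomain([o for o in self.objects if o.category == category])

    def resolve(self):
        # Pick the most salient object, if any (e.g. the gesture's target).
        return max(self.objects, key=lambda o: o.salience) if self.objects else None

# Usage: interpreting "this triangle" accompanied by a pointing gesture near t2.
scene = ReferenceDomain([
    Obj("t1", "triangle", 0.2),
    Obj("t2", "triangle", 0.9),  # the gesture raised this object's salience
    Obj("c1", "circle", 0.1),
])
referent = scene.restrict("triangle").resolve()
print(referent.name)  # → t2
```

The sketch shows only the interplay the abstract names: speech contributes the category restriction, gesture contributes salience, and both operate within one contextual subset rather than over the whole scene.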