Towards generation of fluent referring action in multimodal situations

  • Authors:
  • Tsuneaki Kato; Yukiko I. Nakano

  • Affiliations:
  • NTT Information and Communication Systems Labs., Kanagawa, Japan (both authors)

  • Venue:
  • ReferringPhenomena '97: Referring Phenomena in a Multimedia Context and their Computational Treatment
  • Year:
  • 1997

Abstract

Referring actions in multimodal situations can be thought of as linguistic expressions closely coordinated with physical actions. Based on corpus examinations, this paper reports which patterns of linguistic expression are commonly used and how physical actions are temporally coordinated with them. In particular, by categorizing objects according to two features, visibility and membership, schematic patterns of referring expressions are derived. The difference in the occurrence frequencies of these patterns between a multimodal situation and a spoken-mode situation explains the findings of our previous research. Implementation based on these results is ongoing.
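
The abstract describes selecting referring-expression patterns from two object features, visibility and membership. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the pattern strings and the feature-to-pattern mapping are assumed placeholders.

```python
# Illustrative sketch (assumed, not from the paper): pick a schematic
# referring-expression pattern based on two features of the referent.
from dataclasses import dataclass


@dataclass
class ReferentFeatures:
    visible: bool       # is the object currently visible to the hearer?
    in_known_set: bool  # is it a member of a previously introduced group?


def choose_pattern(f: ReferentFeatures) -> str:
    """Return a (hypothetical) schematic pattern for the given features."""
    if f.visible and f.in_known_set:
        return "this <member-description>"            # deictic, may pair with pointing
    if f.visible and not f.in_known_set:
        return "this <full-description>"
    if not f.visible and f.in_known_set:
        return "the <member-description> <locating-action>"
    return "a <full-description> <introducing-action>"


if __name__ == "__main__":
    print(choose_pattern(ReferentFeatures(visible=True, in_known_set=False)))
```

In the paper's account, such linguistic patterns would additionally be temporally coordinated with physical actions (e.g. pointing); that timing dimension is omitted here.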