Comparing local and sequential models for statistical incremental natural language understanding

  • Authors:
  • Silvan Heintze; Timo Baumann; David Schlangen

  • Affiliations:
  • University of Potsdam, Germany; University of Potsdam, Germany; University of Potsdam, Germany

  • Venue:
  • SIGDIAL '10 Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue

  • Year:
  • 2010

Abstract

Incremental natural language understanding is the task of assigning semantic representations to successively larger prefixes of utterances. We compare two types of statistical models for this task: a) local models, which predict a single class for an input; and b) sequential models, which align a sequence of classes to a sequence of input tokens. We show that, with some modifications, the first type of model can be improved and made to approximate the output of the second, even though the latter is more informative. We show on two different data sets that both types of model achieve comparable performance (significantly better than a baseline), with the first type requiring simpler training data. Results for the first type of model have been reported in the literature; we show that, for our kind of data, our more sophisticated variant of that model performs better.
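
To make the contrast between the two model types concrete, the following is a minimal, self-contained Python sketch, not the authors' implementation: the example utterances, frame labels, tag set, and frequency-count classifiers are invented for illustration. A local model maps an entire utterance prefix to a single frame label, while a sequential model assigns one tag per input token; both are then queried incrementally on successively larger prefixes.

    from collections import Counter, defaultdict

    # Toy training data (invented for illustration): token sequences with
    # (a) one frame label per utterance, for the local model, and
    # (b) one tag per token, for the sequential model.
    UTTERANCES = [
        (["take", "the", "red", "cross"], "take(red_cross)",
         ["action", "other", "color", "shape"]),
        (["delete", "the", "green", "circle"], "delete(green_circle)",
         ["action", "other", "color", "shape"]),
    ]

    class LocalModel:
        """Predicts a single class for the whole input prefix (bag of words)."""
        def __init__(self):
            self.word_class_counts = defaultdict(Counter)

        def train(self, data):
            for tokens, frame, _tags in data:
                for tok in tokens:
                    self.word_class_counts[tok][frame] += 1

        def predict(self, prefix):
            votes = Counter()
            for tok in prefix:
                votes.update(self.word_class_counts.get(tok, Counter()))
            return votes.most_common(1)[0][0] if votes else None

    class SequentialModel:
        """Aligns one tag to each token; the prefix meaning is the tag sequence."""
        def __init__(self):
            self.word_tag_counts = defaultdict(Counter)

        def train(self, data):
            for tokens, _frame, tags in data:
                for tok, tag in zip(tokens, tags):
                    self.word_tag_counts[tok][tag] += 1

        def predict(self, prefix):
            return [self.word_tag_counts[t].most_common(1)[0][0]
                    if self.word_tag_counts[t] else "other"
                    for t in prefix]

    local, seq = LocalModel(), SequentialModel()
    local.train(UTTERANCES)
    seq.train(UTTERANCES)

    # Incremental NLU: feed successively larger prefixes of a new utterance
    # and inspect how each model's output evolves as more tokens arrive.
    utterance = ["take", "the", "green", "circle"]
    for end in range(1, len(utterance) + 1):
        prefix = utterance[:end]
        print(prefix, "->", local.predict(prefix), "|", seq.predict(prefix))

The sketch only illustrates the difference in output granularity: the local model commits to a single frame for every prefix, whereas the sequential model's token-aligned tags show which parts of the prefix contribute which pieces of the interpretation.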