Dialogue act modeling for automatic tagging and recognition of conversational speech

  • Authors:
  • Andreas Stolcke;Noah Coccaro;Rebecca Bates;Paul Taylor;Carol Van Ess-Dykema;Klaus Ries;Elizabeth Shriberg;Daniel Jurafsky;Rachel Martin;Marie Meteer

  • Affiliations:
  • SRI International;University of Colorado at Boulder;University of Washington;University of Edinburgh;U.S. Department of Defense;Carnegie Mellon University and University of Karlsruhe;SRI International;University of Colorado at Boulder;Johns Hopkins University;BBN Technologies

  • Venue:
  • Computational Linguistics
  • Year:
  • 2000

Abstract

We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as STATEMENT, QUESTION, BACKCHANNEL, AGREEMENT, DISAGREEMENT, and APOLOGY. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
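The core model described above — dialogue acts as hidden states, a dialogue act n-gram as the transition model, and lexical/prosodic likelihoods as emissions — can be sketched as a standard Viterbi decode. The act inventory, transition table, and likelihood values below are toy assumptions for illustration (the paper's actual model uses a much larger Switchboard-trained inventory and likelihoods from word n-grams, decision trees, and neural networks):

```python
import math

# Hypothetical three-act inventory (the paper uses a far richer tag set).
ACTS = ["STATEMENT", "QUESTION", "BACKCHANNEL"]

# Toy dialogue act bigram "grammar": P(act_t | act_{t-1}).
TRANS = {
    "STATEMENT":   {"STATEMENT": 0.6, "QUESTION": 0.2, "BACKCHANNEL": 0.2},
    "QUESTION":    {"STATEMENT": 0.7, "QUESTION": 0.1, "BACKCHANNEL": 0.2},
    "BACKCHANNEL": {"STATEMENT": 0.5, "QUESTION": 0.3, "BACKCHANNEL": 0.2},
}
INIT = {"STATEMENT": 0.5, "QUESTION": 0.3, "BACKCHANNEL": 0.2}

def viterbi(likelihoods):
    """likelihoods: list of dicts mapping act -> P(evidence_t | act),
    one per utterance. Returns the most probable act sequence."""
    # Log-space forward pass with backpointers.
    delta = {a: math.log(INIT[a]) + math.log(likelihoods[0][a]) for a in ACTS}
    backptrs = []
    for obs in likelihoods[1:]:
        new_delta, ptrs = {}, {}
        for a in ACTS:
            prev, score = max(
                ((p, delta[p] + math.log(TRANS[p][a])) for p in ACTS),
                key=lambda x: x[1],
            )
            new_delta[a] = score + math.log(obs[a])
            ptrs[a] = prev
        backptrs.append(ptrs)
        delta = new_delta
    # Backtrace from the best final state.
    best = max(delta, key=delta.get)
    path = [best]
    for ptrs in reversed(backptrs):
        best = ptrs[best]
        path.append(best)
    return list(reversed(path))

# Toy per-utterance likelihoods P(words, prosody | act), standing in for
# the paper's word n-gram and prosodic models.
evidence = [
    {"STATEMENT": 0.7, "QUESTION": 0.2, "BACKCHANNEL": 0.1},  # "i went there"
    {"STATEMENT": 0.1, "QUESTION": 0.1, "BACKCHANNEL": 0.8},  # "uh-huh"
    {"STATEMENT": 0.2, "QUESTION": 0.7, "BACKCHANNEL": 0.1},  # "did you like it"
]
print(viterbi(evidence))  # → ['STATEMENT', 'BACKCHANNEL', 'QUESTION']
```

The discourse-coherence effect the abstract mentions shows up here through the transition table: an utterance with ambiguous evidence can be pulled toward the act that best follows its predecessor.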