User-Centered Modeling for Spoken Language and Multimodal Interfaces

  • Authors:
  • Sharon Oviatt

  • Affiliations:
  • Center for Human-Computer Communication, Oregon Graduate Institute of Science and Technology

  • Venue:
  • IEEE MultiMedia
  • Year:
  • 1996


Abstract

By modeling difficult sources of linguistic variability in spontaneous speech and language, interfaces can be designed that transparently guide human input to match system processing capabilities. Such work is yielding more user-centered and robust interfaces for next-generation spoken language and multimodal systems.

Readers may contact Oviatt at the Center for Human-Computer Communication, Dept. of Computer Science and Engineering, Oregon Graduate Institute of Science and Technology, PO Box 91000, Portland, OR 97291; e-mail oviatt@cse.ogi.edu.