Semantic information processing of spoken language: how may I help you?

  • Authors: Allen Gorin
  • Affiliations: AT&T Laboratories, Florham Park, NJ
  • Venue: Proceedings of the 8th international conference on Intelligent user interfaces
  • Year: 2003

Abstract

The next generation of voice-based user interface technology will enable easy-to-use automation of new and existing communication services, achieving a more natural human-machine interaction. By natural, we mean that the machine understands what people actually say, in contrast to what a system designer expects them to say. This approach contrasts with menu-driven or strongly prompted systems, whose highly structured interactions many users are unable or unwilling to navigate. AT&T's How May I Help You? (HMIHY)(sm) technology shifts the burden from human to machine: the system adapts to people's language rather than forcing users to learn the machine's jargon. We have developed algorithms which learn to extract meaning from fluent speech via automatic acquisition and exploitation of salient words, phrases and grammar fragments from a corpus. In this talk I will describe the speech, language and dialog technology underlying HMIHY, plus an experimental evaluation on live customer traffic from AT&T's national deployment for customer care.

Allen Gorin is the Head of the Speech Interface Research Department at AT&T Laboratories, with long-term research interests focusing on machine learning methods for spoken language understanding. In recent years, he has led a research team in applying speech, language and dialog technology to AT&T's "How May I Help You?" (HMIHY)(sm) service, which has been deployed nationally for long-distance customer care. He was awarded the 2002 AT&T Science and Technology Medal for his research contributions to spoken language understanding for HMIHY.

He received the B.S. and M.A. degrees in Mathematics from SUNY at Stony Brook, and the Ph.D. in Mathematics from the CUNY Graduate Center in 1980. From 1980 to 1983 he worked at Lockheed, investigating algorithms for target recognition from time-varying imagery. In 1983 he joined AT&T Bell Labs, where he was the Principal Investigator for AT&T's ASPEN project within the DARPA Strategic Computing Program, investigating parallel architectures and algorithms for pattern recognition. In 1987 he was appointed a Distinguished Member of the Technical Staff, and in 1988 he joined the Speech Research Department at Bell Labs. He has served as a guest editor for the IEEE Transactions on Speech and Audio, and was a visiting researcher at the ATR Interpreting Telecommunications Research Laboratory in Japan. He is a member of the Acoustical Society of America and the Association for Computational Linguistics, and an IEEE Senior Member.

Home page for Allen Gorin: http://www.research.att.com/info/algor
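
As a concrete illustration of the salience-based phrase acquisition the abstract describes, the sketch below ranks phrase fragments by an information-theoretic salience score: the divergence between the call-type distribution observed when a fragment occurs and the call-type prior. This is a minimal sketch of one plausible salience measure, not the deployed HMIHY algorithm; the toy corpus, the call-type labels, and the names (salient_fragments, ngrams) are all illustrative assumptions.

    import math
    from collections import Counter

    def ngrams(tokens, n):
        # All contiguous n-grams of a token list.
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def salient_fragments(corpus, max_n=3, min_count=2):
        # corpus: list of (utterance_tokens, call_type) pairs.
        # A fragment's score is the KL divergence between P(call type | fragment)
        # and the prior P(call type) -- one plausible reading of "salience",
        # not the deployed HMIHY measure.
        total = len(corpus)
        type_counts = Counter(ct for _, ct in corpus)
        frag_counts = Counter()
        joint_counts = Counter()
        for tokens, ct in corpus:
            seen = set()  # count each fragment once per utterance
            for n in range(1, max_n + 1):
                seen.update(ngrams(tokens, n))
            for frag in seen:
                frag_counts[frag] += 1
                joint_counts[frag, ct] += 1
        scores = {}
        for frag, f_count in frag_counts.items():
            if f_count < min_count:  # skip rare fragments
                continue
            score = 0.0
            for ct, t_count in type_counts.items():
                p_c_given_f = joint_counts[frag, ct] / f_count
                if p_c_given_f > 0.0:
                    # log2( P(c|f) / P(c) ), with P(c) = t_count / total
                    score += p_c_given_f * math.log2(p_c_given_f * total / t_count)
            scores[frag] = score
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # Toy corpus: fluent utterances labeled with a routing action.
    corpus = [
        ("i want to make a collect call".split(), "COLLECT"),
        ("collect call please".split(), "COLLECT"),
        ("there is a wrong charge on my bill".split(), "BILLING"),
        ("i have a question about my bill".split(), "BILLING"),
    ]
    for frag, score in salient_fragments(corpus)[:5]:
        print(" ".join(frag), round(score, 3))

On this toy corpus, fragments such as "collect call" and "my bill" score highest because each strongly predicts a single call type, while function words such as "a" or "i" score near zero, which is the intuition behind acquiring salient fragments rather than a hand-written grammar.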