CASIS: a context-aware speech interface system

  • Authors:
  • Lee Hoi Leong, Shinsuke Kobayashi, Noboru Koshizuka, Ken Sakamura

  • Affiliations:
  • Lee Hoi Leong: The University of Tokyo, Japan
  • Shinsuke Kobayashi, Noboru Koshizuka, Ken Sakamura: The University of Tokyo, Japan and YRP Ubiquitous Networking Laboratory, Tokyo, Japan

  • Venue:
  • Proceedings of the 10th international conference on Intelligent user interfaces
  • Year:
  • 2005


Abstract

In this paper, we propose a robust natural language interface called CASIS for controlling devices in an intelligent environment. CASIS is novel in that it integrates physical context, acquired from sensors embedded in the environment, with traditionally used context to reduce the system error rate and to disambiguate deictic references and elliptical inputs. The n-best output of the speech recognizer is re-ranked by a score computed with a Bayesian network that combines information from the input utterance and the context. In a prototype system that uses device states, brightness, speaker location, chair occupancy, speech direction, and action history as context, the system error rate was reduced by 41% compared to a baseline system that does not leverage context information.
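The re-ranking idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual method: the Bayesian network over utterance and sensor context is stood in for by a simple `context_score` function, and the log-linear weighting is an assumed simplification.

```python
def rerank(nbest, context_score, weight=0.5):
    """Re-rank the recognizer's n-best list using a context model.

    nbest: list of (hypothesis, asr_score) pairs from the speech recognizer.
    context_score: function mapping a hypothesis to a context score in [0, 1]
                   (a stand-in for the paper's Bayesian-network score).
    weight: interpolation weight between ASR and context scores (assumed).
    """
    scored = [
        (hyp, (1 - weight) * asr + weight * context_score(hyp))
        for hyp, asr in nbest
    ]
    # Highest combined score first.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Toy example: the ASR slightly prefers "turn on", but sensor context
# (say, the lamp is already on) strongly favors "turn off".
nbest = [("turn on the light", 0.60), ("turn off the light", 0.55)]
ctx = {"turn on the light": 0.1, "turn off the light": 0.9}
print(rerank(nbest, ctx.get)[0][0])  # → turn off the light
```

Combined scores here are 0.35 for "turn on" versus 0.725 for "turn off", so the context flips the top hypothesis, which is the effect the abstract describes.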