Toward adaptive information fusion in multimodal systems

  • Authors: Sharon Oviatt
  • Affiliation: Department of Computer Science and Engineering, Oregon Health and Science University, Beaverton, OR
  • Venue: MMUI '05: Proceedings of the 2005 NICTA-HCSNet Multimodal User Interaction Workshop - Volume 57
  • Year: 2006

Abstract

Techniques for information fusion are at the heart of multimodal system design. To develop new user-adaptive approaches to multimodal fusion, our lab has investigated the stability and basis of the major individual differences that have been documented in users' multimodal integration patterns. In this talk, I summarize the following findings:

(1) There are large individual differences in users' dominant speech-and-pen multimodal integration pattern, such that individual users can be classified as either simultaneous or sequential integrators (Oviatt, 1999; Oviatt et al., 2003).
(2) A user's dominant integration pattern can be identified almost immediately (i.e., upon first interaction with the computer), and it remains highly consistent over a session (Oviatt et al., 2003; Oviatt et al., 2005b).
(3) Users' dominant integration pattern also remains stable across the lifespan (Oviatt et al., 2003; Oviatt et al., 2005b).
(4) Users' dominant integration pattern is highly resistant to change, even when they are given strong selective reinforcement or explicit instructions to switch patterns (Oviatt et al., 2003; Oviatt et al., 2005a).
(5) When users encounter cognitive load (e.g., due to increasing task difficulty or system recognition errors), their dominant multimodal integration pattern entrenches, or becomes "hypertimed" (Oviatt et al., 2003; Oviatt et al., 2004).
(6) Users' distinctive integration patterns appear to derive from enduring differences in basic reflective-impulsive cognitive style (Oviatt et al., 2005b).
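The simultaneous/sequential distinction above rests on whether a user's speech and pen signals overlap in time within a multimodal construction. The sketch below illustrates one plausible way an adaptive fusion system could label a user's dominant pattern from timestamped input intervals; the function names, data layout, and majority-vote rule are my own assumptions for illustration, not the classification procedure used in the cited studies.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """One input signal's time span, in seconds."""
    start: float
    end: float

def classify_integration(speech: Interval, pen: Interval) -> str:
    """Label a single multimodal construction: 'simultaneous' if the
    speech and pen intervals overlap in time, else 'sequential'."""
    overlaps = speech.start < pen.end and pen.start < speech.end
    return "simultaneous" if overlaps else "sequential"

def dominant_pattern(constructions: list[tuple[Interval, Interval]]) -> str:
    """Take a user's dominant pattern to be whichever label covers the
    majority of their observed (speech, pen) constructions."""
    labels = [classify_integration(s, p) for s, p in constructions]
    simultaneous = labels.count("simultaneous")
    sequential = len(labels) - simultaneous
    return "simultaneous" if simultaneous >= sequential else "sequential"
```

Because the talk reports that the dominant pattern is identifiable almost immediately and stays consistent, a fusion engine could run a rule like this over a user's first few constructions and then fix its temporal fusion thresholds accordingly.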