Modeling multimodal integration patterns and performance in seniors: toward adaptive processing of individual differences

  • Authors:
  • Benfang Xiao, Rebecca Lunsford, Rachel Coulston, Matt Wesson, Sharon Oviatt

  • Affiliation:
  • Oregon Health and Science University, OGI School of Science & Engineering, Beaverton, OR

  • Venue:
  • Proceedings of the 5th International Conference on Multimodal Interfaces
  • Year:
  • 2003

Abstract

Multimodal interfaces are designed with a focus on flexibility, although very few are currently capable of adapting to major sources of user, task, or environmental variation. The development of adaptive multimodal processing techniques will require empirical guidance from quantitative modeling of key aspects of individual differences, especially as users engage in different types of tasks in different usage contexts. In the present study, data were collected from fifteen 66- to 86-year-old healthy seniors as they interacted with a map-based flood management system using multimodal speech and pen input. A comprehensive analysis of multimodal integration patterns revealed that seniors were classifiable as either simultaneous or sequential integrators, like children and adults. Seniors also demonstrated early predictability and a high degree of consistency in their dominant integration pattern. However, greater individual differences in multimodal integration generally were evident in this population. Perhaps surprisingly, during sequential constructions seniors' intermodal lags were no longer, in either average or maximum duration, than those of younger adults, although both of these groups had longer maximum lags than children. However, an analysis of seniors' performance did reveal lengthy latencies before initiating a task, and high rates of self-talk and task-critical errors while completing spatial tasks. All of these behaviors were magnified as the task difficulty level increased. Results of this research have implications for the design of adaptive processing strategies appropriate for seniors' applications, especially for the development of temporal thresholds used during multimodal fusion. The long-term goal of this research is the design of high-performance multimodal systems that adapt to a full spectrum of diverse users, supporting tailored and robust future systems.