A framework for designing intelligent task-oriented augmented reality user interfaces

  • Authors:
  • Leonardo Bonanni; Chia-Hsun Lee; Ted Selker

  • Affiliations:
  • MIT Media Laboratory, Cambridge, MA (all authors)

  • Venue:
  • Proceedings of the 10th International Conference on Intelligent User Interfaces
  • Year:
  • 2005

Abstract

A task-oriented space can benefit from an augmented reality interface that layers the existing tools and surfaces with useful information to make cooking easier, safer, and more efficient. To serve experienced users as well as novices, augmented reality interfaces need to adapt modalities to the user's expertise and allow for multiple ways to perform tasks. We present a framework for designing an intelligent user interface that informs and choreographs multiple tasks in a single space according to a model of tasks and users. A residential kitchen has been outfitted with systems to gather data from tools and surfaces and project multi-modal interfaces back onto the tools and surfaces themselves. Based on user evaluations of this augmented reality kitchen, we propose a system that tailors information modalities based on the spatial and temporal qualities of the task, and the expertise, location, and progress of the user. The intelligent augmented reality user interface choreographs multiple tasks in the same space at the same time.
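The abstract describes selecting an output modality from a model of the task (spatial and temporal qualities) and the user (expertise, location, progress). The paper does not give an algorithm, but the idea can be sketched as a simple rule-based selector. The sketch below is illustrative only: the `Task` and `User` fields, the `Modality` options, and the thresholds are assumptions chosen to mirror the abstract, not the authors' implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    TEXT = "projected step-by-step text"
    GRAPHIC = "graphic overlay projected onto the work surface"
    AUDIO = "audio prompt"
    AMBIENT = "low-attention ambient cue"

@dataclass
class Task:
    name: str
    location: str            # tool or surface the task occupies
    spatially_precise: bool  # e.g. chopping vs. waiting for water to boil
    time_critical: bool      # e.g. a pot about to boil over

@dataclass
class User:
    expertise: float  # 0.0 = novice .. 1.0 = expert (assumed scale)
    location: str     # where the user currently stands
    progress: float   # 0.0 .. 1.0 through the current task

def choose_modality(task: Task, user: User) -> Modality:
    """Pick an output modality from the task and user models."""
    # Time-critical events must reach the user wherever they are,
    # so fall back to audio when they are away from the task site.
    if task.time_critical and user.location != task.location:
        return Modality.AUDIO
    # Novices benefit from explicit projected instructions.
    if user.expertise < 0.3:
        return Modality.TEXT
    # Spatially precise tasks get graphics layered on the surface itself.
    if task.spatially_precise:
        return Modality.GRAPHIC
    # Experts already underway need only a peripheral reminder.
    return Modality.AMBIENT

if __name__ == "__main__":
    boil = Task("boil water", location="stove",
                spatially_precise=False, time_critical=True)
    cook = User(expertise=0.8, location="counter", progress=0.5)
    print(choose_modality(boil, cook))  # -> Modality.AUDIO
```

Choreographing multiple simultaneous tasks, as the abstract proposes, would then amount to running a selector like this per task and resolving conflicts when two tasks claim the same surface or the user's attention.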