Multimodal interaction: a new focal area for AI

  • Authors: Philip R. Cohen
  • Affiliations: Center for Human-Computer Communication, Oregon Graduate Institute of Science and Technology
  • Venue: IJCAI'01 Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2
  • Year: 2001

Abstract

AI research has often been driven by popular visions - 2001's HAL, Asimov's robots, Star Trek - and by critical application areas - medical expert systems, spoken dialogue systems, etc. These visions and applications serve to inspire and guide researchers, posing challenges, illustrating technical weaknesses, and generally channeling creative energy. Without doubt, the widely held vision of the autonomous robot has exerted a substantial integrative force, such that numerous disciplines, ranging from mechanical engineering to cognitive science, can see how their intellectual endeavors contribute to the overall enterprise. In this brief position paper, and in the accompanying talk, I propose that the next generation of intelligent multimodal user interfaces can offer a similar intellectual focus for AI researchers. After providing a brief overview of our work in this area and two examples, I suggest the potential impact that such interfaces could have in the relatively near term.