Multi-Modal Human Interactions with an Intelligent Interface Utilizing Images, Sounds, and Force Feedback

  • Authors:
  • Fei He; Arvin Agah

  • Affiliations:
  • Ernst & Young LLP, Kansas City, MO, U.S.A.; Department of Electrical Engineering and Computer Science, The University of Kansas, Lawrence, KS 66045, U.S.A.; e-mail: agah@ukans.edu

  • Venue:
  • Journal of Intelligent and Robotic Systems
  • Year:
  • 2001

Abstract

One goal of research in the area of human–machine interaction is to improve the ways a human user interacts with a computer through a multimedia interface. This interaction comprises not only text, graphical animation, stereo sounds, and live video images, but also force and haptic feedback, which can provide a more “real” feeling to the user. The force feedback joystick, a human interface device, is an input–output device: it not only tracks the user's physical manipulation as input, but also provides realistic physical sensations of force coordinated with the system's output. As part of our research, we have developed a multimedia computer game that integrates images, sounds, and force feedback. We focused on how to combine these media so that the user can feel compliance, damping, and vibration effects through the force feedback joystick. We conducted a series of human-subject experiments incorporating different combinations of media, including a comparative study of the performance of 60 human users, aiming to answer the question: What are the effects of force feedback (and associated time delays) when used in combination with visual and auditory information as part of a multi-modal interface? It is hoped that these results can be utilized in the design of enhanced multimedia systems that incorporate force feedback.
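
The compliance, damping, and vibration effects mentioned in the abstract are commonly rendered on a force feedback joystick as a spring term, a viscous term, and a periodic term summed into one output force. The sketch below is a minimal Python illustration of that standard formulation, computed once per control cycle; the function name, parameter names, and numeric values are illustrative assumptions, not the authors' implementation or any particular joystick API.

  import math

  def joystick_force(x, v, t,
                     k=200.0,    # spring stiffness (compliance effect), N/m  -- assumed value
                     b=5.0,      # damping coefficient, N*s/m                 -- assumed value
                     amp=1.5,    # vibration amplitude, N                     -- assumed value
                     freq=30.0): # vibration frequency, Hz                    -- assumed value
      """Return a 1-D feedback force for joystick displacement x (m),
      velocity v (m/s), and time t (s), combining the three classic
      haptic effects: compliance, damping, and vibration."""
      spring = -k * x                                      # compliance: push back toward center
      damper = -b * v                                      # damping: resist motion
      vibration = amp * math.sin(2 * math.pi * freq * t)   # vibration: periodic buzz
      return spring + damper + vibration

A host application would typically evaluate such a function at a fixed rate (often hundreds of times per second) and send the result to the device driver; the time delays studied in the paper would then correspond to lag between the visual/auditory events and this force command.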