Navigating a 3D virtual environment of learning objects by hand gestures

  • Authors:
  • Qing Chen;A.S.M. Mahfujur Rahman;Xiaojun Shen;Abdulmotaleb El Saddik;Nicolas D. Georganas

  • Affiliations:
  • DiscoverLab, MCRLab, School of Information Technology and Engineering, University of Ottawa, 800 King Edward, Ottawa, Ontario, K1N 6N5, Canada (all authors)

  • Venue:
  • International Journal of Advanced Media and Communication
  • Year:
  • 2007


Abstract

This paper presents a gesture-based Human-Computer Interface (HCI) for navigating a learning object repository mapped into a 3D virtual environment. With this interface, the user accesses learning objects by steering an avatar car with hand gestures. Haar-like features and the AdaBoost learning algorithm are used in our gesture recognition to achieve real-time performance and high recognition accuracy. The learning objects are represented by different traffic signs, which are grouped along the virtual highways. Compared with traditional HCI devices such as the keyboard, hand gestures offer users a more intuitive and engaging way to communicate with the virtual environment.
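The abstract names Haar-like features and AdaBoost but does not detail the implementation. As a hedged illustration only (function names and the synthetic image below are ours, not from the paper), detectors of this family evaluate rectangular Haar-like features in constant time using an integral image, which is what makes real-time recognition feasible:

```python
import numpy as np

def integral_image(img):
    # ii[y, x] holds the sum of all pixels in img[:y, :x];
    # padded with a zero row/column so boundary lookups need no special cases.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    # Sum of pixels in img[y:y+h, x:x+w] via four table lookups -- O(1)
    # regardless of rectangle size.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    # Two-rectangle Haar-like feature: left half minus right half.
    # A large magnitude indicates a vertical intensity edge, the kind of
    # low-level cue AdaBoost combines into a strong classifier.
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

# Synthetic 8x8 image: bright left half, dark right half (a vertical edge).
img = np.zeros((8, 8), dtype=np.int64)
img[:, :4] = 10
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 8, 8))   # total intensity of the image
print(haar_two_rect(ii, 0, 0, 8, 8))  # strong response on the edge
```

AdaBoost then selects the most discriminative of these features and weights them into a cascade, so that most image windows are rejected after evaluating only a handful of rectangle sums.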