A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing (STOC '94), May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT '95), March 13–15, 1995
Readings in information visualization: using vision to think
Reusable learning objects: a survey of LOM-based repositories
Proceedings of the tenth ACM international conference on Multimedia
Invited Speech: "Gestural Interface to a Visual Computing Environment for Molecular Biologists"
FG '96 Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition (FG '96)
Tracking Articulated Hand Motion with Eigen Dynamics Analysis
ICCV '03 Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2
Using Information Visualization for Accessing Learning Object Repositories
IV '04 Proceedings of the Information Visualisation, Eighth International Conference
Real-Time Gesture Recognition by Learning and Selective Control of Visual Interest Points
IEEE Transactions on Pattern Analysis and Machine Intelligence
Visualizing Web Search Results in 3D
Computer
3-D hand posture recognition by training contour variation
FGR '04 Proceedings of the Sixth IEEE international conference on Automatic face and gesture recognition
Authoring edutainment content through video annotations and 3D model augmentation
VECIMS'09 Proceedings of the 2009 IEEE international conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems
Mobile based multimodal retrieval and navigation of learning objects using a 3D car metaphor
Proceedings of the Third International Conference on Internet Multimedia Computing and Service
This paper presents a gesture-based Human-Computer Interface (HCI) for navigating a learning object repository mapped into a 3D virtual environment. With this interface, the user accesses learning objects by steering an avatar car with hand gestures. Gesture recognition combines Haar-like features with the AdaBoost learning algorithm to achieve real-time performance and high recognition accuracy. The learning objects are represented by different traffic signs, which are grouped along the virtual highways. Compared with traditional HCI devices such as keyboards, communicating with the virtual environment through hand gestures is more intuitive and engaging for users.
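The abstract's recognition pipeline pairs Haar-like feature responses with AdaBoost. As an illustration of the boosting step only (a minimal sketch, not the authors' implementation, and with the Haar-like feature extraction replaced by a plain feature matrix), AdaBoost over threshold decision stumps can be written as:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with threshold stumps.
    X: (n, d) matrix of precomputed feature responses
       (in the paper's setting these would be Haar-like features);
    y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                  # uniform sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # exhaustively pick the stump (feature, threshold, polarity)
        # with the lowest weighted error
        for j in range(d):
            for t in np.unique(X[:, j]):
                for s in (+1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s, pred)
        err, j, t, s, pred = best
        err = max(err, 1e-10)                # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this weak learner
        w *= np.exp(-alpha * y * pred)       # upweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def predict(stumps, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)
```

For example, on a separable toy set such as `X = [[0], [1], [2], [3]]` with `y = [1, 1, -1, -1]`, the first round already finds a perfect stump (threshold at 1), and `predict` recovers the labels. Real-time detectors of this family typically arrange many such boosted stages into a cascade so that easy negatives are rejected cheaply.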