Educational testing and learning have evolved from standard true/false, fill-in-the-blank, and multiple-choice items on paper to visually enriched formats that use interactive multimedia content on digital displays. However, traditional educational application interfaces are primarily mouse-driven, which prevents multiple users from working simultaneously. Although touch-based displays have emerged and inspired new developments, they are mainly used for simple tasks. In this paper we show how multi-touch technology can be extended to collaborative learning and testing at a larger scale, using an existing educational implementation for illustration. We propose a Human-Intention-Machine-Interpretation (HIMI) model, which applies a graph-based approach to recognize hand gestures and interpret user intentions. Our goal is not to build a new multi-touch system but to use existing multi-touch technology to enhance learning performance. The HIMI model not only facilitates natural interaction through hand movements in simple tasks, but also supports complex collaborative operations. Our contribution lies in embedding multi-touch technology in multimedia education, providing a multi-user learning and testing environment that would not have been possible with traditional input devices. We formalize a conceptual model that uniquely interprets user intentions via touch states, state transitions, and transition associations. We also propose a set of hand gestures for working with multimedia educational items, and conduct user evaluations to demonstrate the feasibility of the proposed gestures.
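The abstract describes interpreting user intentions via touch states, state transitions, and transition associations. A minimal sketch of that idea is a transition graph whose nodes are touch states and whose edges, labeled with touch events, are associated with intentions. All state, event, and intention names below are illustrative assumptions, not taken from the paper:

```python
# Sketch of a graph-based gesture interpreter in the spirit of the HIMI model.
# Nodes are touch states; edges are (state, event) transitions; some
# transitions are associated with an interpreted user intention.

# Transition graph: (current_state, event) -> next_state  (illustrative)
TRANSITIONS = {
    ("idle", "touch_down"): "one_finger",
    ("one_finger", "touch_down"): "two_fingers",
    ("one_finger", "move"): "drag",
    ("two_fingers", "move"): "pinch",
    ("drag", "touch_up"): "idle",
    ("pinch", "touch_up"): "one_finger",
}

# Transition associations: which transitions carry an intention (illustrative)
INTENTIONS = {
    ("one_finger", "move"): "move item",
    ("two_fingers", "move"): "zoom item",
}

def interpret(events, state="idle"):
    """Walk the transition graph and collect the interpreted intentions."""
    intentions = []
    for event in events:
        key = (state, event)
        if key in INTENTIONS:
            intentions.append(INTENTIONS[key])
        state = TRANSITIONS.get(key, state)  # ignore undefined transitions
    return state, intentions
```

For example, the event sequence `touch_down, move, touch_up` traverses idle → one_finger → drag → idle and is interpreted as a single "move item" intention. Because each user's touches can drive an independent walk of the same graph, this style of interpretation extends naturally to the multi-user setting the paper targets.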