To customize multi-touch gestures for different applications and to facilitate their recognition, an application-oriented, shape-feature-based method for describing and recognizing multi-touch gestures is proposed. In this method, multi-touch gestures are classified into two categories, atomic gestures and combined gestures, where a combined gesture is a composition of atomic gestures under temporal, spatial, and logical relationships. For description, users' motions are mapped to gestures, and the semantic constraints of an application are extracted to establish the accessible relationships between gestures and entity states. For recognition, the trajectories of a gesture are projected onto an image; the shape feature of each trajectory, together with the relationships among trajectories, is extracted and matched against gesture templates. Experiments show that the method is independent of the multi-touch platform, robust to differences in how users perform gestures, and scalable and reusable across users and applications.
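The recognition step described above, matching a gesture trajectory's shape against stored templates, can be sketched in a minimal form. The code below is an illustrative assumption, not the paper's actual algorithm: it uses simple resampling, centroid/scale normalization, and mean point-to-point distance as the shape-matching criterion, and the gesture names (`swipe_right`, `swipe_down`) are hypothetical examples.

```python
import math

def resample(points, n=32):
    """Resample a trajectory to n evenly spaced points along its path."""
    dists = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(dists)
    if total == 0:
        return [points[0]] * n
    targets = [i * total / (n - 1) for i in range(n)]
    out, acc, seg = [], 0.0, 0
    for t in targets:
        # advance to the segment containing arc length t
        while seg < len(dists) - 1 and acc + dists[seg] < t:
            acc += dists[seg]
            seg += 1
        d = dists[seg]
        frac = min(max((t - acc) / d, 0.0), 1.0) if d > 0 else 0.0
        a, b = points[seg], points[seg + 1]
        out.append((a[0] + frac * (b[0] - a[0]),
                    a[1] + frac * (b[1] - a[1])))
    return out

def normalize(points):
    """Translate to the centroid and scale to a unit box, so matching
    is invariant to where and how large the gesture was drawn."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    scale = max(max(abs(x) for x, _ in pts),
                max(abs(y) for _, y in pts)) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def match(trajectory, templates):
    """Return the name of the template closest in mean point distance."""
    probe = normalize(resample(trajectory))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        t = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(probe, t)) / len(probe)
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical single-trajectory gesture templates.
templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0)],
    "swipe_down":  [(0, 0), (0, 1), (0, 2)],
}
print(match([(0, 0), (5, 0.2), (10, 0.1)], templates))  # prints "swipe_right"
```

A full implementation following the paper would additionally encode the relationships among the multiple trajectories of a multi-finger gesture and the temporal/spatial/logical composition rules for combined gestures; this sketch covers only single-trajectory shape matching.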