Media platforms and devices that accept input from a user's finger or hand touch, such as Microsoft Surface and DiamondTouch, as well as numerous experimental systems in research labs, are becoming ubiquitous. Currently, the definition of touch styles is application-specific, and each device or application has its own set of touch types that it recognizes as input. In this paper we attempt a comprehensive understanding of all possible touch types for touch-sensitive devices by constructing a design model for touch interaction and clarifying its characteristics. The model comprises three structural levels (the action level, the motivation level, and the computing level) and the mapping relationships between them. At the action level, we construct a unified definition and description of all possible touch gestures: we first analyze how a finger or hand touching a surface can cause a particular event that is recognized as a legitimate action, and then use this analysis to define all possible touch gestures, resulting in a touch gesture taxonomy. At the motivation level, we analyze and describe the direct interaction motivations of applications. We then define general principles for mapping between the action and motivation levels. At the computing level, we realize the motivations and the responses to gestural input in program code. Finally, we illustrate how the model can be interpreted in the context of a photo management application on DiamondTouch and iPod Touch. The model allows touch types to be reused across platforms and applications in a more systematic and generic manner than touch interaction has been designed so far.
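The three-level structure described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's actual implementation: the gesture names, motivation names, and handler functions are all illustrative assumptions chosen to show how an action-to-motivation mapping lets the computing-level handlers stay platform-independent.

```python
# Illustrative sketch of the three-level model (all names are assumptions,
# not the paper's API): action level -> motivation level -> computing level.
from typing import Callable, Dict

# Action level: a (heavily simplified) touch gesture taxonomy.
GESTURES = {"tap", "drag", "pinch", "rotate"}

# Mapping principles: each platform binds its recognizable gestures
# to application-level motivations.
ACTION_TO_MOTIVATION: Dict[str, str] = {
    "tap": "select",
    "drag": "move",
    "pinch": "zoom",
    "rotate": "turn",
}

# Computing level: handlers that realize each motivation in code,
# here for a hypothetical photo management application.
def make_photo_handlers() -> Dict[str, Callable[[str], str]]:
    return {
        "select": lambda photo: f"selected {photo}",
        "move":   lambda photo: f"moved {photo}",
        "zoom":   lambda photo: f"zoomed {photo}",
        "turn":   lambda photo: f"rotated {photo}",
    }

def dispatch(gesture: str, target: str) -> str:
    """Resolve a recognized gesture through the mapping to its handler."""
    motivation = ACTION_TO_MOTIVATION[gesture]
    return make_photo_handlers()[motivation](target)
```

Under this sketch, `dispatch("pinch", "photo.jpg")` yields `"zoomed photo.jpg"`. Only the `ACTION_TO_MOTIVATION` table is platform-specific; a device with a different gesture set (e.g., single-touch only) would supply a different mapping while reusing the same motivations and handlers, which is the kind of reuse across platforms the model aims to enable.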