IEEE Computer Graphics and Applications
VIDEOPLACE—an artificial reality. CHI '85 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
DiamondTouch: a multi-user touch technology. Proceedings of the 14th annual ACM symposium on User interface software and technology.
Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays. Proceedings of the 16th annual ACM symposium on User interface software and technology.
FingARtips: gesture based direct manipulation in Augmented Reality. Proceedings of the 2nd international conference on Computer graphics and interactive techniques in Australasia and South East Asia.
Visual tracking of bare fingers for interactive surfaces. Proceedings of the 17th annual ACM symposium on User interface software and technology.
Beyond "social protocols": multi-user coordination policies for co-located groupware. CSCW '04 Proceedings of the 2004 ACM conference on Computer supported cooperative work.
Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. Proceedings of the 20th annual ACM symposium on User interface software and technology.
Simultaneous 4 gestures 6 DOF real-time two-hand tracking without any markers. Proceedings of the 2007 ACM symposium on Virtual reality software and technology.
User-defined gestures for surface computing. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Real-time hand-tracking with a color glove. ACM SIGGRAPH 2009 papers.
Enhancing input on and above the interactive surface with muscle sensing. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces.
Hand distinction for multi-touch tabletop interaction. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces.
ShadowGuides: visualizations for in-situ learning of multi-touch and whole-hand gestures. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Left and right hand distinction for multi-touch displays. SG'11 Proceedings of the 11th international conference on Smart graphics.
Usage and recognition of finger orientation for multi-touch tabletop interaction. INTERACT'11 Proceedings of the 13th IFIP TC 13 international conference on Human-computer interaction - Volume Part III.
Designing user-, hand-, and handpart-aware tabletop interactions with the TouchID toolkit. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces.
Augmenting touch interaction through acoustic sensing. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces.
See me, see you: a lightweight method for discriminating user touches on tabletop displays. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Kin'touch: understanding how visually impaired people explore tactile maps. CHI '12 Extended Abstracts on Human Factors in Computing Systems.
Magic finger: always-available input through finger instrumentation. Proceedings of the 25th annual ACM symposium on User interface software and technology.
MTi: A method for user identification for multitouch displays. International Journal of Human-Computer Studies.
Fiberio: a touchscreen that senses fingerprints. Proceedings of the 26th annual ACM symposium on User interface software and technology.
Collaborative smart virtual keyboard with word predicting function. HCI'13 Proceedings of the 15th international conference on Human-Computer Interaction: interaction modalities and techniques - Volume Part IV.
An approach for designing and evaluating a plug-in vision-based tabletop touch identification system. Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration.
Left and right hand distinction for multi-touch tabletop interactions. Proceedings of the 19th international conference on Intelligent User Interfaces.
Using multiple sensors for reliable markerless identification through supervised learning. Machine Vision and Applications.
The hand has incredible potential as an expressive input device. Yet most touch technologies recognize only a limited set of hand parts (if at all), usually by inferring the part from the shape of the touch. We introduce the fiduciary-tagged glove as a reliable, inexpensive, and highly expressive way to gather input that (a) identifies many parts of the hand (fingertips, knuckles, palms, sides, and the back of the hand), and (b) discriminates between the hands of one person or of multiple people. Examples illustrate the interaction power gained by being able to identify and exploit these various hand parts.
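The core idea — each tagged glove region resolves a touch to a specific user, hand, and hand part — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the marker IDs, user names, and registry structure below are all invented for the example, and real systems would obtain marker IDs from a fiducial-tracking pipeline.

```python
# Minimal sketch (hypothetical IDs): each fiducial marker sewn onto the
# glove maps to a (user, hand, part) triple, so a touch is identified by
# looking up the marker detected at the contact point.

from typing import NamedTuple, Optional

class HandPart(NamedTuple):
    user: str
    hand: str   # "left" or "right"
    part: str   # e.g. "index fingertip", "knuckle", "palm", "side", "back"

# Hypothetical marker registry: one unique tag per glove region,
# across multiple users' gloves.
MARKER_REGISTRY = {
    101: HandPart("alice", "right", "index fingertip"),
    102: HandPart("alice", "right", "palm"),
    201: HandPart("bob", "left", "knuckle"),
}

def identify_touch(marker_id: int) -> Optional[HandPart]:
    """Resolve a detected fiducial marker to user, hand, and hand part."""
    return MARKER_REGISTRY.get(marker_id)
```

Because unknown markers resolve to `None`, an application can fall back to ordinary anonymous-touch handling for untagged contacts while still routing tagged touches to part-specific behavior (e.g. palm erases, fingertip draws).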