We propose the Wearable Virtual Tablet (WVT), which lets a user draw a locus with a fingertip on a common object with a planar surface (e.g., a notebook or a magazine). Our previous WVT [1], however, could not handle planar surfaces with complicated texture patterns: since the WVT employs an active-infrared camera and the reflected infrared rays vary with the patterns on the surface, it is difficult to estimate the motions of the fingertip and the planar surface from the observed infrared image. In this paper, we propose a method to detect and track these motions without interference from colored patterns on the surface. (1) To find the region of the planar object in the observed image, the four edge lines that bound a rectangular object are easily extracted by exploiting the properties of the active-infrared camera. (2) To precisely determine the position of the fingertip, we use a simple finger model that matches the finger edge regardless of its posture. (3) The system distinguishes whether or not the fingertip touches the planar object by analyzing image intensities in the edge region of the fingertip.
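Step (3), the touch test, can be sketched as a local intensity check near the fingertip edge. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm: the window size, the threshold, the synthetic frames, and the function names (`touch_score`, `is_touching`) are all assumptions introduced here for clarity.

```python
import numpy as np

def touch_score(ir_image, tip_xy, win=7):
    """Mean intensity in a (2*win+1)-pixel square window at the fingertip edge.

    Assumption: in an active-infrared image, a fingertip pressed onto the
    surface leaves no dark gap between finger and surface, whereas a hovering
    fingertip casts a darker boundary region below the tip, which lowers the
    local mean intensity.
    """
    x, y = tip_xy
    patch = ir_image[max(0, y - win): y + win + 1,
                     max(0, x - win): x + win + 1]
    return float(patch.mean())

def is_touching(ir_image, tip_xy, threshold=120.0, win=7):
    """Classify touch vs. hover by thresholding the local mean intensity."""
    return touch_score(ir_image, tip_xy, win) >= threshold

# Synthetic 64x64 active-IR frames (values are illustrative, not calibrated):
plane = np.full((64, 64), 200.0)    # bright planar surface
touching = plane.copy()
touching[20:40, 28:36] = 180.0      # finger blob, no gap at the tip
hovering = touching.copy()
hovering[36:46, 24:40] = 30.0       # dark gap below a hovering fingertip
tip = (31, 41)                      # (x, y) just below the fingertip edge
```

On these toy frames, `is_touching(touching, tip)` is true and `is_touching(hovering, tip)` is false, because the dark gap pulls the local mean well below the threshold; a real system would calibrate the threshold against the observed reflection of the surface.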