ISWC '00: Proceedings of the 4th IEEE International Symposium on Wearable Computers.
Hand tracking for low powered mobile AR user interfaces. AUIC '05: Proceedings of the Sixth Australasian Conference on User Interface, Volume 40.
WUW - wear Ur world: a wearable gestural interface. CHI '09 Extended Abstracts on Human Factors in Computing Systems.
Imaginary interfaces: spatial interaction with empty hands and without visual feedback. UIST '10: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology.
OmniTouch: wearable multitouch interaction everywhere. Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology.
ShoeSense: a new perspective on gestural interaction and wearable applications. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
Around device interaction for multiscale navigation. MobileHCI '12: Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services.
Extending a mobile device's interaction space through body-centric interaction. MobileHCI '12: Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services.
Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor. Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology.
Touchscreen interfaces for small display devices have several limitations: the act of touching the screen occludes the display, interface elements such as keyboards consume scarce display real estate, and even simple tasks like document navigation, which users perform effortlessly with a mouse and keyboard, require repeated touch actions such as pinch-and-zoom. More recently, smart glasses with limited or no touch input have begun to emerge commercially; however, the primary input to these systems has been voice. In this paper, we explore the space around the device as a means of touchless gestural input to devices with small or no displays. Capturing gestural input in the surrounding volume requires sensing the human hand. To achieve this, we built Mime [3], a compact, low-power 3D sensor for short-range gestural control of small display devices. The sensor is based on a novel signal processing pipeline and is built from standard off-the-shelf components. Using Mime, we demonstrated a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight. In my thesis, I will continue to extend the sensor's capabilities to support new interaction styles.
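To make the idea of around-device gestural input concrete, the following is a minimal illustrative sketch, not Mime's actual pipeline: it classifies a short 3D hand trajectory (as a sensor like Mime might report) into a coarse gesture event such as a swipe or a push toward the device. All names and thresholds here are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class HandSample:
    """One 3D hand position sample, in metres, in a device-centred frame."""
    x: float  # lateral offset from the device
    y: float  # vertical offset from the device
    z: float  # distance from the device

def classify_gesture(trajectory, swipe_thresh=0.10, push_thresh=0.05):
    """Classify a list of HandSample points into a coarse gesture
    ('swipe_left', 'swipe_right', 'push', or 'none') using only the
    net displacement between the first and last samples.

    Thresholds are illustrative, not values from the Mime system."""
    if len(trajectory) < 2:
        return "none"
    dx = trajectory[-1].x - trajectory[0].x  # net lateral motion
    dz = trajectory[-1].z - trajectory[0].z  # net motion toward/away
    # A dominant decrease in distance reads as a push toward the device.
    if dz < -push_thresh and abs(dz) > abs(dx):
        return "push"
    if dx > swipe_thresh:
        return "swipe_right"
    if dx < -swipe_thresh:
        return "swipe_left"
    return "none"

# Example: the hand moves 15 cm to the right at a constant 30 cm depth.
path = [HandSample(0.0, 0.0, 0.30), HandSample(0.15, 0.0, 0.30)]
print(classify_gesture(path))  # swipe_right
```

Net-displacement classification like this is deliberately simple; a real around-device pipeline would also need temporal segmentation and noise filtering before gestures can be recognized reliably.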