In this paper, we present a novel gesture-based interaction method for handheld Augmented Reality (AR), implemented on a tablet with an attached RGB-Depth camera. Compared with conventional device-centric interaction methods such as keypad, stylus, or touchscreen input, natural gesture-based interfaces offer a more intuitive experience for AR applications. Combined with depth information, gesture interfaces can extend handheld AR interaction into full 3D space. In our system we retrieve the 3D hand skeleton from color and depth frames and map the results to corresponding manipulations of virtual objects in the AR scene. Our method allows users to control virtual objects in 3D space with their bare hands, performing operations such as translation, rotation, and zooming.
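As a rough illustration of the mapping step described above, the following sketch shows one plausible way to turn tracked 3D hand-skeleton joints into object manipulations. This is not the paper's actual implementation: the joint names, the pinch threshold, and the pinch-to-translate / spread-to-zoom scheme are all assumptions for the sake of the example, and rotation handling is omitted.

```python
import math

# Assumed pinch threshold in metres (hypothetical value, not from the paper).
PINCH_THRESHOLD = 0.03

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

class VirtualObject:
    """Minimal stand-in for a virtual object in the AR scene."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.scale = 1.0

def update(obj, thumb_tip, index_tip, prev_palm, palm):
    """Map one frame of skeleton joints to an object manipulation.

    While the thumb and index fingertips are pinched together, the object
    follows the palm's frame-to-frame displacement (translation); otherwise
    the thumb-index spread drives a zoom factor. Returns True if pinching.
    """
    pinching = distance(thumb_tip, index_tip) < PINCH_THRESHOLD
    if pinching:
        # Translation: add the palm displacement to the object position.
        delta = tuple(c - p for c, p in zip(palm, prev_palm))
        obj.position = tuple(o + d for o, d in zip(obj.position, delta))
    else:
        # Zooming: scale proportional to the thumb-index spread.
        obj.scale = max(0.1, distance(thumb_tip, index_tip) / PINCH_THRESHOLD)
    return pinching
```

In a real pipeline these joint positions would come from the hand-skeleton tracker running on the color and depth frames each frame, with the resulting transform applied to the rendered virtual object.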