A large community of users creates and shares how-to videos online. Many of these videos show demonstrations of physical tasks, such as fixing a machine, assembling furniture, or demonstrating dance steps. It is often difficult for the authors of these videos to control camera focus, view, and position while performing their tasks. To help authors produce videos, we introduce Kinectograph, a recording device that automatically pans and tilts to follow specific body parts, e.g., hands, of a user in a video. It utilizes a Kinect depth sensor to track skeletal data and adjusts the camera angle via a 2D pan-tilt gimbal mount. Users control and configure Kinectograph through a tablet application with real-time video preview. An informal user study suggests that users prefer to record and share videos with Kinectograph, as it enables authors to focus on performing their demonstration tasks.
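The abstract describes steering the camera so that a tracked body part stays centered in frame. As a rough illustration of the geometry involved, the sketch below converts a camera-relative 3D joint position (as a Kinect skeleton tracker might report) into pan and tilt angles for a 2D gimbal. This is a minimal sketch under assumed conventions (x right, y up, z forward, a symmetric servo range), not the paper's actual control code; the function name and parameters are hypothetical.

```python
import math

def pan_tilt_for_joint(x, y, z, max_deg=90.0):
    """Return (pan, tilt) angles in degrees that would aim the camera
    at a joint located at camera-relative position (x, y, z) in meters,
    with x to the right, y up, and z the distance in front of the lens.
    Angles are clamped to an assumed symmetric servo range of +/-max_deg."""
    pan = math.degrees(math.atan2(x, z))    # rotate left/right toward the joint
    tilt = math.degrees(math.atan2(y, z))   # rotate up/down toward the joint
    clamp = lambda a: max(-max_deg, min(max_deg, a))
    return clamp(pan), clamp(tilt)

# Example: a hand 0.5 m to the right of the lens axis and 2 m away
# requires about 14 degrees of pan and no tilt.
pan, tilt = pan_tilt_for_joint(0.5, 0.0, 2.0)
```

A real controller would additionally smooth these targets over time (e.g., with a dead zone or low-pass filter) so the camera does not jitter with every small hand movement.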