We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6-DoF virtual camera and lighting controls, which the puppeteer can adjust before, during, or after a performance. Finally, our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.
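The tracking step described above, aligning a known rigid 3D model to the live Kinect point cloud, is the classic rigid-registration problem. The abstract does not spell out the authors' exact method, so the following is only a hedged illustration: a minimal point-to-point ICP in NumPy (all function names here are illustrative, not the system's API), which alternates nearest-neighbour correspondence with a closed-form Kabsch solve for the best rotation and translation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch: least-squares R, t such that R @ src_i + t approximates dst_i."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and Kabsch."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours; fine for small clouds.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Recover a known small rigid motion of a synthetic "puppet" point cloud.
rng = np.random.default_rng(0)
model = rng.standard_normal((100, 3))
a = 0.05                                         # small rotation about z, in radians
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.05, -0.02, 0.03])
scene = model @ R_true.T + t_true
R_est, t_est = icp(model, scene)
```

In a full pipeline like the one the abstract sketches, image features would first identify *which* puppet is in view and give a coarse pose, with ICP-style refinement against the depth data producing the final 6-DoF pose each frame; real systems also use a k-d tree rather than brute-force matching.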