Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach is able to robustly recognize learned gestures and to distinguish these from other movements.
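The recognition idea described above — learning a low-dimensional model per gesture from training examples, then deciding whether new sensor data lies close to one of the learned models — can be sketched as follows. This is a minimal illustration only: it uses a linear PCA subspace per gesture as a stand-in for the paper's manifold models, synthetic one-axis acceleration traces instead of real inertial-sensor streams, and a simple reconstruction-error threshold for rejecting non-gesture movements. All function names and data are hypothetical.

```python
import numpy as np

def fit_subspace(X, k=2):
    """Fit a k-dimensional linear subspace (PCA via SVD) to the
    training examples X (one flattened gesture trace per row)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]  # class mean and top-k principal directions

def recon_error(x, model):
    """Distance from trace x to the learned subspace (residual norm)."""
    mu, V = model
    d = x - mu
    return np.linalg.norm(d - V.T @ (V @ d))

def classify(x, models, reject_thresh=None):
    """Assign x to the gesture whose model reconstructs it best;
    optionally reject movements far from every learned model."""
    errs = {name: recon_error(x, m) for name, m in models.items()}
    best = min(errs, key=errs.get)
    if reject_thresh is not None and errs[best] > reject_thresh:
        return "unknown"
    return best

# --- synthetic demo: two gestures as fixed-length acceleration traces ---
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)

def make(gesture, n):
    """n noisy examples of a hypothetical gesture signal."""
    freq = 2 if gesture == "circle" else 5
    base = np.sin(2 * np.pi * freq * t)
    return np.stack([base + 0.05 * rng.standard_normal(50) for _ in range(n)])

models = {g: fit_subspace(make(g, 20)) for g in ("circle", "swipe")}
print(classify(make("circle", 1)[0], models))
```

A real system would additionally segment the continuous sensor stream, fuse several body-worn sensors, and use a nonlinear embedding (the reference list includes Laplacian Eigenmaps, for instance) rather than a single linear subspace per gesture, but the classify-by-distance-to-model structure stays the same.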