A sterile, intuitive, context-integrated system for navigating MRIs through freehand gestures during a neurobiopsy procedure is presented. Contextual cues are used to infer the user's intent, improving both continuous gesture recognition and the discovery and exploration of MRIs. A key challenge for gesture interaction in the operating room is discriminating between intentional and unintentional gestures, a problem also referred to as spotting. This paper presents a novel method for training gesture spotting networks. The continuous gesture recognition system detected gestures with a 92.26% success rate and a reliability of 89.97%, and experimental results show that context integration yielded significant improvements in task completion time.
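The spotting idea described above — accepting a gesture only when it both outscores a non-gesture model and occurs in a context where interaction is plausible — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the per-frame scores, the threshold (non-gesture) model, the `context_active` gate, and the `min_margin` parameter are all assumptions introduced here for illustration.

```python
import numpy as np

def spot_gestures(scores, threshold_scores, context_active, min_margin=0.0):
    """Toy gesture-spotting sketch (hypothetical, not the paper's method).

    scores           : (T, G) array of per-frame scores for G gesture models
                       (e.g. HMM log-likelihoods).
    threshold_scores : (T,) per-frame scores of a non-gesture "threshold" model.
    context_active   : (T,) booleans -- contextual cue that the user currently
                       intends to interact with the system.
    Returns a list of (frame_index, gesture_id) detections.
    """
    detections = []
    for t in range(scores.shape[0]):
        if not context_active[t]:
            continue  # context gate: skip frames where interaction is not intended
        g = int(np.argmax(scores[t]))  # best-matching gesture model at frame t
        # spotting rule: the winning gesture must beat the non-gesture model
        if scores[t, g] - threshold_scores[t] > min_margin:
            detections.append((t, g))
    return detections

# Example: 3 frames, 2 gesture models; the last frame is gated out by context.
scores = np.array([[0.1, 0.9],
                   [0.8, 0.2],
                   [0.3, 0.4]])
threshold_scores = np.array([0.5, 0.5, 0.5])
context_active = [True, True, False]
print(spot_gestures(scores, threshold_scores, context_active))
# → [(0, 1), (1, 0)]
```

The context gate is what distinguishes this from plain threshold-model spotting: frames where contextual cues indicate no intent to interact are rejected outright, reducing false positives from unintentional hand motion.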