Learning gestures for customizable human-computer interaction in the operating room

  • Authors:
  • Loren Arthur Schwarz, Ali Bigdelou, Nassir Navab

  • Affiliations:
  • Computer Aided Medical Procedures, Technische Universität München, Germany (all authors)

  • Venue:
  • MICCAI'11: Proceedings of the 14th International Conference on Medical Image Computing and Computer-Assisted Intervention, Part I
  • Year:
  • 2011

Abstract

Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and the interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach robustly recognizes learned gestures and distinguishes them from other movements.
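
The abstract outlines the core recipe: learn one low-dimensional model per gesture from example recordings of body-worn inertial sensors, then match new sensor windows against these per-gesture models and reject movements that fit none of them. The sketch below illustrates that idea only in spirit and is not the authors' implementation: PCA stands in for the learned manifold models, synthetic random data stands in for inertial sensor streams, and the window size, channel count, labels, and threshold rule are all hypothetical.

```python
# Sketch of gesture recognition via per-gesture low-dimensional models.
# NOT the paper's method: PCA replaces the learned manifolds, and random
# data replaces recordings from wireless body-worn inertial sensors.
import numpy as np
from sklearn.decomposition import PCA

WINDOW = 50          # samples per gesture window (hypothetical)
CHANNELS = 4 * 6     # e.g. 4 inertial sensors x 6 axes (hypothetical)


class GestureModel:
    """One low-dimensional model per gesture, trained from example windows."""

    def __init__(self, n_dims=3):
        self.pca = PCA(n_components=n_dims)
        self.threshold = 0.0

    def fit(self, windows):
        # windows: (n_examples, WINDOW * CHANNELS) flattened training gestures
        self.pca.fit(windows)
        # largest training reconstruction error serves as a rejection threshold
        recon = self.pca.inverse_transform(self.pca.transform(windows))
        self.threshold = float(np.max(np.linalg.norm(windows - recon, axis=1)))
        return self

    def error(self, window):
        w = window.reshape(1, -1)
        recon = self.pca.inverse_transform(self.pca.transform(w))
        return float(np.linalg.norm(w - recon))


def recognize(window, models, slack=1.2):
    """Return the best-matching gesture label, or None for other movements."""
    errors = {label: m.error(window) for label, m in models.items()}
    label, err = min(errors.items(), key=lambda kv: kv[1])
    return label if err <= slack * models[label].threshold else None


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic stand-ins for recorded training gestures
    train = {
        "zoom":   rng.normal(0.0, 1.0, (20, WINDOW * CHANNELS)),
        "rotate": rng.normal(3.0, 1.0, (20, WINDOW * CHANNELS)),
    }
    models = {label: GestureModel().fit(X) for label, X in train.items()}
    probe = rng.normal(3.0, 1.0, WINDOW * CHANNELS)
    print(recognize(probe, models))
```

The per-gesture rejection threshold is what lets such a system separate learned gestures from unrelated operating-room movements; in this sketch it is simply the largest training reconstruction error scaled by a slack factor, which is an assumption rather than the criterion used in the paper.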