Evaluation of gesture based interfaces for medical volume visualization tasks

  • Authors:
  • Can Kirmizibayrak;Nadezhda Radeva;Mike Wakid;John Philbeck;John Sibert;James Hahn

  • Affiliations:
  • The George Washington University, Washington, DC (all authors)

  • Venue:
  • Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry
  • Year:
  • 2011


Abstract

Physicians are accustomed to using volumetric datasets for medical assessment, diagnosis, and treatment. These datasets can be displayed as 3D computer visualizations that allow physicians to study the overall shape and internal anatomical structures. Gesture-based interfaces can be beneficial for interacting with such visualizations in a variety of medical settings. We conducted two user studies exploring different gesture-based interfaces for interaction with volume visualizations. The first experiment focused on rotation tasks, comparing the performance of a gesture-based interface (using the Microsoft Kinect) with the mouse. The second experiment studied localization of internal structures, comparing slice-based visualizations controlled via gestures and the mouse, as well as a 3D Magic Lens visualization. The results showed that the gesture-based interface outperformed the traditional mouse in both time and accuracy in the orientation-matching task. In the second experiment, the traditional mouse was the more accurate interface; however, the gesture-based Magic Lens achieved the fastest target localization time. We discuss these findings and their implications for the use of gesture-based interfaces in medical volume visualization.
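The abstract does not give implementation details for the rotation interface. As an illustrative sketch only, a gesture-based rotation control of the kind described might map a tracked hand's displacement (e.g., from a Kinect skeleton stream) linearly to yaw/pitch increments applied to the rendered volume; the linear mapping and the sensitivity constant below are assumptions, not the authors' method:

```python
from dataclasses import dataclass

@dataclass
class Rotation:
    yaw_deg: float    # rotation about the vertical axis
    pitch_deg: float  # rotation about the horizontal axis

def hand_delta_to_rotation(prev_xy, curr_xy, sensitivity_deg_per_m=180.0):
    """Map hand displacement between two frames (metres, in the sensor's
    image plane) to a rotation increment for the volume.

    Hypothetical mapping: horizontal motion -> yaw, vertical motion -> pitch
    (inverted so an upward hand motion tilts the volume toward the viewer).
    """
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    return Rotation(yaw_deg=dx * sensitivity_deg_per_m,
                    pitch_deg=-dy * sensitivity_deg_per_m)

# Example: hand moves 10 cm to the right between frames
r = hand_delta_to_rotation((0.0, 0.0), (0.10, 0.0))
```

A real system would additionally need gesture segmentation (deciding when the hand is "engaged"), smoothing of the noisy skeleton positions, and clutching, none of which the abstract specifies.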