Physicians are accustomed to using volumetric datasets for medical assessment, diagnosis, and treatment. These datasets can be displayed as 3D visualizations so that physicians can study the overall shape and internal anatomical structures. Gesture-based interfaces can be beneficial for interacting with such visualizations in a variety of medical settings. We conducted two user studies that explore different gesture-based interfaces for interaction with volume visualizations. The first experiment focused on rotation tasks, comparing the performance of a gesture-based interface (using the Microsoft Kinect) to that of the mouse. The second experiment studied localization of internal structures, comparing slice-based visualizations controlled via gestures and via the mouse, in addition to a 3D Magic Lens visualization. The results showed that the gesture-based interface outperformed the traditional mouse in both time and accuracy on the orientation-matching task. In the second experiment, the traditional mouse was the more accurate interface; however, the gesture-based Magic Lens yielded the fastest target localization times. We discuss these findings and their implications for the use of gesture-based interfaces in medical volume visualization.