Within crime scene analysis, a framework providing interactive visualization and gesture-based manipulation of virtual objects, while still letting the user see the real environment, is a useful approach both for interpreting cues and for instructional purposes. This paper presents a framework comprising a collection of techniques to enhance the reliability, accuracy, and overall effectiveness of gesture-based interaction, applied to the interactive interpretation and evaluation of a crime scene in an augmented reality environment. The interface layout is visualized through a stereoscopic, see-through-capable Head-Mounted Display (HMD) that projects graphics into the central region of the user's field of view, floating in a close-at-hand volume. The interaction paradigm exploits both hands concurrently to perform precise manipulation of 3D models of objects possibly present at the crime scene, as well as distance and angular measurements, allowing the user to formulate visual hypotheses with minimal interaction effort. The interaction is adapted to the user's needs in real time by monitoring hand and finger dynamics, supporting both complex actions (such as the manipulation and measurement mentioned above) and conventional keyboard-like operations.
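The distance and angular measurements described above reduce, geometrically, to computations on tracked 3D points (e.g. fingertip positions reported by the hand tracker). The paper does not specify its implementation; the following is a minimal sketch, where the point format (3D tuples in a common world frame) and the function names are assumptions for illustration:

```python
import math

def distance(p, q):
    """Euclidean distance between two tracked 3D points,
    e.g. two fingertip positions pinned to scene features."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle_deg(u_from, vertex, v_to):
    """Angle (in degrees) at `vertex` formed by the rays toward
    `u_from` and `v_to` -- an angular measurement between two cues."""
    u = [a - b for a, b in zip(u_from, vertex)]
    v = [a - b for a, b in zip(v_to, vertex)]
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_angle = max(-1.0, min(1.0, dot / (norm_u * norm_v)))
    return math.degrees(math.acos(cos_angle))
```

In a bimanual setup of this kind, each argument would typically be the world-space position of a fingertip from either hand, so the user can measure between two points touched simultaneously.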