Gesture Based Interface for Crime Scene Analysis: A Proposal

  • Authors:
  • Andrea F. Abate; Maria Marsico; Stefano Levialdi; Vincenzo Mastronardi; Stefano Ricciardi; Genoveffa Tortora

  • Affiliations:
  • Dipartimento di Matematica e Informatica, Università degli Studi di Salerno, Fisciano (SA), Italy 20186
  • Dipartimento di Informatica, Università degli Studi di Roma "La Sapienza", Roma 00198
  • Dipartimento di Informatica, Università degli Studi di Roma "La Sapienza", Roma 00198
  • Dipartimento di Scienze Psichiatriche e Medicina Psicologica, Università degli Studi di Roma "La Sapienza", Roma 00185
  • Dipartimento di Matematica e Informatica, Università degli Studi di Salerno, Fisciano (SA), Italy 20186
  • Dipartimento di Matematica e Informatica, Università degli Studi di Salerno, Fisciano (SA), Italy 20186

  • Venue:
  • ICCSA '08 Proceedings of the international conference on Computational Science and Its Applications, Part II
  • Year:
  • 2008

Abstract

Within crime scene analysis, a framework providing interactive visualization and gesture-based manipulation of virtual objects, while the real environment remains visible, is a promising approach both for the interpretation of cues and for instructional purposes. This paper presents a framework offering a collection of techniques to enhance the reliability, accuracy, and overall effectiveness of gesture-based interaction, applied to the interactive interpretation and evaluation of a crime scene in an augmented reality environment. The interface layout is visualized via a stereoscopic, see-through-capable Head Mounted Display (HMD), which projects graphics into the central region of the user's field of view, floating within a close-at-hand volume. The interaction paradigm exploits both hands concurrently to perform precise manipulation of 3D models of objects possibly present on the crime scene, as well as distance/angular measurements, allowing the user to formulate visual hypotheses with minimal interaction effort. Interaction is adapted to the user's needs in real time by monitoring hand and finger dynamics, so as to support both complex actions (such as the above-mentioned manipulation or measurement) and conventional keyboard-like operations.
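The abstract mentions distance and angular measurements taken between tracked hand positions in the augmented scene. A minimal sketch of the underlying geometry is shown below; the function names and the point/vector representation are illustrative assumptions, not the paper's actual API:

```python
import math

def distance(p, q):
    # Euclidean distance between two 3D points,
    # e.g. two tracked fingertip positions (illustrative only).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle_deg(u, v):
    # Angle in degrees between two 3D direction vectors,
    # e.g. directions picked out by the two hands (illustrative only).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_t = max(-1.0, min(1.0, dot / (norm_u * norm_v)))
    return math.degrees(math.acos(cos_t))

# Example: fingertips 0.3 m apart along x; two perpendicular directions.
print(distance((0.0, 0.0, 0.0), (0.3, 0.0, 0.0)))   # → 0.3
print(angle_deg((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # → 90.0
```

In a system like the one described, such measurements would be driven by the hand-tracking data rather than hard-coded coordinates; the sketch only shows the metric computations themselves.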