Augmented reality environment for life support training

  • Authors:
  • Fabrício Pretto;Isabel Harb Manssour;Maria H. Itaqui Lopes;Emerson Rodrigues da Silva;Márcio Sarroglia Pinho

  • Affiliations:
  • PPGCC/PUCRS, POA, Brazil; FACIN/PUCRS, POA, Brazil; FAMED/PUCRS, POA, Brazil; FAMED/PUCRS, POA, Brazil; PPGCC/PUCRS, POA, Brazil

  • Venue:
  • Proceedings of the 2009 ACM symposium on Applied Computing
  • Year:
  • 2009

Abstract

Medical qualification in Life Support (LS) training is constantly being improved. However, several problems remain in the training sessions. During these sessions, students or physicians can repeatedly practice patient care procedures in simulated scenarios using anatomical manikins especially designed for this type of training. Current manikins incorporate several resources to support qualified training, such as pulse, arrhythmia and auscultation simulators. However, some deficiencies have been detected in the existing LS training structure, such as the lack of automatic feedback to students in response to their actions on the manikin, of images such as facial expressions and body injuries, and of their combination with sounds that represent the clinical state of the patient. The main goal of the ARLIST project is to enhance the traditional environment currently used for LS training by introducing image and sound resources into the training manikins. Through these features, we can simulate aspects such as facial expressions, skin color changes, scratches and skin injuries by projecting images over the manikin body, and also play sounds such as the cries of pain or groans of an injured person.
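
The abstract describes driving the manikin's presentation from a simulated clinical state via projected images and sound cues. The snippet below is a minimal illustrative sketch of such a state-to-cue mapping, not the ARLIST implementation: the state names and asset file names are hypothetical, and pygame is used here only as a convenient stand-in for the projection and audio playback layer.

```python
import pygame

# Hypothetical mapping from simulated clinical state to visual/audio cues.
# Asset file names are placeholders, not taken from the ARLIST project.
STATE_CUES = {
    "stable":   {"image": "face_neutral.png", "sound": None},
    "pain":     {"image": "face_grimace.png", "sound": "groan.wav"},
    "cyanosis": {"image": "skin_bluish.png",  "sound": "labored_breathing.wav"},
}

def show_state(state: str, screen: pygame.Surface) -> None:
    """Display the image for the given state (as projected onto the manikin)
    and play its associated sound cue, if any."""
    cue = STATE_CUES[state]
    image = pygame.image.load(cue["image"]).convert()
    # Scale the image to fill the projector's output surface.
    image = pygame.transform.scale(image, screen.get_size())
    screen.blit(image, (0, 0))
    pygame.display.flip()
    if cue["sound"]:
        pygame.mixer.Sound(cue["sound"]).play()

if __name__ == "__main__":
    pygame.init()
    pygame.mixer.init()
    screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)
    show_state("pain", screen)   # e.g., react to an incorrect student action
    pygame.time.wait(3000)
    pygame.quit()
```

In a real training setup, the state passed to such a function would come from the instructor's console or from sensors detecting the student's actions on the manikin, so that the visual and audio feedback changes automatically as the simulated patient's condition evolves.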