Towards computational understanding of skill levels in simulation-based surgical training via automatic video analysis

  • Authors:
  • Qiang Zhang; Baoxin Li

  • Affiliations:
  • Computer Science & Engineering, Arizona State University, Tempe, AZ (both authors)

  • Venue:
  • ISVC'10 Proceedings of the 6th international conference on Advances in visual computing - Volume Part III
  • Year:
  • 2010


Abstract

Analysis of motion expertise is an important problem in many domains, including sports and surgery. In recent years, surgical simulation has emerged at the forefront of new technologies for improving the education and training of surgical residents. In simulation-based surgical training, a key task is to rate the performance of the operator, which is currently done by senior surgeons. Because this practice is costly, researchers have been working towards building automated systems that achieve computational understanding of surgical skills, largely through analysis of motion data captured by video or other modalities. This paper presents our study of a fundamental issue in building such automated systems: how visual features computed from videos of surgical actions may relate to the motion expertise of the operator. Utilizing domain-specific knowledge, we propose algorithms for detecting visual features that support assessing the operator's skill. A set of video streams captured from resident surgeons in two local hospitals was employed in our analysis. The experiments revealed useful observations on potential correlations between computable visual features and the motion expertise of the subjects, thereby offering insights into how to build automatic systems for expertise evaluation.