VIS-a-VE: visual augmentation for virtual environments in surgical training

  • Authors:
  • Adrian J. Chung, Fani Deligianni, Pallav Shah, Athol Wells, Guang-Zhong Yang

  • Affiliations:
  • Royal Society/Wolfson Foundation Medical Image Computing Laboratory, Imperial College, London, UK (A. J. Chung, F. Deligianni, G.-Z. Yang); Royal Brompton and Harefield NHS Trust (P. Shah, A. Wells)

  • Venue:
  • EUROVIS'05: Proceedings of the Seventh Joint Eurographics / IEEE VGTC Conference on Visualization
  • Year:
  • 2005


Abstract

Photo-realistic rendering combined with computer vision techniques is an important trend in developing next-generation surgical simulation devices. Training with simulators is generally lower in cost and more efficient than traditional methods that involve supervised learning on actual patients. Incorporating genuine patient data in the simulation can significantly improve the efficacy of training and skills assessment. In this paper, a photo-realistic simulation architecture is described that utilises patient-specific models for training in minimally invasive surgery. The datasets are constructed by combining computed tomography (CT) images with bronchoscopy video of the same patient so that the three-dimensional structures and visual appearance are accurately matched. Using simulators enriched by a library of datasets with sufficient patient variability, trainees can experience a wide range of realistic scenarios, including rare pathologies, with correct visual information. The matching of CT and video data is accomplished by a newly developed 2D/3D registration method that exploits a shape-from-shading similarity measure. Additionally, a method has been devised to estimate shading parameters by modelling the bidirectional reflectance distribution function (BRDF) of the visible surfaces. The derived BRDF is then used to predict the expected shading intensity so that a texture map independent of lighting conditions can be extracted. New views can thus be generated that were not captured in the original bronchoscopy video, allowing free navigation of the acquired 3D model with enhanced photo-realism.
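
As an illustration of the pipeline the abstract describes, the following Python sketch combines its two key steps: registering a bronchoscopy frame to the CT-derived surface by comparing it against predicted shading images, and dividing out the predicted shading at the registered pose to recover a lighting-independent texture map. This is a minimal sketch, not the authors' implementation: a Lambertian point-light model stands in for the paper's fitted BRDF, plain normalised cross-correlation stands in for its shape-from-shading similarity measure, and the function names (`predict_shading`, `register`, `extract_texture`) and the exhaustive pose search are hypothetical.

```python
import numpy as np

def predict_shading(points, normals, cam_pos):
    """Predicted shading for a point light co-located with the camera,
    as in bronchoscopy, under a Lambertian BRDF with inverse-square
    falloff (a simplification of the paper's fitted BRDF)."""
    to_light = cam_pos - points                         # (..., 3) vectors to the light
    dist = np.linalg.norm(to_light, axis=-1)
    l = to_light / dist[..., None]                      # unit light directions
    cos_theta = np.clip((normals * l).sum(axis=-1), 0.0, None)
    return cos_theta / dist**2                          # N.L with distance attenuation

def ncc(a, b):
    """Normalised cross-correlation between two same-sized greyscale
    images; a simplified stand-in for the paper's shape-from-shading
    similarity measure."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register(frame, render_shading, candidate_poses):
    """2D/3D registration by exhaustive search: choose the camera pose
    whose CT-derived shading image best matches the video frame.
    render_shading(pose) should rasterise the CT surface and apply
    predict_shading at every visible pixel."""
    scores = [ncc(frame, render_shading(pose)) for pose in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]

def extract_texture(frame, shading, eps=1e-4):
    """Divide out the predicted shading to recover a lighting-independent
    texture (albedo) map from a greyscale frame."""
    return frame / np.maximum(shading, eps)
```

In practice the pose search would be a continuous six-degree-of-freedom optimisation rather than a scan over a candidate list, and the CT-surface rasteriser behind `render_shading` is omitted here for brevity; both are substantial components of a full system.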