Intraoperative brain deformations decrease accuracy in image-guided neurosurgery. Approaches that quantify these deformations from 3-D reconstructions of the cortectomy surface have been described and have shown promising results for extrapolation to the whole brain volume using additional prior knowledge or sparse volume modalities. Quantifying brain deformations from surface measurements requires registering surfaces acquired at different times during the surgical procedure, with challenges that vary with the patient and the surgical step. In this paper, we propose a new, flexible surface registration approach for any textured point cloud computed by a stereoscopic or laser-range technique. The method combines three terms: the first is related to image intensities, the second to Euclidean distance, and the third to anatomical landmarks automatically extracted and continuously tracked in the 2-D video stream. Performance was evaluated on both phantom and clinical cases. The complete method, including textured point cloud reconstruction, achieved accuracy within 2 mm, which is the usual rigid registration error of neuronavigation systems before deformations occur. Its main advantage is that it exploits all available data, including the microscope video stream, at a higher temporal resolution than previously published methods.
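The three-term structure described above can be sketched as a single cost function over corresponding points. This is a minimal illustrative sketch only: it assumes point-to-point correspondences are already established, and the quadratic penalties and the weights `alpha`, `beta`, `gamma` are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def registration_cost(src_pts, src_intensities,
                      tgt_pts, tgt_intensities,
                      src_landmarks, tgt_landmarks,
                      alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical three-term cost for registering textured point clouds.

    Weights and quadratic form are illustrative choices, not the
    formulation used in the paper.
    """
    # Term 1: image-intensity (texture) mismatch at corresponding points
    e_intensity = np.mean((src_intensities - tgt_intensities) ** 2)
    # Term 2: Euclidean distance between corresponding 3-D points
    e_distance = np.mean(np.sum((src_pts - tgt_pts) ** 2, axis=1))
    # Term 3: distance between matched anatomical landmarks
    #         (tracked in the 2-D video stream, then lifted to 3-D)
    e_landmarks = np.mean(np.sum((src_landmarks - tgt_landmarks) ** 2, axis=1))
    return alpha * e_intensity + beta * e_distance + gamma * e_landmarks
```

A registration would then seek the deformation of the source cloud that minimizes this cost; the relative weights control the trade-off between texture agreement, geometric proximity, and landmark consistency.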