This paper describes a method for determining an object's pose given its 3D model and a 2D view. This 2D-3D registration problem arises in a number of medical applications, e.g. image-guided spine procedures. Previous approaches often rely on a good initial estimate of the pose parameters and an optimization procedure, such as the iterative closest point (ICP) algorithm, to refine this initial estimate. However, if the initial pose is not carefully chosen, such algorithms can mistake local minima for global minima, leading to registration errors. Specifying appropriate initial conditions, however, requires user interaction and is time consuming. We propose an approach in which sample 2D views are generated from the 3D model and matched against the given view (2D-3D registration). Additional views are then generated near the best-matching view, and the procedure is repeated until convergence. Results of estimating the pose of a vertebra from its 3D model, obtained from volumetric (CT or MR) data, and a 2D view, as might be obtained from fluoroscopic data, demonstrate that the pose can be reliably recovered without requiring extensive user interaction.
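The iterative generate-match-refine search described in the abstract can be sketched as a coarse-to-fine pose search. The sketch below is an illustration under simplifying assumptions, not the paper's implementation: the "view generator" is a hypothetical orthographic projection of a 3D point model under Euler-angle rotations, and the matching score is a toy point-to-point distance with known correspondences (a real system would compare image features against the fluoroscopic view).

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    # Compose rotations about the x, y, and z axes (Euler angles, radians).
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def render_view(model_pts, angles):
    # Hypothetical stand-in for the paper's view generation:
    # rotate the 3D point model, then project orthographically onto the xy-plane.
    R = rotation_matrix(*angles)
    return (model_pts @ R.T)[:, :2]

def view_similarity(view_a, view_b):
    # Toy matching score: negative mean point-to-point distance.
    # Assumes known correspondences; real 2D-3D registration would not.
    return -np.mean(np.linalg.norm(view_a - view_b, axis=1))

def estimate_pose(model_pts, target_view, n_samples=5, iters=6, span=np.pi):
    # Coarse-to-fine search: sample candidate poses around the current best,
    # keep the best-matching view, then resample in a narrower window.
    best = np.zeros(3)
    for _ in range(iters):
        grid = [best + np.array([dx, dy, dz])
                for dx in np.linspace(-span, span, n_samples)
                for dy in np.linspace(-span, span, n_samples)
                for dz in np.linspace(-span, span, n_samples)]
        best = max(grid, key=lambda a: view_similarity(
            render_view(model_pts, a), target_view))
        span *= 0.5  # generate additional views near the best one, repeat
    return best

# Usage: recover the pose of a synthetic point model from one 2D view.
rng = np.random.default_rng(0)
model = rng.normal(size=(50, 3))
target = render_view(model, np.array([0.4, -0.2, 0.7]))
estimated = estimate_pose(model, target)
```

Because each sampling round includes the previous best pose (the grid always contains the zero offset), the matching score is monotonically non-decreasing across iterations, which is the sense in which the procedure "repeats until convergence".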