In this article we describe a novel approach to obtaining the position of a chequerboard corner at sub-pixel accuracy from digital images. Applications of this method include photogrammetric scene reconstruction, pose estimation, self-localisation of (mobile) robots, and camera calibration. Chequerboard patterns are especially suitable for calibrating non-pinhole cameras such as fisheye or catadioptric cameras. We model the grey values of an imaged corner by a simulated imaging process; to obtain an efficient implementation on standard hardware, several approximations are presented. The grey value model is fitted to the input image in a least-squares sense using Levenberg-Marquardt optimisation. The model is described by four geometric parameters (position, rotation, and skew angle of the chequerboard corner), the width of the point spread function, and two photometric parameters (gain and offset). We compare our non-linear algorithm with two linear chequerboard corner localisation algorithms and with the classical localisation of photogrammetric circular targets. Ground truth is obtained by mechanically moving a target pattern in front of the camera with sub-pixel accuracy; the corner localisation algorithm is then used to measure the displacement. On average, our algorithm achieves a displacement error (half the difference between the 75% and 25% quantiles) of 0.032 pixels, which improves to 0.024 pixels under high-contrast conditions and degrades to 0.043 pixels under low-contrast conditions. The classical photogrammetric method based on circular targets achieves 0.045 pixels on average, 0.017 pixels under high contrast, and 0.132 pixels under low contrast. The actual positional errors of the corner point positions are lower by a factor of 1/2 than the measured displacement errors.
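The fitting procedure described above — a parametric grey-value model of a blurred chequerboard corner, optimised by Levenberg-Marquardt — can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes the Gaussian-blurred ideal corner can be approximated by a product of error functions along the two edge directions, and all function and parameter names are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def corner_model(params, xx, yy):
    """Grey-value model of a blurred chequerboard corner (illustrative).

    params: cx, cy (position), theta (rotation), skew (angle between
    the two edges minus 90 deg), sigma (PSF width), gain, offset.
    """
    cx, cy, theta, skew, sigma, gain, offset = params
    # Normal directions of the two corner edges through (cx, cy).
    a1 = theta - skew / 2.0
    a2 = theta + skew / 2.0 + np.pi / 2.0
    # Signed distances of each pixel to the two edges.
    u = np.cos(a1) * (xx - cx) + np.sin(a1) * (yy - cy)
    v = np.cos(a2) * (xx - cx) + np.sin(a2) * (yy - cy)
    # Gaussian blur of the ideal +/-1 corner pattern is (approximately)
    # separable into a product of error functions along u and v.
    s = sigma * np.sqrt(2.0)
    return offset + gain * erf(u / s) * erf(v / s)

def localise_corner(patch, init):
    """Least-squares fit of the corner model to an image patch
    via Levenberg-Marquardt; returns the refined parameter vector."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    residuals = lambda p: (corner_model(p, xx, yy) - patch).ravel()
    return least_squares(residuals, init, method="lm").x
```

On noiseless synthetic patches this converges to the generating parameters; on real images, the initial guess would come from a coarse corner detector, and the estimated `(cx, cy)` is the sub-pixel corner position.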