Probability (2nd ed.)
Matrix Computations (3rd ed.)
In Defense of the Eight-Point Algorithm
IEEE Transactions on Pattern Analysis and Machine Intelligence
Heteroscedastic Regression in Computer Vision: Problems with Bilinear Constraint
International Journal of Computer Vision - Special issue on visual surveillance
On the Fitting of Surfaces to Data with Covariances
IEEE Transactions on Pattern Analysis and Machine Intelligence
Rationalising the Renormalisation Method of Kanatani
Journal of Mathematical Imaging and Vision
Multiple View Geometry in Computer Vision
Statistical Optimization for Geometric Computation: Theory and Practice
The Geometry of Multiple Images: The Laws That Govern The Formation of Images of A Scene and Some of Their Applications
On the consistency of instantaneous rigid motion estimation
International Journal of Computer Vision
The Role of Total Least Squares in Motion Analysis
ECCV '98: Proceedings of the 5th European Conference on Computer Vision, Volume II
Removal of Translation Bias when Using Subspace Methods
ICCV '99: Proceedings of the International Conference on Computer Vision, Volume 2
Revisiting Hartley's Normalized Eight-Point Algorithm
IEEE Transactions on Pattern Analysis and Machine Intelligence
FNS, CFNS and HEIV: A Unifying Approach
Journal of Mathematical Imaging and Vision
Computational Statistics & Data Analysis
A recently proposed argument to explain the improved performance of the eight-point algorithm that results from using normalized data (Chojnacki, W., et al. in IEEE Trans. Pattern Anal. Mach. Intell. 25(9):1172–1177, 2003) relies upon adoption of a certain model for statistical data distribution. Under this model, the cost function that underlies the algorithm operating on the normalized data is statistically more advantageous than the cost function that underpins the algorithm using unnormalized data. Here we extend this explanation by introducing a more refined, structured model for data distribution. Under the extended model, the normalized eight-point algorithm turns out to be approximately consistent in a statistical sense. The proposed extension provides a link between the existing statistical rationalization of the normalized eight-point algorithm and the approach of Mühlich and Mester for enhancing total least squares estimation methods via equilibration. The paper forms part of a wider effort to rationalize and interrelate foundational methods in vision parameter estimation.
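The normalized eight-point algorithm that the abstract discusses can be sketched in a few lines of NumPy. This is a minimal illustration of the standard Hartley scheme (normalize each point set, solve the bilinear constraint by total least squares on the design matrix, enforce rank 2, then denormalize), not the refined statistical variant proposed in the paper; the function names and the sqrt(2) scaling convention are the usual textbook choices, assumed here for concreteness.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: move the centroid of the N x 2 point array
    to the origin and scale so the mean distance from it is sqrt(2).
    Returns homogeneous normalized points and the 3x3 transform T."""
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / mean_dist
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix F
    satisfying x2' F x1 = 0 for corresponding points x1, x2 (N x 2)."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # Each correspondence contributes one row of the design matrix,
    # obtained by expanding the bilinear constraint n2' F n1 = 0.
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1)),
    ])
    # Total least squares: right singular vector of the smallest
    # singular value minimizes ||A f|| subject to ||f|| = 1.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalizing transforms: x2' (T2' F_n T1) x1 = 0.
    return T2.T @ F @ T1
```

With noise-free correspondences generated from a pair of cameras, the recovered F satisfies the epipolar constraint essentially to machine precision; the statistical question raised in the abstract concerns how this estimate behaves once the data are perturbed by noise.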