Removing Outliers Using the L∞ Norm

  • Authors:
  • Kristy Sim; Richard Hartley

  • Affiliations:
  • Australian National University; Australian National University and National ICT Australia

  • Venue:
  • CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1
  • Year:
  • 2006


Abstract

Recently, there has been interest in solving geometric vision problems such as triangulation and camera resectioning using L∞ minimization. One key advantage of the L∞ norm over the L2 norm is that the L∞ cost function has a single minimum, unlike the commonly used L2 cost function, which typically has multiple local minima. However, one drawback of L∞ minimization is that it is not robust to outliers: by minimizing the L∞ norm instead of the L2 norm, we are, in essence, fitting the outliers rather than the good data. Therefore, before one can perform L∞ optimization on a problem, it is first necessary to remove outliers. A popular (but generally unsound) method of removing outliers is to minimize the cost function using standard optimization techniques and then, if the residual error is too great, remove the offending measurements and continue. Although this method can fail even for simple L2 optimization problems, we show in this paper that for a wide class of L∞ problems it is a valid technique. It is proved that the set of measurements with greatest residual must contain at least one outlier. Thus, if we keep throwing out the measurements with greatest residual, we will eventually remove all outliers from the data. We test this hypothesis on the multiview reconstruction problem and show that even simple strategies for discarding these maximum-residual measurements are effective at removing outliers.
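
The abstract's result suggests a simple outer loop: minimize the L∞ cost, and if the minimax residual exceeds an inlier tolerance, discard the measurements attaining it (the support set), since that set must contain at least one outlier. The sketch below is not the authors' implementation; it is a minimal Python illustration of the scheme applied to L∞ (Chebyshev) line fitting, where each L∞ fit is a small linear program. The function names `linf_fit` and `remove_outliers_linf`, the `inlier_tol` parameter, and the numerical tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def linf_fit(x, y):
    """Chebyshev (L-infinity) line fit: minimize max_i |a*x_i + b - y_i|.

    Cast as a linear program over variables [a, b, t]:
        minimize t  subject to  -t <= a*x_i + b - y_i <= t.
    """
    n = len(x)
    c = np.array([0.0, 0.0, 1.0])               # objective: minimize t
    ones = np.ones(n)
    A_ub = np.vstack([
        np.column_stack([x, ones, -ones]),      #  a*x + b - t <=  y
        np.column_stack([-x, -ones, -ones]),    # -a*x - b - t <= -y
    ])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    a, b, t = res.x
    return (a, b), t

def remove_outliers_linf(x, y, inlier_tol):
    """Iteratively discard the measurements attaining the maximum residual.

    Per the paper's result, the support set of each L-infinity fit must
    contain at least one outlier, so repeating the removal eventually
    eliminates all outliers (possibly along with a few inliers).
    """
    keep = np.arange(len(x))
    while True:
        (a, b), t = linf_fit(x[keep], y[keep])
        if t <= inlier_tol:                     # all remaining residuals small
            return (a, b), keep
        if len(keep) <= 2:
            raise RuntimeError("too few measurements left; tolerance too tight?")
        resid = np.abs(a * x[keep] + b - y[keep])
        keep = keep[resid < t * (1.0 - 1e-6)]   # drop the support set

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=50)
    y[[5, 20, 33]] += 5.0                       # inject three gross outliers
    (a, b), inliers = remove_outliers_linf(x, y, inlier_tol=0.2)
    print(f"fit: y = {a:.3f}x + {b:.3f}, kept {len(inliers)}/50 measurements")
```

For a multiview reconstruction problem of the kind the paper tests, the inner solver would presumably be replaced by the corresponding L∞ subproblem (e.g. L∞ triangulation solved by bisection over SOCP feasibility tests), while the outer removal loop stays the same.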