2D Affine Transformations Cannot Account for Human 3D Object Recognition

  • Authors:
  • Zili Liu; Daniel Kersten

  • Venue:
  • ICCV '98 Proceedings of the Sixth International Conference on Computer Vision
  • Year:
  • 1998


Abstract

Converging evidence has shown that human object recognition depends on observers' familiarity with objects' appearance: the more similar the objects are, the stronger this dependence becomes, and the more important two-dimensional (2D) image information becomes. The degree to which three-dimensional (3D) structural information is used, however, remains strongly debated. Previously, we showed that no model that allows rotations in the image plane of independent 2D templates can account for human performance in discriminating novel object views [3]. We now present results from models of generalized radial basis functions (GRBF), 2D nearest-neighbor matching that allows 2D affine transformations, and a Bayesian statistical estimator that integrates over all possible 2D affine transformations. The performance of the human observers relative to each of the models is better for novel views than for template views, suggesting that humans generalize from template views to novel views better than these models do. The Bayesian estimator yields the optimal performance achievable with 2D affine transformations and independent 2D templates. Therefore, no model of 2D affine operations on independent 2D templates can account for the human performance.
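To make the "2D nearest-neighbor matching under 2D affine transformations" idea concrete, here is a minimal illustrative sketch (not the authors' implementation): a template view is represented as a set of 2D points, and matching searches a coarse grid over a restricted subset of the affine family (rotations and uniform scales) for the transform that minimizes the squared distance to a probe view. All function names and parameters are hypothetical.

```python
import math

def affine_apply(points, a, b, c, d, tx, ty):
    # Apply the 2D affine map [[a, b], [c, d]] plus translation (tx, ty)
    # to a list of (x, y) points.
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

def match_cost(view1, view2):
    # Sum of squared distances between corresponding points.
    return sum((px - qx) ** 2 + (py - qy) ** 2
               for (px, py), (qx, qy) in zip(view1, view2))

def best_affine_match(template, probe, angles, scales):
    # Nearest-neighbor matching: grid-search rotations and scales
    # (a restricted slice of the full 6-parameter 2D affine family)
    # and return the lowest matching cost found.
    best = float('inf')
    for th in angles:
        for s in scales:
            a, b = s * math.cos(th), -s * math.sin(th)
            c, d = s * math.sin(th),  s * math.cos(th)
            transformed = affine_apply(template, a, b, c, d, 0.0, 0.0)
            best = min(best, match_cost(transformed, probe))
    return best

# Example: a probe that is the template rotated by 30 degrees matches
# with near-zero cost when that rotation is in the search grid.
template = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
th = math.pi / 6
probe = affine_apply(template, math.cos(th), -math.sin(th),
                     math.sin(th), math.cos(th), 0.0, 0.0)
angles = [i * math.pi / 12 for i in range(12)]
scales = [0.5, 1.0, 1.5]
cost = best_affine_match(template, probe, angles, scales)
```

The Bayesian estimator described in the abstract differs from this in that it integrates the match likelihood over all affine parameters rather than taking the single best one.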