Integrating visual information across camera movements with a visual-motor calibration map

  • Authors: Peter N. Prokopowicz; Paul R. Cooper
  • Affiliations: Department of Computer Science, University of Chicago, Chicago, IL; Department of Computer Science, Northwestern University, Evanston, IL
  • Venue: AAAI'96, Proceedings of the Thirteenth National Conference on Artificial Intelligence, Volume 2
  • Year: 1996

Abstract

Facing the competing demands for wider field of view and higher spatial resolution, computer vision will evolve toward greater use of foveal sensors and frequent camera movements. Integration of visual information across movements becomes a fundamental problem. We show that integration is possible using a biologically inspired representation we call the visual-motor calibration map. The map is a memory-based model of the relationship between camera movements and corresponding pixel locations before and after any movement. The map constitutes a self-calibration that can compensate for non-uniform sampling, lens distortion, mechanical misalignments, and arbitrary pixel reordering. Integration takes place entirely in a retinotopic frame, using a short-term, predictive visual memory.