Time-to-Collision Estimation from Motion Based on Primate Visual Processing

  • Authors:
  • John M. Galbraith; Garrett T. Kenyon; Richard W. Ziolkowski

  • Affiliations:
  • IEEE; -; IEEE

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 2005

Abstract

A population-coded algorithm, built on established models of motion processing in the primate visual system, computes the time-to-collision of a mobile robot with real-world environmental objects from video imagery. A sequence of four transformations begins with motion energy, a spatiotemporal-frequency-based computation of motion features. The subsequent stages extract image velocity features that are similar to, but distinct from, optic flow; compute "translation" features, which correct velocity errors, including those caused by the aperture problem; and, finally, estimate the time-to-collision. Biologically motivated population coding distinguishes this approach from previous methods based on optic flow. A comparison of the population-coded approach with the popular optic flow algorithm of Lucas and Kanade on three types of approaching objects shows that the proposed method produces more robust time-to-collision estimates from real-world input in the presence of the aperture problem and other noise sources. The improved performance comes at increased computational cost, which would ideally be mitigated by special-purpose hardware architectures.
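
For intuition about the first stage, below is a minimal sketch of an Adelson-Bergen-style motion-energy computation, the established model the abstract refers to. This is not the authors' implementation: the 1-D image geometry, filter sizes, and frequencies are illustrative assumptions.

```python
# Minimal motion-energy sketch (Adelson-Bergen style), not the paper's code.
# The 1-D (x, t) geometry and all filter parameters are illustrative.
import numpy as np
from scipy.signal import convolve2d

def gabor_pair(n, freq):
    """Even/odd (cosine/sine) quadrature Gabor pair of length n."""
    x = np.arange(n) - n // 2
    env = np.exp(-x**2 / (2.0 * (n / 6.0) ** 2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def motion_energy(xt, fx=0.1, ft=0.1, size=15):
    """Directionally tuned spatiotemporal energy for an (x, t) intensity array.

    Sums and differences of separable space-time quadrature filters yield
    oriented filters; squaring and adding the quadrature responses gives a
    phase-invariant "motion energy" tuned to one image velocity.
    """
    ex, ox = gabor_pair(size, fx)  # spatial even/odd pair
    et, ot = gabor_pair(size, ft)  # temporal even/odd pair
    f_even = np.outer(ex, et) - np.outer(ox, ot)  # oriented even filter
    f_odd = np.outer(ex, ot) + np.outer(ox, et)   # oriented odd filter
    r_even = convolve2d(xt, f_even, mode="same")
    r_odd = convolve2d(xt, f_odd, mode="same")
    return r_even**2 + r_odd**2

# Toy stimulus: a bright bar drifting one pixel per frame.
xt = np.zeros((128, 64))
for t in range(64):
    xt[(20 + t) % 128, t] = 1.0

energy = motion_energy(xt)               # tuned to the bar's direction
energy_rev = motion_energy(xt, ft=-0.1)  # opposite direction: weak response
print(energy.sum(), energy_rev.sum())
```

For the final stage, a common point of reference (not necessarily the paper's formulation, which works from population-coded velocity features) is the looming relation τ ≈ θ / (dθ/dt): the time-to-collision with an approaching object is approximately its current angular size divided by its rate of angular expansion.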