An EM/E-MRF algorithm for adaptive model-based tracking in extremely poor visibility

  • Authors:
  • Rustam Stolkin;Alistair Greig;Mark Hodgetts;John Gilby

  • Affiliations:
  • Sira Technology Centre, South Hill, Chislehurst, Kent BR7 5EH, United Kingdom and Department of Mechanical Engineering, University College London, Torrington Place, London WC1E 7JE, United Kingdom ...;Department of Mechanical Engineering, University College London, Torrington Place, London WC1E 7JE, United Kingdom;Sira Technology Centre, South Hill, Chislehurst, Kent BR7 5EH, United Kingdom;Sira Technology Centre, South Hill, Chislehurst, Kent BR7 5EH, United Kingdom

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2008


Abstract

This paper addresses the problem of visual tracking in conditions of extremely poor visibility. The human visual system can often correctly interpret images of such poor quality that they contain insufficient explicit information to do so. We assert that any vision system operating in such conditions must therefore make use of prior knowledge in several forms. A tracking algorithm is presented which combines observed data (the current image) with predicted data derived from prior knowledge of the object being viewed and an estimate of the camera's motion. During image segmentation, a predicted image is used to estimate class-conditional distribution models, and an Extended Markov Random Field (E-MRF) technique is used to combine observed image data with expectations of that data within a probabilistic framework. Interpretations of scene content and camera position are then mutually improved using Expectation Maximisation. Models of the background and tracked object are continually relearned, adapting iteratively with each new image frame. The algorithm is tested on real video sequences filmed in poor visibility conditions, with complete pre-measured ground-truth data.
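
The sketch below is not the authors' implementation; it is a minimal illustration, under stated assumptions, of the kind of EM iteration the abstract describes: a predicted label image (rendered from the object model at the current camera-pose estimate) seeds class-conditional Gaussian intensity models, which are then combined with the observed image and a simple neighbour-agreement term standing in for the E-MRF coupling. The function name `em_mrf_segment`, the two-class Gaussian assumption, the `beta` weight, and the 4-neighbourhood vote are all illustrative choices, and the camera-pose refinement step of the full algorithm is not shown.

```python
# Minimal sketch (assumed, not from the paper): one EM-style segmentation loop
# that fuses an observed image with a predicted {0,1} label map.
import numpy as np

def em_mrf_segment(observed, predicted, beta=1.0, n_iters=5):
    """Return a refined {0,1} label map for `observed` (2-D float array)."""
    labels = predicted.astype(int).copy()
    for _ in range(n_iters):
        # M-step: re-learn class-conditional Gaussians from the current labels.
        params = []
        for c in (0, 1):
            vals = observed[labels == c]
            mu = vals.mean() if vals.size else observed.mean()
            var = (vals.var() if vals.size else observed.var()) + 1e-6
            params.append((mu, var))

        # E-step: per-pixel log-likelihood of the observed data under each class.
        log_lik = np.stack([
            -0.5 * ((observed - mu) ** 2 / var + np.log(2 * np.pi * var))
            for (mu, var) in params
        ], axis=-1)                                      # shape (H, W, 2)

        # Neighbour agreement (4-neighbourhood vote) plus agreement with the
        # predicted image: a crude stand-in for the E-MRF prior term.
        padded = np.pad(labels, 1, mode='edge')
        votes = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]).astype(float)
        prior1 = beta * (votes / 4.0 + predicted) / 2.0  # support for class 1
        prior = np.stack([1.0 - prior1, prior1], axis=-1)

        labels = np.argmax(log_lik + np.log(prior + 1e-9), axis=-1)
    return labels
```

In the full algorithm described above, a segmentation step of this kind would alternate with re-estimation of the camera position, so that the predicted image and the learned object/background models improve each other from frame to frame.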