On the extraction of 3D models from airborne video sensors for geolocation

  • Authors:
  • Tristrom Cooke;Robert Whatmough;Nicholas J. Redding;Gary Ewing;Edwin El-Mahassni

  • Affiliations:
ISR Division, Defence Science and Technology Organisation, Edinburgh, SA 5111, Australia (all authors)

  • Venue:
  • Digital Signal Processing
  • Year:
  • 2009


Abstract

Geolocation of a feature in a video sequence collected from a moving platform is a task that must be undertaken in video exploitation, especially in surveillance and reconnaissance applications. Examples of sensor systems that are the focus of this work include manned and unmanned aerial vehicles. The approach described here uses positional information from three sources to compute refined three-dimensional coordinates for any feature in the video sequence. These three sources are: first, sensor-platform metadata describing the likely sensor footprint, based on the platform's position and attitude; second, the 3D scene information inherent in a video sequence collected from a moving platform; and third, geolocated and georectified reference imagery of the region of interest, such as aerial photography. We describe the overall steps involved in this process and the progress made to date.
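
To make the first of these sources concrete, the sketch below shows how a single pixel might be geolocated from sensor-platform metadata alone, by intersecting the pixel's viewing ray with a flat ground plane. This is a minimal illustration only, assuming a pinhole camera, a local Cartesian (ENU) world frame, and a camera-to-world rotation already derived from the platform's position and attitude; the function name pixel_to_ground and all parameter values are hypothetical and are not taken from the paper.

    import numpy as np

    def pixel_to_ground(u, v, K, R_cam_to_world, camera_pos, ground_z=0.0):
        """Intersect the viewing ray through pixel (u, v) with the plane z = ground_z.

        K is the 3x3 pinhole intrinsic matrix; R_cam_to_world rotates camera-frame
        vectors into the world frame; camera_pos is the sensor position in world
        (ENU) coordinates. Returns the 3D ground intersection point.
        """
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
        ray_world = R_cam_to_world @ ray_cam                 # rotate into world frame
        t = (ground_z - camera_pos[2]) / ray_world[2]        # scale factor to reach the plane
        if t <= 0:
            raise ValueError("Pixel ray does not intersect the ground plane ahead of the camera")
        return camera_pos + t * ray_world

    # Example: platform at 1000 m altitude with a nadir-looking camera.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 512.0],
                  [   0.0,    0.0,   1.0]])
    camera_pos = np.array([0.0, 0.0, 1000.0])
    # World frame is ENU (z up); a nadir camera's optical axis points along -z,
    # so the rotation flips z (and y, to stay right-handed).
    R_cam_to_world = np.array([[1.0,  0.0,  0.0],
                               [0.0, -1.0,  0.0],
                               [0.0,  0.0, -1.0]])
    print(pixel_to_ground(700.0, 520.0, K, R_cam_to_world, camera_pos))

In the full approach described in the abstract, such a metadata-only estimate would serve only as an initial footprint prediction; the 3D structure recovered from the moving-platform video and the registration against georectified reference imagery then refine the coordinates.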