Multi-spectral fusion for surveillance systems

  • Authors:
  • Simon Denman, Todd Lamb, Clinton Fookes, Vinod Chandran, Sridha Sridharan

  • Affiliations:
  • Image and Video Research Laboratory, Queensland University of Technology, GPO Box 2434, Brisbane 4001, Australia (all authors)

  • Venue:
  • Computers and Electrical Engineering
  • Year:
  • 2010


Abstract

Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality, colour video, for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes it difficult to distinguish between different people or objects of interest within the same scene. By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion method and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities, we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system. We compare tracking and abandoned object detection performance for the proposed fusion schemes against the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate abandoned object detection. Our results show that significant improvement can be achieved, and that a mid fusion scheme is the most effective.
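The abstract does not specify how its early fusion methods combine the two spectra; a minimal sketch of the general idea, assuming registered, same-sized greyscale frames and a simple weighted average (the weighting scheme and the `early_fuse` helper are illustrative, not the paper's actual method):

```python
import numpy as np

def early_fuse(visual, thermal, alpha=0.5):
    """Fuse a visual and a thermal frame by weighted averaging.

    Both frames are assumed to be spatially registered 2-D arrays of
    intensities in [0, 1]; `alpha` weights the visual modality.
    """
    if visual.shape != thermal.shape:
        raise ValueError("frames must be registered to the same size")
    return alpha * visual + (1.0 - alpha) * thermal

# Toy 2x2 "frames": a pixel visible only in one modality survives fusion.
visual = np.array([[1.0, 0.0], [0.2, 0.4]])
thermal = np.array([[0.0, 1.0], [0.6, 0.4]])
fused = early_fuse(visual, thermal, alpha=0.5)
```

With equal weights, a feature present in either spectrum (e.g. a person in shadow, visible only thermally) contributes half its intensity to the fused frame, so downstream tracking sees it even when one modality fails.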
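The modified condensation filter is described only at a high level, so the following is a generic one-dimensional sketch of a condensation (particle) filter whose particle count adapts each cycle; the adaptation rule (scaling the count with the posterior spread) and the Gaussian observation model are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, motion_std, observe, n_min=50, n_max=500):
    """One predict-weight-resample cycle with an adaptive particle count."""
    # Predict: diffuse each particle with Gaussian motion noise.
    particles = particles + rng.normal(0.0, motion_std, size=len(particles))
    # Weight: likelihood of each particle under the observation model.
    weights = observe(particles)
    weights = weights / weights.sum()
    # Adapt the count to the posterior spread: a tight, confident
    # estimate needs fewer samples than a diffuse one (illustrative rule).
    mean = np.sum(weights * particles)
    spread = np.sqrt(np.sum(weights * (particles - mean) ** 2))
    n_new = int(np.clip(100 * spread, n_min, n_max))
    # Resample n_new particles in proportion to their weights.
    return rng.choice(particles, size=n_new, p=weights)

# Track a stationary 1-D target at position 3.0 with a Gaussian
# observation model (a hypothetical stand-in for image features).
observe = lambda x: np.exp(-0.5 * (x - 3.0) ** 2)
particles = rng.uniform(-10.0, 10.0, size=200)
for _ in range(5):
    particles = condensation_step(particles, motion_std=0.5, observe=observe)
estimate = particles.mean()
```

As the particles converge on the target, the spread shrinks and the filter automatically carries fewer of them, which is the kind of dynamic resource allocation the abstract attributes to its modified filter; swapping the features used would amount to changing `observe` between cycles.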