Image Interpretation Using Multiple Sensing Modalities

  • Authors:
  • Chen-Chau Chu; J. K. Aggarwal

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 1992

Abstract

The AIMS (automatic interpretation using multiple sensors) system, which uses registered laser radar and thermal imagers, is discussed. Its objective is to detect and recognize man-made objects at kilometer range in outdoor scenes. The multisensor fusion approach is applied to four sensing modalities (range, intensity, velocity, and thermal) to improve both image segmentation and interpretation. Low-level attributes of image segments (regions) are computed by the segmentation modules and then converted to the KEE format. The knowledge-based interpretation modules are constructed using KEE and Lisp. AIMS applies forward chaining in a bottom-up fashion to derive object-level interpretations from databases generated by the low-level processing modules. The efficiency of the interpretation process is enhanced by transferring nonsymbolic processing tasks to a concurrent service manager (program). A parallel implementation of the interpretation module is reported. Experimental results using real data are presented.
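The bottom-up forward chaining described in the abstract can be illustrated with a minimal sketch. This is not the AIMS rule base (which was built in KEE and Lisp); the region attributes and rules below are hypothetical stand-ins showing how rules fire repeatedly over a fact database until no new object-level interpretations can be derived.

```python
# Minimal forward-chaining sketch (illustrative only; not the AIMS rule base).
# Facts are (region, attribute, value) triples; rules fire bottom-up until
# the fact set reaches a fixed point.

def forward_chain(facts, rules):
    """Repeatedly apply all rules to the fact set until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

# Hypothetical low-level attributes for one region, as a segmentation
# module might emit them across the four modalities.
facts = [
    ("r1", "hot", True),         # thermal modality
    ("r1", "flat_range", True),  # range modality
    ("r1", "moving", True),      # velocity modality
]

# Hypothetical rules: each maps the current fact set to derivable facts.
def rule_vehicle(facts):
    for (rid, attr, val) in list(facts):
        if attr == "hot" and val and (rid, "moving", True) in facts:
            yield (rid, "class", "vehicle")

def rule_manmade(facts):
    for (rid, attr, val) in list(facts):
        if attr == "flat_range" and val:
            yield (rid, "manmade", True)

derived = forward_chain(facts, [rule_vehicle, rule_manmade])
print(("r1", "class", "vehicle") in derived)  # True
```

In AIMS the analogous nonsymbolic work (attribute computation, database queries) is offloaded to a concurrent service manager so the symbolic rule engine spends its time only on inference.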