Parallel hypothesis driven video content analysis

  • Authors:
  • Ole-Christoffer Granmo

  • Affiliations:
  • Agder University College, Grooseveien 36, Grimstad, Norway

  • Venue:
  • Proceedings of the 2004 ACM symposium on Applied computing
  • Year:
  • 2004

Abstract

Extraction of features from images, followed by pattern classification, is a promising approach to automatic video analysis. However, a parallel processing environment is typically required for real-time performance. Still, single-CPU Bayesian network systems for hypothesis driven feature extraction have been able to classify image content in real time: the expected information value and processing cost of each feature are measured, and only efficient features are extracted. The goal of this paper is to combine the processing benefits of the parallel and hypothesis driven approaches. We use dynamic Bayesian networks to specify video analysis tasks and the particle filter (PF) for approximate inference, i.e., feature selection and classification. The inference accuracy of any given PF is determined by the number of particles it maintains. To increase the number of particles maintained without reducing the processing rate, we apply multiple PFs distributed in a LAN, together with a pooling system that coordinates their output. The resulting multi-PF architecture supports three video frame processing phases: a parallelized feature selection phase, followed by parallelized feature extraction and classification phases. Unfortunately, we observe a loss of inference accuracy when splitting a single PF into multiple independent PFs. To reduce this loss, we let the pooled PFs exchange particles across the LAN. An object tracking simulation demonstrates the ability of our architecture to select efficient features as well as the effectiveness of our particle exchange scheme: we observe a significant increase in inference accuracy compared to the tested non-parallel PF.
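
The abstract only sketches the multi-PF idea, so the toy Python example below (not taken from the paper) illustrates the general mechanism: several particle filters process the same observation stream, their per-frame estimates are pooled, and a small fraction of particles is periodically swapped between filters. The 1D random-walk tracking model, the ParticleFilter class, the ring-style exchange_particles helper, and the exchange_fraction parameter are all illustrative assumptions, not the architecture or parameters used by the author.

```python
# Illustrative sketch only: a toy 1D object-tracking task with several
# independent particle filters that periodically exchange particles.
# The model and all names here are assumptions for illustration, not the
# paper's actual architecture.
import numpy as np

rng = np.random.default_rng(0)

class ParticleFilter:
    def __init__(self, n_particles, process_noise=1.0, obs_noise=2.0):
        self.particles = rng.normal(0.0, 5.0, n_particles)  # 1D positions
        self.weights = np.full(n_particles, 1.0 / n_particles)
        self.process_noise = process_noise
        self.obs_noise = obs_noise

    def step(self, observation):
        # Predict: propagate particles through a random-walk motion model.
        self.particles += rng.normal(0.0, self.process_noise, self.particles.size)
        # Update: weight particles by a Gaussian observation likelihood.
        self.weights = np.exp(-0.5 * ((observation - self.particles) / self.obs_noise) ** 2)
        self.weights /= self.weights.sum()
        # Resample to counter weight degeneracy.
        idx = rng.choice(self.particles.size, self.particles.size, p=self.weights)
        self.particles = self.particles[idx]
        self.weights.fill(1.0 / self.particles.size)

    def estimate(self):
        return float(np.mean(self.particles))

def exchange_particles(filters, exchange_fraction=0.1):
    # Each filter sends a random subset of its particles to the next filter
    # in a ring; in the paper's setting this traffic would cross the LAN.
    k = int(exchange_fraction * filters[0].particles.size)
    outgoing = [rng.choice(f.particles, k, replace=False) for f in filters]
    for i, f in enumerate(filters):
        incoming = outgoing[(i - 1) % len(filters)]
        replace_idx = rng.choice(f.particles.size, k, replace=False)
        f.particles[replace_idx] = incoming

# Pooling: combine the per-filter estimates into one output per frame.
filters = [ParticleFilter(n_particles=200) for _ in range(4)]
true_position = 0.0
for t in range(50):
    true_position += rng.normal(0.0, 1.0)               # object motion
    observation = true_position + rng.normal(0.0, 2.0)  # noisy "feature"
    for f in filters:
        f.step(observation)
    if t % 5 == 0:
        exchange_particles(filters)
    pooled = np.mean([f.estimate() for f in filters])
    print(f"frame {t:2d}  true {true_position:6.2f}  pooled estimate {pooled:6.2f}")
```

In this sketch each filter maintains its own particle set, so the total particle count grows with the number of filters while the per-filter work stays constant; the periodic exchange step is what counteracts the accuracy loss that the abstract reports for fully independent PFs.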