Inexpensive fusion methods for enhancing feature detection

  • Authors:
  • Peter Wilkins; Tomasz Adamek; Noel E. O'Connor; Alan F. Smeaton

  • Affiliations:
  • Centre for Digital Video Processing & Adaptive Information Cluster, Dublin City University, Dublin, Ireland (all authors)

  • Venue:
  • Image Communication
  • Year:
  • 2007

Abstract

Recent successful approaches to high-level feature detection in image and video data have treated the problem as a pattern classification task. These approaches typically leverage techniques from statistical machine learning, coupled with ensemble architectures that create multiple feature detection models. Once created, co-occurrence between learned features can be captured to further boost performance. At multiple stages throughout these frameworks, various pieces of evidence can be fused together to improve performance. These approaches, whilst very successful, are computationally expensive and, depending on the task, require significant computational resources. In this paper we propose two fusion methods that aim to combine the output of an initial, basic statistical machine learning approach with a lower-quality information source, in order to gain diversity in the classified results whilst requiring only modest computing resources. Our approaches, validated experimentally on TRECVid data, are designed to be complementary to existing frameworks and can be regarded as possible replacements for the more computationally expensive combination strategies used elsewhere.
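
The abstract does not spell out the two proposed fusion methods. Purely as an illustration of the general idea, combining the output of a machine-learned detector with a cheaper, lower-quality source, the sketch below shows a simple weighted score-level (late) fusion. The function names, the min-max normalisation step, and the weight alpha are assumptions made for this example and are not taken from the paper.

    import numpy as np

    def min_max_normalise(scores):
        """Rescale raw detector scores to [0, 1] so the two sources are comparable."""
        scores = np.asarray(scores, dtype=float)
        lo, hi = scores.min(), scores.max()
        if hi == lo:
            return np.zeros_like(scores)
        return (scores - lo) / (hi - lo)

    def weighted_late_fusion(primary_scores, secondary_scores, alpha=0.8):
        """Combine a strong classifier's scores with a weaker, cheaper source.

        alpha weights the primary (machine-learned) detector; the remainder
        goes to the lower-quality source, which mainly contributes diversity.
        """
        p = min_max_normalise(primary_scores)
        s = min_max_normalise(secondary_scores)
        return alpha * p + (1.0 - alpha) * s

    # Hypothetical per-shot scores for one visual concept from two sources.
    svm_scores = [0.92, 0.40, 0.15, 0.77]    # e.g. an SVM-based concept detector
    cheap_scores = [0.60, 0.55, 0.10, 0.90]  # e.g. a low-level colour-similarity match
    fused = weighted_late_fusion(svm_scores, cheap_scores, alpha=0.8)
    print(fused)  # shots would then be re-ranked on these fused scores

In such a scheme the fusion itself costs only a normalisation and a weighted sum per item, which is consistent with the paper's stated aim of adding diversity to the classified results while using modest computing resources.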