Rule-Based Semantic Concept Classification from Large-Scale Video Collections

  • Authors:
  • Lin Lin; Mei-Ling Shyu; Shu-Ching Chen

  • Affiliations:
  • Lin Lin and Mei-Ling Shyu: Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA; Shu-Ching Chen: School of Computing and Information Sciences, Florida International University, Miami, FL, USA

  • Venue:
  • International Journal of Multimedia Data Engineering & Management
  • Year:
  • 2013

Abstract

The explosive growth and increasing complexity of multimedia data have created a high demand for multimedia services and applications that let people access and distribute the data easily. Traditional keyword-based information retrieval is no longer adequate; instead, multimedia data mining and content-based multimedia information retrieval have become key technologies. Among data mining techniques, association rule mining (ARM) is one of the most popular approaches for extracting useful information from multimedia data in the form of relationships between variables. In this paper, a novel rule-based semantic concept classification framework using weighted association rule mining (WARM) is proposed to address major issues and challenges in large-scale video semantic concept classification; by capturing the significance degrees of feature-value pairs, it improves the applicability of ARM. Unlike traditional ARM, in which rules are generated by frequency counts and the items in a rule are treated as equally important, the proposed WARM algorithm utilizes multiple correspondence analysis (MCA) to explore the relationships among features and concepts and to assign different contributions to the features during rule generation. To the authors' best knowledge, this is one of the first WARM-based classifiers in the field of multimedia concept retrieval. Experimental results on the benchmark TRECVID data demonstrate that the proposed framework handles large-scale and imbalanced video data with promising classification and retrieval performance.