Bridging low-level features and high-level semantics via fMRI brain imaging for video classification

  • Authors:
  • Xintao Hu;Fan Deng;Kaiming Li;Tuo Zhang;Hanbo Chen;Xi Jiang;Jinglei Lv;Dajiang Zhu;Carlos Faraco;Degang Zhang;Arsham Mesbah;Junwei Han;Xiansheng Hua;Li Xie;Stephen Miller;Lei Guo;Tianming Liu

  • Affiliations:
  • Northwestern Polytechnical University, Xi'an, China;the University of Georgia, Athens, GA, USA;Northwestern Polytechnical University, Xi'an, China;Northwestern Polytechnical University, Xi'an, China;Northwestern Polytechnical University, Xi'an, China;Northwestern Polytechnical University, Xi'an, China;Northwestern Polytechnical University, Xi'an, China;the University of Georgia, Athens, GA, USA;the University of Georgia, Athens, GA, USA;Northwestern Polytechnical University, Xi'an, China;the University of Georgia, Athens, GA, USA;Northwestern Polytechnical University, Xi'an, China;Microsoft Research Asia, Beijing, China;Zhejiang University, Zhejiang, China;the University of Georgia, Athens, GA, USA;Northwestern Polytechnical University, Xi'an, China;the University of Georgia, Athens, GA, USA

  • Venue:
  • Proceedings of the International Conference on Multimedia
  • Year:
  • 2010

Abstract

The multimedia content analysis community has made significant efforts to bridge the gap between low-level features and the high-level semantics perceived by human cognitive systems, such as real-world objects and concepts. Both low-level features and high-level semantics are extensively studied in the fields of multimedia analysis and brain imaging. For instance, in multimedia analysis, many algorithms are available for feature extraction, and benchmark datasets such as TRECVID are available. In brain imaging, the brain regions responsible for vision, auditory perception, language, and working memory are well studied via functional magnetic resonance imaging (fMRI). This paper presents our initial effort to marry these two fields in order to bridge the gap between low-level features and high-level semantics via fMRI brain imaging. In our experimental paradigm, we performed fMRI brain imaging while university student subjects watched video clips selected from the TRECVID datasets. At the current stage, we focus on the three concepts of sports, weather, and commercial/advertisement specified in TRECVID 2005. Meanwhile, the brain regions in the vision, auditory, language, and working memory networks are quantitatively localized and mapped via task-based fMRI paradigms, and the fMRI responses in these regions are used to extract features that represent the brain's comprehension of semantics. Our computational framework aims to learn the low-level feature sets that best correlate with the fMRI-derived semantics on training videos with fMRI scans; the learned models are then applied to larger-scale test datasets without fMRI scans for category classification. Our results show that: 1) there are meaningful couplings between the brain's fMRI responses and the video stimuli, suggesting the validity of linking semantics and low-level features via fMRI; and 2) the low-level feature sets learned from fMRI-derived semantic features significantly improve the classification of video categories compared with classification based on the original low-level features.
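
Below is a minimal sketch of the two-stage framework outlined in the abstract: select the low-level video features that couple most strongly with fMRI-derived semantic features on the training videos, then train a classifier on the selected features and apply it to test videos without fMRI scans. The abstract does not specify the learning algorithm, so the correlation-based feature ranking, the linear SVM, and all variable names here are assumptions for illustration only, not the authors' actual method.

```python
# Hypothetical sketch of fMRI-guided feature selection + video classification.
# Assumptions: correlation-based ranking and a linear SVM; the paper's actual
# learning procedure is not described in the abstract.
import numpy as np
from sklearn.svm import LinearSVC

def select_features(X_train, fmri_semantics, k=50):
    """Rank each low-level feature by its strongest absolute Pearson
    correlation with any fMRI-derived semantic dimension; keep the top k."""
    scores = np.zeros(X_train.shape[1])
    for j in range(X_train.shape[1]):
        corrs = [abs(np.corrcoef(X_train[:, j], fmri_semantics[:, d])[0, 1])
                 for d in range(fmri_semantics.shape[1])]
        scores[j] = max(corrs)
    return np.argsort(scores)[::-1][:k]

def classify_with_fmri_guided_features(X_train, S_train, y_train, X_test):
    """X_train: low-level features of training videos shown during scanning.
    S_train: fMRI responses from vision/auditory/language/working-memory ROIs.
    y_train: concept labels (sports, weather, commercial/advertisement).
    X_test: larger test set without fMRI scans."""
    selected = select_features(X_train, S_train)
    clf = LinearSVC().fit(X_train[:, selected], y_train)
    return clf.predict(X_test[:, selected])
```

The design choice reflected here is that fMRI data are needed only at training time; at test time the classifier consumes the selected low-level features alone, which is what allows the learned model to scale to datasets without brain imaging.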