Brain state decoding for rapid image retrieval

  • Authors:
  • Jun Wang; Eric Pohlmeyer; Barbara Hanna; Yu-Gang Jiang; Paul Sajda; Shih-Fu Chang

  • Affiliations:
  • Columbia University, New York, NY, USA; Columbia University, New York, NY, USA; Neuromatters (Meridian Vision), Princeton, NJ, USA; Columbia University, New York, NY, USA; Columbia University, New York, NY, USA; Columbia University, New York, NY, USA

  • Venue:
  • MM '09 Proceedings of the 17th ACM international conference on Multimedia
  • Year:
  • 2009

Abstract

Human visual perception can recognize a wide range of targets under challenging conditions, but has limited throughput. Machine vision and automatic content analytics can process images at high speed, but suffer from inadequate recognition accuracy for general target classes. In this paper, we propose a new paradigm that explores and combines the strengths of both systems. A single-trial EEG-based brain-computer interface (BCI) subsystem is used to detect objects of interest of arbitrary classes in an initial subset of images. The EEG detection outcomes are used as input to a graph-based pattern mining subsystem that identifies, refines, and propagates the labels in order to retrieve relevant images from a much larger pool. The combined strategy is unique in its generality, robustness, and high throughput, and has great potential for advancing the state of the art in media retrieval applications. We have evaluated the proposed system on multiple and diverse image classes over several data sets, including Internet images (Caltech 101) and remote sensing imagery, and demonstrated significant performance gains. We also present insights learned from the experiments and discuss future research directions.
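
The abstract describes using EEG-derived relevance labels on a small seed set and then propagating them over a similarity graph to rank a much larger image pool. The sketch below is a minimal, hypothetical illustration of that second stage as a standard graph-based label propagation step; it is not the authors' implementation, and the function name, parameters (k-nearest-neighbor graph size, damping factor alpha, iteration count), and Gaussian affinity choice are assumptions for illustration only.

```python
import numpy as np

def propagate_labels(features, seed_scores, k=10, alpha=0.9, iters=50):
    """Hypothetical sketch: label propagation over a k-NN similarity graph.

    features    : (n, d) image feature matrix for the full image pool
    seed_scores : (n,) initial relevance scores; nonzero only for the small
                  EEG-labeled subset, zero for the unlabeled remainder
    Returns refined relevance scores for every image in the pool.
    """
    # Pairwise squared Euclidean distances -> Gaussian affinities
    sq = np.sum(features ** 2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    sigma2 = np.median(dist2) + 1e-12
    W = np.exp(-dist2 / sigma2)
    np.fill_diagonal(W, 0.0)

    # Keep only the k strongest neighbors per node (sparse k-NN graph)
    drop = np.argsort(W, axis=1)[:, :-k]
    rows = np.arange(W.shape[0])[:, None]
    W[rows, drop] = 0.0
    W = np.maximum(W, W.T)  # symmetrize the graph

    # Normalized affinity: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1) + 1e-12
    S = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]

    # Iterative propagation: f <- alpha * S f + (1 - alpha) * seeds
    f = seed_scores.astype(float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1.0 - alpha) * seed_scores
    return f

# Example usage (synthetic data): rank the pool by propagated relevance.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32))          # stand-in image features
    seeds = np.zeros(200)
    seeds[:10] = 1.0                        # stand-in EEG-flagged seed images
    scores = propagate_labels(X, seeds)
    ranking = np.argsort(-scores)           # retrieval order over the pool
    print(ranking[:20])
```

In this kind of scheme the damping factor alpha trades off trust in the graph structure against trust in the noisy EEG seed labels, which matches the abstract's notion of refining the EEG detections before propagating them to the larger pool.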