Learning to recognize objects from unseen modalities

  • Authors:
  • C. Mario Christoudias; Raquel Urtasun; Mathieu Salzmann; Trevor Darrell

  • Affiliations:
  • UC Berkeley, EECS & ICSI; TTI Chicago; UC Berkeley, EECS & ICSI; UC Berkeley, EECS & ICSI

  • Venue:
  • ECCV'10 Proceedings of the 11th European Conference on Computer Vision: Part I
  • Year:
  • 2010

Abstract

In this paper we investigate the problem of exploiting multiple sources of information for object recognition when additional modalities, not present in the labeled training set, become available at inference time. This scenario is common in robotics sensing applications and contrasts with the assumption made by existing approaches, which require at least some labeled examples for each modality. To leverage the previously unseen features, we use the unlabeled data to learn a mapping from the existing modalities to the new ones. This lets us predict the missing data for the labeled examples and exploit all modalities through multiple kernel learning. We demonstrate the effectiveness of our approach on several multi-modal tasks, including object recognition from multi-resolution imagery, from grayscale and color images, and from images and text. Our approach outperforms multiple kernel learning on the original modalities alone, as well as nearest-neighbor and bootstrapping schemes.
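
A minimal Python sketch of the pipeline described above is given below. It assumes kernel ridge regression as the modality-mapping function and a uniform kernel sum as a simple stand-in for a full multiple kernel learning solver; all array names, dimensions, and parameter values are illustrative assumptions, not the authors' implementation.

    # Sketch: map existing modality -> new modality on unlabeled pairs,
    # predict the missing modality for labeled data, then combine kernels.
    # Data, kernel choices, and hyperparameters here are placeholders.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.RandomState(0)

    # Unlabeled data observed in both modalities.
    X_old_unlab = rng.rand(200, 64)   # existing modality (e.g., grayscale features)
    X_new_unlab = rng.rand(200, 32)   # new modality, absent from the labeled set

    # Labeled data available only in the existing modality.
    X_old_lab = rng.rand(50, 64)
    y_lab = rng.randint(0, 2, size=50)

    # 1. Learn a mapping from the existing modality to the new one on unlabeled pairs.
    mapper = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
    mapper.fit(X_old_unlab, X_new_unlab)

    # 2. Predict the missing modality for the labeled examples.
    X_new_lab_pred = mapper.predict(X_old_lab)

    # 3. Combine kernels over both modalities; a uniform sum replaces learned MKL weights.
    K_old = rbf_kernel(X_old_lab, X_old_lab, gamma=0.1)
    K_new = rbf_kernel(X_new_lab_pred, X_new_lab_pred, gamma=0.1)
    K_combined = 0.5 * K_old + 0.5 * K_new

    # Train a classifier on the combined precomputed kernel.
    clf = SVC(kernel="precomputed")
    clf.fit(K_combined, y_lab)

At test time both modalities are actually observed, so the test kernels would be computed from the measured features against the training set (whose new-modality features were predicted) before calling clf.predict on the combined test kernel.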