Robust multi-view boosting with priors

  • Authors:
  • Amir Saffari, Christian Leistner, Martin Godec, Horst Bischof

  • Affiliations:
  • Institute for Computer Graphics and Vision, Graz University of Technology, Graz, Austria (all authors)

  • Venue:
  • ECCV'10: Proceedings of the 11th European Conference on Computer Vision, Part III
  • Year:
  • 2010

Abstract

Many learning tasks in computer vision can be described by multiple views or multiple feature sets. These views can be exploited in order to learn from unlabeled data, a.k.a. "multi-view learning". In such methods, the classifiers usually label a subset of the unlabeled data for each other iteratively and ignore the rest. In this work, we propose a new multi-view boosting algorithm that, unlike other approaches, specifically encodes the uncertainties over the unlabeled samples in terms of given priors. Instead of ignoring the unlabeled samples during the training phase of each view, we use the different views to provide an aggregated prior, which is then used as a regularization term inside a semi-supervised boosting method. Since we target multi-class applications, we first introduce a multi-class boosting algorithm based on maximizing the multi-class classification margin. Then, we propose our multi-class semi-supervised boosting algorithm, which is able to use priors as a regularization component over the unlabeled data. Since the priors may contain a significant amount of noise, we introduce a new loss function for the unlabeled regularization term that is robust to noisy priors. Experimentally, we show that the multi-class boosting algorithm achieves state-of-the-art results on machine learning benchmarks. We also show that the newly proposed loss function is more robust than other alternatives. Finally, we demonstrate the advantages of our multi-view boosting approach for object category recognition and visual object tracking, compared to other multi-view learning methods.
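The abstract describes two ingredients: an aggregated prior built from the per-view predictions on unlabeled data, and a bounded loss so that noisy priors cannot dominate the objective. The sketch below illustrates these two ideas generically; the aggregation rule, the sigmoid-style loss, and the parameter `eta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def aggregate_prior(view_probs):
    """Combine per-view class-probability estimates into one prior.

    view_probs: list of (n_samples, n_classes) arrays, one per view.
    Here we simply average and renormalize; the paper's actual
    combination rule may differ.
    """
    prior = np.mean(view_probs, axis=0)
    return prior / prior.sum(axis=1, keepdims=True)

def robust_prior_loss(margin, eta=2.0):
    """Bounded (saturating) surrogate loss on the unlabeled margin.

    Unlike the exponential loss used in classical boosting, this
    sigmoid-shaped loss flattens out for large negative margins, so
    unlabeled samples whose prior is badly wrong contribute at most a
    constant to the objective. Illustrative only.
    """
    return 1.0 / (1.0 + np.exp(eta * margin))

# Example: two views disagree mildly on one sample.
views = [np.array([[0.6, 0.4]]), np.array([[0.8, 0.2]])]
prior = aggregate_prior(views)          # averaged, rows sum to 1
penalty = robust_prior_loss(-5.0)       # saturates instead of exploding
```

The key contrast is with the exponential loss `exp(-margin)`, which grows without bound as the margin becomes negative: under a noisy prior, a single mislabeled unlabeled sample could then dominate the regularization term, whereas a saturating loss caps its influence.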