Random ensemble metrics for object recognition

  • Authors:
  • Tatsuo Kozakaya, Satoshi Ito, Susumu Kubota

  • Affiliation:
  • Corporate Research and Development Center, Toshiba Corporation, 1, Komukai-Toshiba-cho, Saiwai-ku, Kawasaki, 212-8582, Japan (all authors)

  • Venue:
  • ICCV '11: Proceedings of the 2011 International Conference on Computer Vision
  • Year:
  • 2011


Abstract

This paper presents a novel and generic approach to metric learning, random ensemble metrics (REMetric). To improve generalization performance, we introduce the concept of ensemble learning into the metric learning scheme. Unlike previous methods, our method does not optimize a global objective function over the whole training set. Instead, it learns multiple discriminative projection vectors from linear support vector machines (SVMs) trained on randomly subsampled training data, and the final metric matrix is obtained by integrating these vectors. Because it relies on SVMs, the learned metric scales well with feature dimensionality and therefore requires no prior dimensionality reduction technique such as PCA. Moreover, our method unifies dimensionality reduction and metric learning by controlling the number of projection vectors. We demonstrate through experiments that our method avoids overfitting even when relatively few training samples are provided. The experiments are performed on three different datasets: the Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset, the Labeled Faces in the Wild (LFW) dataset, and the Oxford 102 category flower dataset. The results show that our method achieves performance equivalent or superior to existing state-of-the-art metric learning methods.
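The abstract only sketches the training scheme, so the following Python snippet is a minimal illustration rather than the authors' implementation. It assumes each projection vector comes from a linear SVM trained to separate a randomly chosen pair of classes on a random subsample, that the vectors are unit-normalized, and that the final metric is M = WᵀW over the stacked vectors; the function names learn_remetric and remetric_distance, the subsampling ratio, and the class-pairing strategy are hypothetical details, not taken from the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def learn_remetric(X, y, n_vectors=100, subsample=0.5, seed=0):
    # Sketch of the ensemble scheme: each iteration trains a linear SVM
    # on a random subsample (here, of a randomly chosen class pair) and
    # keeps its weight vector as one discriminative projection.
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    W = []
    for _ in range(n_vectors):
        c1, c2 = rng.choice(classes, size=2, replace=False)
        idx1 = np.flatnonzero(y == c1)
        idx2 = np.flatnonzero(y == c2)
        s1 = rng.choice(idx1, size=max(1, int(subsample * len(idx1))), replace=False)
        s2 = rng.choice(idx2, size=max(1, int(subsample * len(idx2))), replace=False)
        idx = np.concatenate([s1, s2])
        svm = LinearSVC(C=1.0).fit(X[idx], y[idx] == c1)
        w = svm.coef_.ravel()
        W.append(w / np.linalg.norm(w))  # unit-norm projection vector
    # Stacking k vectors yields a k x d projection; the induced metric
    # matrix is M = W.T @ W, so k also sets the output dimensionality.
    return np.vstack(W)

def remetric_distance(W, a, b):
    # Squared distance under the learned metric: ||W(a - b)||^2.
    diff = W @ (a - b)
    return float(diff @ diff)
```

Under these assumptions, the parameter n_vectors plays the dual role the abstract describes: it fixes both the number of ensemble members and the dimensionality of the projected space, unifying dimensionality reduction and metric learning.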