Co-transduction for shape retrieval

  • Authors:
  • Xiang Bai, Bo Wang, Xinggang Wang, Wenyu Liu, Zhuowen Tu

  • Affiliations:
  • Department of Electronics and Information Engineering, Huazhong University of Science and Technology, China (X. Bai, B. Wang, X. Wang, W. Liu); Lab of Neuro Imaging, University of California, Los Angeles (Z. Tu)

  • Venue:
  • ECCV'10: Proceedings of the 11th European Conference on Computer Vision, Part III
  • Year:
  • 2010

Abstract

In this paper, we propose a new shape/object retrieval algorithm, co-transduction. The performance of a retrieval system is critically determined by the accuracy of the adopted similarity measures (distances or metrics). Different types of measures may focus on different aspects of the objects: for example, measures computed from contours and from skeletons are often complementary to each other. Our goal is to develop an algorithm that fuses different similarity measures for robust shape retrieval within a semi-supervised learning framework. We name our method co-transduction; it is inspired by the co-training algorithm [1]. Given two similarity measures and a query shape, the algorithm iteratively retrieves the most similar shapes using one measure and assigns them to a pool for the other measure to re-rank, and vice versa. Using co-transduction, we achieved 97.72% on the MPEG-7 dataset [2], a significant improvement over the state-of-the-art performances (91% in [3], 93.4% in [4]). Our algorithm is general: it works directly on any given similarity measures or metrics, is not limited to object shape retrieval, and can be applied to other ranking and retrieval tasks.
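
To make the iterative retrieve-and-re-rank scheme concrete, below is a minimal, hypothetical Python sketch. It is not the authors' implementation: the paper's graph-transduction re-ranking step is replaced here with a simple nearest-neighbor pooling rule, and the names `sim_a`, `sim_b`, `pool_size`, and `n_iter` are illustrative assumptions.

```python
import numpy as np

def co_transduction_sketch(sim_a, sim_b, query_idx, pool_size=5, n_iter=3):
    """Illustrative co-transduction-style fusion of two similarity measures.

    sim_a, sim_b : (n, n) similarity matrices (higher = more similar).
    query_idx    : index of the query shape in the database.
    Returns database indices ranked by the fused score (query excluded).
    """
    # Each measure is re-ranked against a pool populated by the *other*
    # measure's retrievals; both pools start with the query itself.
    pool_for_a = {query_idx}
    pool_for_b = {query_idx}

    for _ in range(n_iter):
        # Score every shape by its best similarity to the current pool.
        # (A stand-in for the paper's transduction-based re-ranking.)
        scores_a = sim_a[:, sorted(pool_for_a)].max(axis=1)
        scores_b = sim_b[:, sorted(pool_for_b)].max(axis=1)

        # Top retrievals of measure A feed measure B's pool, and vice versa.
        pool_for_b |= set(np.argsort(-scores_a)[:pool_size].tolist())
        pool_for_a |= set(np.argsort(-scores_b)[:pool_size].tolist())

    # Fuse the two re-ranked score lists (a simple sum here).
    fused = scores_a + scores_b
    return [i for i in np.argsort(-fused) if i != query_idx]
```

In this sketch the two measures never see each other's raw distances; they only exchange retrieved shapes through the pools, which is the co-training-style interaction the abstract describes.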