Many-to-many matching of scale-space feature hierarchies using metric embedding

  • Authors:
  • M. Fatih Demirci, Ali Shokoufandeh, Yakov Keselman, Sven Dickinson, Lars Bretzner

  • Affiliations:
  • Department of Computer Science, Drexel University, Philadelphia, PA; Department of Computer Science, Drexel University, Philadelphia, PA; School of Computer Science, Telecommunications and Information Systems, DePaul University, Chicago, IL; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Computational Vision and Active Perception Laboratory, Department of Numerical Analysis and Computer Science, KTH, Stockholm, Sweden

  • Venue:
  • Scale-Space'03: Proceedings of the 4th International Conference on Scale Space Methods in Computer Vision
  • Year:
  • 2003

Abstract

Scale-space feature hierarchies can be conveniently represented as graphs, in which edges are directed from coarser features to finer features. Consequently, feature matching (or view-based object matching) can be formulated as graph matching. Most approaches to graph matching assume a one-to-one correspondence between nodes (features) which, due to noise, scale discretization, and feature extraction errors, is overly restrictive. In general, a subset of features in one hierarchy, representing an abstraction of those features, may best match a subset of features in another. We present a framework for the many-to-many matching of multi-scale feature hierarchies, in which features and their relations are captured in a vertex-labeled, edge-weighted graph. The matching algorithm is based on a metric-tree representation of labeled graphs and their low-distortion metric embedding into normed vector spaces. This two-step transformation reduces the many-to-many graph matching problem to that of computing a distribution-based distance measure between two such embeddings. To compute the distance between two sets of embedded, weighted vectors, we use the Earth Mover's Distance under transformation. To demonstrate the approach, we target the domain of multi-scale, qualitative shape description, in which an image is decomposed into a set of blobs and ridges with automatic scale selection. We conduct an extensive set of view-based matching trials, and compare the results favorably to matching under a one-to-one assumption.
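The distribution-based distance at the heart of the matching step can be sketched as a transportation linear program. The function below is an illustrative reimplementation, not the authors' code: it assumes both weight vectors are normalized to sum to 1, uses the Euclidean norm as the ground distance between embedded vectors, and omits the "under transformation" part of the paper's EMD, which additionally optimizes over a transformation applied to one embedding.

```python
import numpy as np
from scipy.optimize import linprog

def emd(pts_a, w_a, pts_b, w_b):
    """Earth Mover's Distance between two weighted point sets whose
    weights each sum to 1, with Euclidean (L2) ground distance.
    Illustrative sketch only; the paper's EMD additionally optimizes
    over a transformation of one embedding."""
    n, m = len(pts_a), len(pts_b)
    # Pairwise ground distances between the embedded vectors.
    cost = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    # Flow variables f_ij, flattened row-major; equality constraints
    # force each point to send/receive exactly its weight in mass.
    A_eq, b_eq = [], []
    for i in range(n):                      # mass leaving source point i
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
        b_eq.append(w_a[i])
    for j in range(m):                      # mass arriving at sink point j
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
        b_eq.append(w_b[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun
```

Because the many-to-many graph matching has been reduced to comparing two embeddings, this single scalar distance stands in for an explicit node-to-node correspondence search.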