Feature Mapping and View Planning with Localized Surface Parameters

  • Authors:
  • Xiaobu Yuan; Siwei Lu

  • Affiliations:
  • Department of Computer Science, Memorial University of Newfoundland, St. John's, NF, A1B 3X5; E-mail: yuan@cs.mun.ca, swlu@cs.mun.ca

  • Venue:
  • Journal of Mathematical Imaging and Vision
  • Year:
  • 1997


Abstract

Object recognition is imperative in industrial automation, since it empowers robots with the perceptual capability to understand the three-dimensional (3-D) environment by means of sensory devices. Treating object recognition as a mapping between object models and a partial description of an object, this paper introduces a three-phase filtering method that eliminates candidate models as soon as their differences from the object show up. Throughout the process, a view-insensitive modeling method, namely localized surface parameters, is employed.

Surface matching is carried out in the first phase to match models with the object by comparing their localized surface descriptions. A model remains a candidate only if every object surface matches locally with at least one of the model surfaces. Since the topological relationship between surfaces specifies the global shape of the object and of the models, it is checked in the second phase with local coordinate systems to make sure that a candidate model has the same structure as the object.

Because the information about an object obtained from a single viewing direction cannot be complete, the first two conditions can only determine whether a candidate contains the same portion as the object; the selected model may still be larger than the object. To avoid this part-to-whole confusion, a back projection from candidate models is performed in the third phase to ensure that no unmatched model features become visible when a model is virtually brought into the object's orientation.

If multiple models are still selected because of insufficient information, disambiguating features and their visible directions are derived to verify the expected feature. In addition to providing view-independent object recognition even under ambiguous situations, the filtering method has a low computational complexity, upper bounded by O(m^2 n^2) and lower bounded by O(mn), where m and n are the numbers of model and object features. The three-phase object recognition has been exercised on real and synthesized range images, and experimental results are given in the paper.
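
As a rough illustration of the three-phase filter outlined in the abstract, the following Python sketch shows how the three tests could be chained so that a model is discarded as soon as one test fails. All names and data structures here (Surface, Model, matches_locally, same_topology, back_projection_consistent) are illustrative assumptions, not the authors' implementation; the phase tests are placeholders standing in for the localized-surface-parameter comparisons described in the paper.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Surface:
    """A surface patch described by localized (view-insensitive) parameters."""
    params: Tuple[float, ...]        # assumed local parameter vector
    frame: Tuple[float, ...] = ()    # local coordinate system attached to the surface


@dataclass
class Model:
    name: str
    surfaces: List[Surface] = field(default_factory=list)


def matches_locally(obj_surface: Surface, model_surface: Surface,
                    tol: float = 1e-2) -> bool:
    """Phase 1 test: compare localized surface descriptions (placeholder metric)."""
    return all(abs(a - b) <= tol
               for a, b in zip(obj_surface.params, model_surface.params))


def same_topology(obj_surfaces: List[Surface], model: Model) -> bool:
    """Phase 2 test: check that matched surfaces keep the same topological
    relationship, expressed via their local coordinate systems (placeholder)."""
    return True


def back_projection_consistent(obj_surfaces: List[Surface], model: Model) -> bool:
    """Phase 3 test: virtually bring the model into the object's orientation and
    verify that no unmatched model feature would become visible (placeholder)."""
    return True


def three_phase_filter(obj_surfaces: List[Surface],
                       models: List[Model]) -> List[Model]:
    """Return the models that survive all three filtering phases."""
    candidates = []
    for model in models:
        # Phase 1: every object surface must match at least one model surface.
        if not all(any(matches_locally(s, ms) for ms in model.surfaces)
                   for s in obj_surfaces):
            continue
        # Phase 2: the candidate must share the object's surface topology.
        if not same_topology(obj_surfaces, model):
            continue
        # Phase 3: reject models whose extra features would become visible
        # from the object's viewing direction (part-to-whole confusion).
        if not back_projection_consistent(obj_surfaces, model):
            continue
        candidates.append(model)
    return candidates
```

If more than one candidate survives all three phases, the paper's disambiguation step would then select distinguishing features and the viewing directions from which they are visible, so that a further observation can verify the expected feature.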