Contour Tracking Using Modified Canny Edge Maps with Level-of-Detail

  • Authors:
  • Jihun Park

  • Affiliations:
  • Department of Computer Engineering, Hongik University, Seoul, Korea

  • Venue:
  • CAIP'05: Proceedings of the 11th International Conference on Computer Analysis of Images and Patterns
  • Year:
  • 2005

Abstract

We propose a simple but powerful method for tracking a nonparameterized subject contour in a single video stream with a moving camera and changing background, for the purpose of removing the video background to capture motion in a scene. Our method is based on level-of-detail (LOD) modified Canny edge maps and graph-based routing operations on the LOD maps. We generate modified Canny edge maps by computing intensity derivatives in the normal direction of the previous frame's contour, which removes irrelevant edges. LOD Canny edge maps are generated by varying the scale parameters for a given image. The simplest (strongest) Canny edge map, Scanny, has the fewest edge pixels, while the most detailed Canny edge map, WcannyN, has the most. To reduce side effects from irrelevant edges, we start our basic tracking with Scanny edges, generated from large intensity gradients of the input image. Starting from Scanny edges, we incrementally add edge pixels from progressively more detailed (weaker) Canny edge maps, called Wcanny maps, along the LOD hierarchy. LOD Canny edge pixels become nodes in the routing graph, and the LOD values of adjacent edge pixels determine the routing costs between nodes. We find the best route along Canny edge pixels, favoring stronger ones. Where Scanny edges are disconnected, routing between the disconnected parts is planned using Wcanny edges in the LOD hierarchy. Our tracking is accurate because it reduces the influence of irrelevant edges by selecting stronger edge pixels, relying on the current frame's edge pixels as much as possible, in contrast to approaches that always blend in the previous contour. Our experimental results show that this tracking approach is robust enough to handle a complex-textured scene.
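The LOD hierarchy and routing described above can be sketched in a minimal form. The sketch below substitutes simple gradient-magnitude thresholding for the paper's per-scale Canny operator, and plain Dijkstra search for its routing step; all function names, thresholds, and the uniform per-pixel cost are illustrative assumptions, not details from the paper:

```python
import heapq
import numpy as np

def lod_edge_maps(gray, thresholds):
    """Build a level-of-detail (LOD) edge hierarchy from gradient magnitude.

    Level 1 holds only the strongest edges (the Scanny analogue); higher
    levels add progressively weaker edges (the Wcanny analogues). Returns
    an integer map: 0 = no edge, k = first LOD level containing the pixel.
    (Simplified stand-in for the paper's multi-scale Canny maps.)
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    lod = np.zeros(gray.shape, dtype=int)
    # Thresholds sorted descending: strongest edges get the lowest level.
    for level, t in enumerate(sorted(thresholds, reverse=True), start=1):
        lod[(mag >= t) & (lod == 0)] = level
    return lod

def route_along_edges(lod, start, goal):
    """Dijkstra routing over edge pixels (8-connected).

    Stepping onto a pixel costs its LOD value, so strong (level-1,
    Scanny-like) edges are preferred, and gaps between strong edges are
    bridged through weaker Wcanny-level pixels, as in the LOD routing idea.
    """
    h, w = lod.shape
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist.get((y, x), np.inf):
            continue
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                    continue
                if lod[ny, nx] == 0:   # only edge pixels are graph nodes
                    continue
                nd = d + lod[ny, nx]   # weaker edges cost more
                if nd < dist.get((ny, nx), np.inf):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    path, node = [], goal   # walk back from goal to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

For example, on a synthetic image with a vertical step edge, the strong edge pixels land in level 1 and the route follows them; in the full method the cost would also incorporate the contour-normal derivative filtering described in the abstract.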