In computational vision, tracking an object across a sequence of frames is one of the most important problems. Among the most widely used approaches is the model-target one, which matches a model object against a candidate target region in each frame of the sequence. For this task, the Hausdorff distance is attractive because it is simple to implement and can match two point sets of different cardinality. Viewing images as non-extensive systems, we may apply the Tsallis entropy (which depends on a single parameter, called the entropic parameter) to segment the frames in order to locate the target object. In this work we propose a methodology that combines the Hausdorff distance, a Bayesian network, HSV histograms, and Tsallis non-extensive entropy for object recognition and tracking in a frame sequence. With this proposal, we reduce the noise sensitivity of the Hausdorff distance and the strong parameter dependence of the tracking task. We evaluate our method on a sequence of 300 frames containing one object over a moving background.
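The matching step described above relies on the Hausdorff distance between two point sets of possibly different cardinality. A minimal sketch of the classical (symmetric) Hausdorff distance is shown below; this is an illustration of the plain metric, not the authors' implementation, and it omits the noise-reducing modifications the work proposes:

```python
import math

def directed_hausdorff(A, B):
    # Directed distance h(A, B) = max over a in A of min over b in B of d(a, b):
    # the worst-case distance from a point of A to its nearest point in B.
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    # Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A)).
    # A and B may have different cardinality, as noted in the abstract.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Hypothetical example: a small model set and a candidate target set.
model = [(0.0, 0.0), (1.0, 0.0)]
target = [(0.0, 0.0), (0.0, 2.0)]
print(hausdorff(model, target))
```

Because a single outlying point dominates the outer max, the plain distance is highly noise-sensitive, which is precisely the weakness the proposed methodology addresses.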
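The segmentation step uses the Tsallis entropy with a single entropic parameter q. A sketch of non-extensive entropic thresholding over a gray-level histogram is given below, assuming the usual pseudo-additive combination S_q(A+B) = S_q(A) + S_q(B) + (1-q)·S_q(A)·S_q(B) of the background and foreground classes; the function name and the choice of q are illustrative, not taken from the paper:

```python
def tsallis_threshold(hist, q=0.8):
    # hist: gray-level histogram (list of counts); q: entropic parameter.
    total = sum(hist)
    p = [h / total for h in hist]

    def tsallis(probs):
        # Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1), with the class
        # probabilities renormalized over the class's own mass.
        mass = sum(probs)
        if mass == 0:
            return 0.0
        return (1.0 - sum((pi / mass) ** q for pi in probs)) / (q - 1.0)

    best_t, best_s = 0, float("-inf")
    for t in range(1, len(p)):
        sa = tsallis(p[:t])                  # background class [0, t)
        sb = tsallis(p[t:])                  # foreground class [t, end)
        s = sa + sb + (1.0 - q) * sa * sb    # pseudo-additive combination
        if s > best_s:
            best_s, best_t = s, t
    return best_t
```

For a strongly bimodal histogram the maximizing threshold falls in the valley between the two modes, which is what allows the segmented frames to isolate the target object.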