Empirical Performance Evaluation Methodology and Its Application to Page Segmentation Algorithms
IEEE Transactions on Pattern Analysis and Machine Intelligence
We present a methodology for the quantitative performance evaluation of detection algorithms in computer vision. A common approach is to generate a variety of input images by varying the image parameters and then to evaluate the algorithm's performance as its own parameters vary. For each parameter setting, an operating curve relating the probability of misdetection to the probability of false alarm is generated. Such an analysis, however, does not integrate the performance reflected across the numerous operating curves. We outline a methodology for summarizing many operating curves into a few performance curves. The methodology, adapted from the human psychophysics literature, is general and applies to any detection algorithm. The central concept is to measure the effect of each variable in terms of the equivalent effect of a critical signal variable, which in turn facilitates determining the breakdown point of the algorithm. We demonstrate the methodology by comparing the performance of two line-detection algorithms.
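To make the notion of an operating curve concrete, the following is a minimal sketch, not the paper's implementation: for each setting of an image parameter (here, a hypothetical noise level), sweeping a detector threshold over a toy scalar detection task yields one operating curve of misdetection probability against false-alarm probability. All names and the toy detector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def operating_curve(noise_sigma, thresholds, n_trials=2000):
    """Estimate one operating curve for a fixed image-parameter setting.

    A trivial scalar detector stands in for a real line detector:
    a 'detection' is declared when the response exceeds the threshold.
    Misdetection: signal present but response below threshold.
    False alarm: signal absent but response above threshold.
    """
    signal = 1.0  # assumed signal strength (illustrative)
    present = signal + rng.normal(0.0, noise_sigma, n_trials)
    absent = rng.normal(0.0, noise_sigma, n_trials)
    curve = []
    for t in thresholds:
        p_miss = np.mean(present < t)   # P(misdetection) at threshold t
        p_false = np.mean(absent >= t)  # P(false alarm) at threshold t
        curve.append((t, p_miss, p_false))
    return curve

# One operating curve per image-parameter setting (noise level);
# the paper's point is that many such curves are hard to compare directly.
thresholds = np.linspace(-1.0, 2.0, 13)
curves = {sigma: operating_curve(sigma, thresholds)
          for sigma in (0.3, 0.6, 1.0)}
```

As the threshold rises, misdetections become more likely and false alarms less likely; the family of curves indexed by noise level is exactly the kind of ensemble the proposed methodology would collapse into a few summary performance curves.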