A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing & STOC'94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT'95), March 13–15, 1995
Distinctive Image Features from Scale-Invariant Keypoints
International Journal of Computer Vision
Deformation Models for Image Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
Automatic medical image annotation in ImageCLEF 2007: Overview, results, and discussion
Pattern Recognition Letters
Discriminative cue integration for medical image annotation
Pattern Recognition Letters
Deformations, patches, and discriminative models for automatic annotation of medical radiographs
Pattern Recognition Letters
Active Scheduling of Organ Detection and Segmentation in Whole-Body Medical Images
MICCAI '08 Proceedings of the 11th international conference on Medical Image Computing and Computer-Assisted Intervention - Part I
Automatic image hanging protocol for chest radiographs in PACS
IEEE Transactions on Information Technology in Biomedicine
MCBR-CDS'09 Proceedings of the First MICCAI international conference on Medical Content-Based Retrieval for Clinical Decision Support
In this paper, we propose a learning-based algorithm for automatic medical image annotation based on sparse aggregation of learned local appearance cues, achieving high accuracy and robustness against severe disease, imaging artifacts, occlusion, and missing data. The algorithm starts with a set of landmark detectors that collect local appearance cues throughout the image; these cues are then verified by a group of learned sparse spatial configuration models. In most cases, a decision can already be made at this stage simply by aggregating the verified detections. For the remaining cases, an additional global appearance filtering step provides complementary information for the final decision. The approach is evaluated on a large-scale chest radiograph view identification task, where it achieves a near-perfect accuracy of 99.98% in distinguishing posteroanterior/anteroposterior (PA-AP) from lateral views, compared with a recently reported large-scale result of only 98.2% [1]. Our approach also achieves the best accuracy on a three-class and a multi-class radiograph annotation task when compared with other state-of-the-art algorithms. The algorithm has been integrated into an advanced image visualization workstation, enabling content-sensitive hanging protocols and automatic invocation of a computer-aided detection algorithm for PA-AP chest images.
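The cascade described in the abstract — local landmark detection, sparse spatial verification, vote aggregation, and a global appearance fallback for ambiguous cases — can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation; the class names, the score threshold, the vote-margin rule, and the `global_filter` hook are all assumptions introduced for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    landmark: str   # anatomical landmark fired on, e.g. "clavicle" (illustrative)
    label: str      # view hypothesis this detector votes for, e.g. "PA-AP"
    score: float    # detector confidence in [0, 1]

def verify(detections, min_score=0.5):
    """Stand-in for sparse spatial configuration verification:
    keep only detections whose confidence passes a threshold (assumed rule)."""
    return [d for d in detections if d.score >= min_score]

def annotate(detections, margin=2, global_filter=None):
    """Aggregate verified landmark votes; decide directly when the vote
    margin is clear, otherwise defer to a global appearance filter."""
    verified = verify(detections)
    votes = {}
    for d in verified:
        votes[d.label] = votes.get(d.label, 0) + 1
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    if not ranked:
        return None
    # Most cases: the aggregated local cues already give a clear answer.
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]
    # Remaining cases: consult the complementary global appearance model.
    if global_filter is not None:
        return global_filter(verified)
    return ranked[0][0]

detections = [
    Detection("clavicle", "PA-AP", 0.90),
    Detection("heart",    "PA-AP", 0.80),
    Detection("spine",    "lateral", 0.70),
    Detection("rib",      "PA-AP", 0.85),
]
print(annotate(detections))  # → PA-AP (vote margin 3 - 1 meets the threshold)
```

The two-stage design mirrors the abstract's efficiency argument: the cheap local-cue aggregation resolves most images, so the more expensive global appearance step runs only on the ambiguous remainder.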