Bag-of-Features Based Classification of Breast Parenchymal Tissue in the Mammogram via Jointly Selecting and Weighting Visual Words

  • Authors:
  • Jingyan Wang; Yongping Li; Ying Zhang; Honglan Xie; Chao Wang

  • Venue:
  • ICIG '11 Proceedings of the 2011 Sixth International Conference on Image and Graphics
  • Year:
  • 2011

Abstract

Automatically classifying the tissue types of regions of interest (ROIs) in medical images is an important application in computer-aided diagnosis, such as the classification of breast parenchymal tissue in mammograms. Recently, the bag-of-features method has shown its power in this field by treating each medical image as a set of local features. In this paper, we investigate using the bag-of-features strategy to classify tissue types in medical imaging applications. Two important issues are considered here: visual vocabulary learning and weighting. Although there are already plenty of algorithms to deal with them, all of them treat the two steps independently; that is, the vocabulary is learned first and the histogram is weighted afterwards. Inspired by Auto-Context, which learns the features and the classifier jointly, we develop a novel algorithm, called Joint-ViVo, that learns the vocabulary and the weights jointly. Joint-ViVo works in an iterative way, as sketched below: in each iteration, we first learn a weight for each visual word by maximizing the margin over ROI triplets, and then, based on the learned weights, we select the most discriminative visual words for the next iteration. We test our algorithm by classifying breast tissue density in mammograms. The results show that Joint-ViVo classifies tissues effectively and support the idea that the vocabulary should be learned jointly with the weights.
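
The following is a minimal sketch, in Python, of the alternating scheme the abstract describes: encode each ROI as a bag-of-features histogram, learn per-word weights from ROI triplets with a margin objective, keep the most discriminative words, and repeat. The function names, the hinge-style triplet loss, and the selection ratio are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of the Joint-ViVo loop described in the abstract.
# The triplet hinge loss and the keep_ratio heuristic are assumptions.
import numpy as np

def bof_histograms(local_features, vocabulary):
    """Encode each ROI (a set of local descriptors) as a normalized
    histogram over the current visual vocabulary (hard assignment)."""
    hists = []
    for feats in local_features:
        # distance of every descriptor to every visual word
        d = np.linalg.norm(feats[:, None, :] - vocabulary[None, :, :], axis=2)
        words = d.argmin(axis=1)
        h = np.bincount(words, minlength=len(vocabulary)).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.array(hists)

def learn_weights(hists, triplets, lr=0.1, epochs=50, margin=1.0):
    """Per-word weights via a hinge loss on ROI triplets (a, p, n): the
    weighted distance from anchor a to the same-class ROI p should be
    smaller than to the different-class ROI n by at least `margin`."""
    w = np.ones(hists.shape[1])
    for _ in range(epochs):
        for a, p, n in triplets:
            dp = (hists[a] - hists[p]) ** 2      # per-word squared diffs
            dn = (hists[a] - hists[n]) ** 2
            if w @ dp - w @ dn + margin > 0:     # margin violated
                w -= lr * (dp - dn)              # gradient step on the hinge
        w = np.clip(w, 0.0, None)                # keep weights non-negative
    return w

def joint_vivo(local_features, triplets, vocabulary, iters=3, keep_ratio=0.8):
    """Alternate weight learning and visual-word selection."""
    for _ in range(iters):
        hists = bof_histograms(local_features, vocabulary)
        w = learn_weights(hists, triplets)
        keep = np.argsort(w)[-max(1, int(keep_ratio * len(w))):]  # top words
        vocabulary = vocabulary[keep]            # shrink the vocabulary
    return vocabulary, w[keep]
```

In this reading, the weights and the vocabulary are coupled: pruning low-weight words changes the histograms, which in turn changes the weights learned in the next iteration, mirroring the joint learning the paper argues for.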