Detecting Informative Frames from Wireless Capsule Endoscopic Video Using Color and Texture Features

  • Authors:
  • Md. Khayrul Bashar, Kensaku Mori, Yasuhito Suenaga, Takayuki Kitasaka, Yoshito Mekada

  • Affiliations:
  • Md. Khayrul Bashar: Graduate School of Engineering, Nagoya University, Japan; MEXT Innovative Research Center for Preventive Medical Engineering, Nagoya University, Japan
  • Kensaku Mori, Yasuhito Suenaga, Takayuki Kitasaka: Graduate School of Information Science, Nagoya University, Japan; MEXT Innovative Research Center for Preventive Medical Engineering, Nagoya University, Japan
  • Yoshito Mekada: MEXT Innovative Research Center for Preventive Medical Engineering, Nagoya University, Japan; School of Life System Science and Technology, Chukyo University, Toyota, Japan

  • Venue:
  • MICCAI '08 Proceedings of the 11th International Conference on Medical Image Computing and Computer-Assisted Intervention, Part II
  • Year:
  • 2008

Abstract

Despite this emerging technology, wireless capsule endoscopy requires long diagnosis times because of the many uninformative frames produced by turbid fluids, food residue, and faecal material. These materials and fluids exhibit a wide range of colors and/or bubble-like texture patterns. We therefore propose a cascaded method for informative frame detection that uses a local color histogram to isolate highly contaminated non-bubbled (HCN) frames and a Gauss-Laguerre transform (GLT) based multiresolution norm-1 energy feature to isolate significantly bubbled (SB) frames. A supervised support vector machine classifies HCN frames (Stage 1), while automatic bubble segmentation followed by a threshold operation (Stage 2) detects informative frames by isolating SB frames. In an experiment with 20,558 frames from three videos, the proposed method achieves 97.48% average detection accuracy, compared with 75.52% for a Gabor-based texture feature and 63.15% for a discrete-wavelet-based texture feature, each combined with the same color feature.
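The Stage-1 feature described above (a local color histogram computed over frame patches, later fed to an SVM) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the 4x4 patch grid, the 8 bins per channel, and the per-patch normalization are assumptions chosen for the example, not values taken from the paper.

```python
import numpy as np

def local_color_histograms(frame, grid=(4, 4), bins=8):
    """Concatenate per-patch, per-channel color histograms.

    A sketch of a Stage-1 local color histogram feature; the grid
    size and bin count here are illustrative assumptions.
    """
    h, w, c = frame.shape
    ph, pw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = frame[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            for ch in range(c):
                # Histogram of one color channel in one patch,
                # normalized to sum to 1 so patch size cancels out.
                hist, _ = np.histogram(patch[..., ch],
                                       bins=bins, range=(0, 256))
                feats.append(hist / hist.sum())
    return np.concatenate(feats)

# Example on a synthetic 64x64 RGB frame: the feature vector has
# 4 * 4 patches * 3 channels * 8 bins = 384 entries.
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
vec = local_color_histograms(frame)
print(vec.shape)  # -> (384,)
```

In the cascade, vectors like `vec` would then be used to train a supervised SVM classifier (e.g. with a standard library such as scikit-learn) that separates HCN frames from the rest; only frames passing Stage 1 proceed to the GLT-based bubble analysis of Stage 2.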