State-Based Modeling and Object Extraction From Echocardiogram Video

  • Authors:
  • A. Roy; S. Sural; J. Mukherjee; A. K. Majumdar

  • Affiliations:
  • School of Information Technology, IIT Kharagpur, Kharagpur, India

  • Venue:
  • IEEE Transactions on Information Technology in Biomedicine
  • Year:
  • 2008

Abstract

In this paper, we propose a hierarchical state-based model for representing an echocardiogram video. It captures the semantics of video segments from the dynamic characteristics of the objects present in each segment. Our objective is to provide an effective method for segmenting an echo video into view, state, and substate levels. This is motivated by the need to build efficient indexing tools that support better content management. The modeling is done using four different views, namely, short axis, long axis, apical four chamber, and apical two chamber. For view classification, an artificial neural network is trained with the histogram of a region of interest of each video frame. Object states are detected with the help of synthetic M-mode images. In contrast to the traditional single M-mode, we present a novel approach named sweep M-mode for state detection. We also introduce radial M-mode for substate identification from color flow Doppler 2-D imaging. The video model described here represents the semantics of video segments using first-order predicates, and suitable operators have been defined for querying the segments. We have carried out experiments on 20 echo videos and compared the results with manual annotations by two experts. View classification accuracy is 97.19%. The misclassification error of the state detection stage is less than 13%, which is within an acceptable range since only frames at the state boundaries are misclassified.
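
The two main processing steps named in the abstract, histogram-of-ROI view classification and synthetic M-mode construction, can be illustrated with a minimal sketch. The code below is not the authors' implementation; the ROI bounds, bin count, network size, scan-line endpoints, and the helper names `roi_histogram`, `train_view_classifier`, and `synthetic_m_mode` are illustrative assumptions.

```python
# Sketch 1: view classification from an ROI intensity histogram fed to a
# small neural network (assumed parameters, not the paper's values).
import numpy as np
from sklearn.neural_network import MLPClassifier

VIEWS = ["short_axis", "long_axis", "apical_4ch", "apical_2ch"]

def roi_histogram(frame, roi=(slice(50, 350), slice(100, 500)), bins=64):
    """Normalized grayscale histogram of a fixed region of interest."""
    patch = frame[roi]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def train_view_classifier(frames, labels):
    """frames: list of 2-D uint8 arrays; labels: indices into VIEWS."""
    X = np.stack([roi_histogram(f) for f in frames])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(X, labels)
    return clf

# Sketch 2: a synthetic M-mode image built by stacking, for each frame, the
# intensity profile along one scan line; columns index time, so cardiac
# state changes appear as changes in the temporal texture.
def synthetic_m_mode(frames, p0, p1, samples=128):
    ys = np.linspace(p0[0], p1[0], samples).astype(int)
    xs = np.linspace(p0[1], p1[1], samples).astype(int)
    return np.stack([f[ys, xs] for f in frames], axis=1)
```

As described in the abstract, a sweep M-mode would repeat the second step for a family of scan lines moved across the chamber rather than a single fixed line, and the radial M-mode variant samples along radial lines in the color flow Doppler frames for substate identification.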