Image Modeling with Position-Encoding Dynamic Trees

  • Authors:
  • Amos J. Storkey; Christopher K. I. Williams

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 2003

Abstract

This paper describes the Position-Encoding Dynamic Tree (PEDT), a probabilistic model for images that extends the dynamic tree by letting the positions of objects play a part in the model. This increases the model's flexibility relative to the dynamic tree and allows object positions to be located and manipulated. The paper motivates and defines the model using the belief network formalism. A structured variational approach to inference and learning in the PEDT is developed, and the resulting variational updates are derived, together with implementation considerations that keep the computational cost linear in the number of nodes of the belief network. The PEDT is demonstrated and compared with the dynamic tree and the fixed tree, and the structured variational learning method is compared with mean field approaches.
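
The flavor of position-dependent tree structure can be conveyed with a minimal sketch: each child node chooses a parent in the layer above with probability that depends on how close the two nodes are. This is only an illustrative toy, not the PEDT formulation in the paper (which treats positions as latent variables in a belief network and infers structure variationally); the `parent_selection_probs` helper, the Gaussian-style affinity, and the 1-D layer layout are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def parent_selection_probs(child_pos, parent_pos, scale=0.2):
    """Toy parent-choice distribution for one child node.

    The child picks a parent in the layer above with probability that
    decays with the squared distance between their positions, i.e. a
    softmax over negative scaled squared distances.
    """
    sq_dist = np.sum((parent_pos - child_pos) ** 2, axis=1)
    logits = -sq_dist / (2.0 * scale ** 2)
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy layers: 4 parent nodes and 8 child nodes along a 1-D "image" axis.
parent_pos = np.linspace(0.0, 1.0, 4)[:, None]   # shape (4, 1)
child_pos = np.linspace(0.0, 1.0, 8)[:, None]    # shape (8, 1)

# Sample a tree structure: each child independently draws a parent index,
# so nearby children tend to attach to the same nearby parent.
structure = np.array([
    rng.choice(len(parent_pos), p=parent_selection_probs(c, parent_pos))
    for c in child_pos
])
print(structure)   # e.g. children cluster under their nearest parents
```

In the toy sketch the positions are fixed and the structure is simply sampled; the point of the PEDT, by contrast, is that positions and structure are both uncertain and are inferred jointly from image data.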