Bits-back coding, first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993, provides an interesting link between Bayesian learning and information-theoretic minimum description length (MDL) approaches. It allows the cost function of the variational Bayesian method known as ensemble learning to be interpreted as a code length, in addition to the Bayesian view of it as the misfit of the posterior approximation and a lower bound on the model evidence. Combining the two viewpoints yields useful insights into the learning process and into the roles of different parts of the model. In this paper, variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation offers new perspectives on several parts of the problem, such as model comparison and pruning, and helps explain many phenomena that occur during learning.
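As a minimal sketch of the connection the abstract describes (the notation here is generic and not necessarily the paper's own): with observed data $\mathbf{X}$, unknowns $\boldsymbol{\theta}$, and a posterior approximation $q(\boldsymbol{\theta})$, the ensemble learning cost function can be written as

$$
\mathcal{C}(q) \;=\; \mathrm{E}_{q(\boldsymbol{\theta})}\!\left[\log \frac{q(\boldsymbol{\theta})}{p(\mathbf{X}, \boldsymbol{\theta})}\right] \;=\; D_{\mathrm{KL}}\!\left(q(\boldsymbol{\theta}) \,\big\|\, p(\boldsymbol{\theta} \mid \mathbf{X})\right) \;-\; \log p(\mathbf{X}).
$$

The middle expression is what is optimized in practice. The right-hand side gives the Bayesian reading: since the Kullback-Leibler divergence is nonnegative, $-\mathcal{C}(q)$ is a lower bound on the log model evidence $\log p(\mathbf{X})$, tight when $q$ matches the true posterior. The bits-back reading of the same quantity is that $\mathcal{C}(q)$ (in nats) is the expected length of a two-part code that first describes the model parameters drawn from $q$ and then the data given those parameters, with the bits used to pick the random draw refunded to the receiver.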