Bayesian learning in undirected graphical models---computing posterior distributions over parameters and predictive quantities---is exceptionally difficult. We conjecture that for general undirected models, there are no tractable MCMC (Markov chain Monte Carlo) schemes giving the correct equilibrium distribution over parameters. While this intractability, due to the partition function, is familiar to those performing parameter optimisation, Bayesian learning of posterior distributions over undirected model parameters has been largely unexplored and poses novel challenges. We propose several approximate MCMC schemes and test them on fully observed binary models (Boltzmann machines), using a small coronary heart disease data set and larger artificial systems. While approximations must perform well on the model, their interaction with the sampling scheme is also important. Samplers based on variational mean-field approximations generally performed poorly; more advanced methods using loopy propagation, brief sampling and stochastic dynamics led to acceptable parameter posteriors. Finally, we demonstrate these techniques on a Markov random field with hidden variables.
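To make the source of the difficulty concrete, the following is a minimal sketch (not the paper's method) of exact-posterior MCMC for a toy fully observed Boltzmann machine. All names here are illustrative. The model is kept small enough that the partition function Z can be enumerated over all 2^n binary states, so a Metropolis random walk over the weights targets the correct parameter posterior; for realistically sized models this enumeration is exactly the intractable step that motivates the approximate schemes described above.

```python
import itertools
import math
import random

def log_unnorm(x, W, b):
    # Unnormalised log-probability of binary state x under a Boltzmann machine:
    # sum_{i<j} W[i][j] x_i x_j + sum_i b[i] x_i
    n = len(b)
    s = sum(b[i] * x[i] for i in range(n))
    s += sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
    return s

def log_partition(W, b):
    # Exact log Z by brute-force enumeration of all 2^n states.
    # This is the quantity that becomes intractable for large models.
    n = len(b)
    return math.log(sum(math.exp(log_unnorm(x, W, b))
                        for x in itertools.product([0, 1], repeat=n)))

def log_posterior(W, b, data, sigma=1.0):
    # Log joint of parameters and data: exact log-likelihood plus an
    # (assumed) independent Gaussian prior on each weight and bias.
    logZ = log_partition(W, b)
    ll = sum(log_unnorm(x, W, b) - logZ for x in data)
    prior = -sum(v * v for row in W for v in row) / (2 * sigma ** 2)
    prior += -sum(v * v for v in b) / (2 * sigma ** 2)
    return ll + prior

def mh_step(W, b, data, step=0.1, rng=random):
    # One Metropolis update: perturb a single randomly chosen parameter
    # and accept with the usual acceptance probability.
    n = len(b)
    W2 = [row[:] for row in W]
    b2 = b[:]
    i, j = rng.randrange(n), rng.randrange(n)
    if i == j:
        b2[i] += rng.gauss(0.0, step)
    else:
        i, j = min(i, j), max(i, j)
        W2[i][j] += rng.gauss(0.0, step)
    if math.log(rng.random()) < log_posterior(W2, b2, data) - log_posterior(W, b, data):
        return W2, b2
    return W, b
```

Because each Metropolis step needs two evaluations of log Z, the cost per step is O(2^n); the approximate schemes in the text replace this exact evaluation with variational, loopy-propagation, or brief-sampling estimates.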