We propose a class of graphical models appropriate for structure prediction problems where the model structure is a function of the output structure. Incremental Sigmoid Belief Networks (ISBNs) avoid the need to sum over the possible model structures by using directed arcs and incrementally specifying the model structure. Exact inference in such directed models is not tractable, but we derive two efficient approximations based on mean field methods, which prove effective in artificial experiments. We then demonstrate their effectiveness on a benchmark natural language parsing task, where they achieve state-of-the-art accuracy. Moreover, the model that is a closer approximation to an ISBN achieves better parsing accuracy, suggesting that ISBNs are an appropriate abstract model of structure prediction tasks.
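To make the mean-field idea concrete, here is a minimal sketch of fully factorized mean-field inference for a two-layer sigmoid belief network. This is not the paper's ISBN approximation: it is a simplified fixed-point update that keeps only first-order feedback terms, and all names (`mean_field_sbn`, `W`, `b`, `c`) are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_sbn(v, W, b, c, n_iters=50):
    """Approximate posterior over binary hidden units h in a two-layer
    sigmoid belief network (illustrative, first-order-only update):
        p(h_j = 1)     = sigmoid(b_j)
        p(v_i = 1 | h) = sigmoid(W[i] @ h + c_i)
    Returns mean-field parameters mu_j ~ q(h_j = 1) given observed v.
    """
    mu = sigmoid(b).copy()  # initialize at the prior means
    for _ in range(n_iters):
        # expected activation of each visible unit under q
        v_hat = sigmoid(W @ mu + c)
        # feedback from the visibles: push mu toward explaining v
        mu = sigmoid(b + W.T @ (v - v_hat))
    return mu
```

A directed model like this needs no partition function over model structures, but the posterior over hidden units is intractable, which is why a variational approximation such as the fixed-point iteration above is used in place of exact inference.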