Representing and modeling knowledge in the face of uncertainty has long been a challenge in artificial intelligence. Graphical models are an apt way of representing uncertainty, and hidden variables in this framework provide a means of abstracting knowledge. Hidden variables can represent concepts that reveal the relations among observed phenomena and, through structure learning, capture their cause-and-effect relationships. Our focus is mainly on concept learning by situated agents, which learn throughout their lifetime and attend to important states in order to maximize their expected reward. We therefore present an algorithm for sequential learning of Bayesian networks with hidden variables. The proposed algorithm builds on recent advances in learning hidden variable networks in the batch setting, and combines approaches that allow sequential learning of both the parameters and the structure of the network. The incremental nature of the algorithm supports an agent's gradual learning, over its lifetime, as data is gathered progressively. Furthermore, it makes inference feasible when facing a large corpus of data that cannot be processed as a whole.
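To make the notion of sequential parameter learning concrete, the following is a minimal sketch of incremental updating of a single conditional probability table (CPT) with a Dirichlet prior, where each arriving case updates sufficient statistics rather than requiring a pass over the full data set. This is an illustration of the standard Bayesian sequential-update idea only, not the paper's full algorithm (which also handles hidden variables and structure changes, where expected counts from an EM-style step would replace the observed counts below); the class and variable names are hypothetical.

```python
from collections import defaultdict

class SequentialCPT:
    """Sequential (one-case-at-a-time) parameter learning for one
    discrete Bayesian-network node with a symmetric Dirichlet prior.

    Hypothetical illustration: posterior mean estimate after the data
    seen so far is (count + alpha) / (total + alpha * num_states).
    """

    def __init__(self, num_states, alpha=1.0):
        self.num_states = num_states
        self.alpha = alpha  # Dirichlet pseudo-count per state
        # Sufficient statistics: counts[parent_config][state]
        self.counts = defaultdict(lambda: [0] * num_states)

    def update(self, parent_config, state):
        # Incorporate one fully observed case incrementally; no need
        # to revisit earlier data, which is what enables lifelong,
        # gradual learning from a stream.
        self.counts[parent_config][state] += 1

    def prob(self, parent_config, state):
        row = self.counts[parent_config]
        total = sum(row) + self.alpha * self.num_states
        return (row[state] + self.alpha) / total

# Usage: binary node B with binary parent A, data arriving as a stream.
cpt_b = SequentialCPT(num_states=2)
for a, b in [(0, 1), (0, 1), (1, 0), (0, 0)]:
    cpt_b.update(parent_config=a, state=b)

print(cpt_b.prob(0, 1))  # posterior P(B=1 | A=0) → 0.6
```

Because only the counts are stored, memory stays constant in the number of cases seen, which reflects the abstract's point that a large corpus need not be handled as a whole.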