Lifted inference approaches have rendered large, previously intractable probabilistic inference problems quickly solvable by exploiting symmetries to handle whole sets of indistinguishable random variables at once. Still, in many if not most situations, training relational models does not benefit from lifting: symmetries within a model break easily, since variables become correlated by virtue of depending asymmetrically on evidence. An appealing idea for such situations is to train and recombine local models. This breaks long-range dependencies and makes it possible to exploit lifting within and across the local training tasks. Moreover, it naturally paves the way for online training of relational models. Specifically, we develop the first lifted stochastic gradient optimization method with gain vector adaptation, which processes the lifted pieces one after the other. On several datasets, the resulting optimizer converges to a solution of the same quality over an order of magnitude faster, simply because, unlike batch training, it starts optimizing long before it has seen the entire mega-example even once.
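The optimizer described in the abstract interleaves three ingredients: a piecewise decomposition of the training objective, a stochastic gradient update applied after each piece, and a gain vector, i.e. a per-parameter step-size multiplier that adapts as training proceeds. The following Python/NumPy sketch shows a minimal version of that outer loop under stated assumptions: each lifted piece is modeled as a callable returning the gradient of its local objective, and the gain rule used here (multiplicatively growing a gain when successive gradients agree in sign, shrinking it when they disagree) is one standard choice, not necessarily the paper's exact adaptation scheme; the lifting itself, which would make each per-piece gradient cheap to compute, is abstracted away behind those callables.

    import numpy as np

    def train_piecewise_sgd(pieces, dim, eta0=0.1, meta=0.05, epochs=5):
        """Piecewise SGD with per-parameter gain adaptation (illustrative).

        pieces : list of callables, each mapping the shared weight vector w
                 to the gradient of that (lifted) piece's local objective.
        dim    : number of shared parameters.
        """
        w = np.zeros(dim)         # shared weights of the relational model
        gains = np.ones(dim)      # gain vector: per-parameter step multiplier
        prev_g = np.zeros(dim)
        for _ in range(epochs):
            for piece_grad in pieces:          # one lifted piece at a time
                g = piece_grad(w)
                # Grow a gain when successive gradients agree in sign,
                # shrink it when they disagree (sign-agreement heuristic).
                gains *= np.where(g * prev_g > 0, 1 + meta, 1 - meta)
                gains = np.clip(gains, 1e-3, 1e3)
                w -= eta0 * gains * g          # update before seeing all pieces
                prev_g = g
        return w

The property the abstract highlights falls out of the inner loop: w is updated after every piece, so the optimizer makes progress long before a full pass over the mega-example completes, whereas a batch method would sum all piece gradients first and only then take a single step.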