Lifted online training of relational models with stochastic gradient methods

  • Authors:
  • Babak Ahmadi; Kristian Kersting; Sriraam Natarajan

  • Affiliations:
  • Babak Ahmadi: Knowledge Discovery Department, Fraunhofer IAIS, Sankt Augustin, Germany
  • Kristian Kersting: Knowledge Discovery Department, Fraunhofer IAIS, Sankt Augustin, Germany; Institute of Geodesy and Geoinformation, University of Bonn, Bonn, Germany
  • Sriraam Natarajan: School of Medicine, Wake Forest University, Winston-Salem

  • Venue:
  • ECML PKDD'12: Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases, Part I
  • Year:
  • 2012

Abstract

Lifted inference approaches have rendered large, previously intractable probabilistic inference problems quickly solvable by employing symmetries to handle whole sets of indistinguishable random variables. Still, in many if not most situations, training relational models will not benefit from lifting: symmetries within models easily break because variables become correlated by virtue of depending asymmetrically on evidence. An appealing idea for such situations is to train and recombine local models. This breaks long-range dependencies and makes it possible to exploit lifting within and across the local training tasks. Moreover, it naturally paves the way for online training of relational models. Specifically, we develop the first lifted stochastic gradient optimization method with gain vector adaptation, which processes each lifted piece one after the other. On several datasets, the resulting optimizer converges to solutions of the same quality over an order of magnitude faster, simply because, unlike batch training, it starts optimizing long before having seen the entire mega-example even once.
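To make the training scheme concrete, below is a minimal sketch of piecewise online training with per-parameter gain adaptation, as described in the abstract. All names here (`pieces`, `grad_fn`, `meta_rate`) are hypothetical placeholders, and the delta-bar-delta-style gain update merely stands in for the paper's gain vector adaptation; this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def piecewise_sgd(pieces, grad_fn, dim, epochs=10, eta0=0.1, meta_rate=0.02):
    """Sketch of online piecewise training with gain vector adaptation.

    pieces  : iterable of local training pieces (standing in for the
              lifted pieces the model is broken into) -- hypothetical.
    grad_fn : grad_fn(theta, piece) -> gradient of the piece's local
              negative log-likelihood w.r.t. the shared parameters
              -- hypothetical placeholder for the lifted gradient.
    """
    theta = np.zeros(dim)          # shared parameter vector
    gains = np.full(dim, eta0)     # one adaptive gain (step size) per parameter
    prev_grad = np.zeros(dim)

    for _ in range(epochs):
        for piece in pieces:       # process one lifted piece after the other
            g = grad_fn(theta, piece)
            # Delta-bar-delta-style adaptation (an assumption): grow the
            # gain where successive gradients agree in sign, shrink it
            # where they conflict.
            agree = g * prev_grad
            gains = np.where(agree > 0,
                             gains * (1 + meta_rate),
                             gains * (1 - meta_rate))
            theta -= gains * g     # stochastic gradient step on this piece
            prev_grad = g
    return theta
```

Processing one piece at a time is what lets optimization start long before the entire mega-example has been seen, while the per-parameter gains dampen oscillation across pieces whose gradients disagree.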