Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn quickly estimate a very expressive model. Experiments on several data sets show that this boosting method learns RDNs more efficiently than state-of-the-art statistical relational learning approaches.
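The core idea, stripped of the relational machinery, can be sketched in a few lines: for one conditional distribution P(y = 1 | x), each boosting iteration fits a small regression model to the pointwise functional gradient of the log-likelihood, which is simply y − P(y = 1 | x), and adds it to the current potential. The sketch below is a propositional stand-in (the paper uses relational regression trees over first-order features); all function names and the use of regression stumps as the weak learner are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_stump(X, r):
    """Least-squares regression stump: best single-feature threshold split.

    Stand-in for the relational regression trees used in the paper.
    """
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            pred = np.where(left, r[left].mean(), r[~left].mean())
            err = ((r - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, r[left].mean(), r[~left].mean())
    _, j, t, lv, rv = best
    return (j, t, lv, rv)

def stump_predict(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

def boost_conditional(X, y, n_iters=30, lr=0.5):
    """Functional-gradient boosting of one conditional P(y=1 | x).

    Instead of learning a single probability tree, fit a weak regressor
    to the functional gradient (y - P) at each iteration and sum them.
    """
    psi = np.zeros(len(y))                # current potential psi(x)
    stumps = []
    for _ in range(n_iters):
        residual = y - sigmoid(psi)       # pointwise functional gradient
        s = fit_stump(X, residual)
        psi += lr * stump_predict(s, X)
        stumps.append(s)
    return stumps

def predict_proba(stumps, X, lr=0.5):
    psi = sum(lr * stump_predict(s, X) for s in stumps)
    return sigmoid(psi)

# Toy conditional: y = AND(x0, x1). No single stump represents it, but the
# boosted sum of stumps does, illustrating how complex features accumulate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 50, dtype=float)
y = X[:, 0] * X[:, 1]
stumps = boost_conditional(X, y)
p = predict_proba(stumps, X)
```

In an RDN this loop runs once per predicate, and the weak learners branch on first-order conditions (shared across objects) rather than on propositional feature thresholds; the gradient computation is unchanged.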