Concept learning for EL++ by refinement and reinforcement

  • Authors:
  • Mahsa Chitsaz; Kewen Wang; Michael Blumenstein; Guilin Qi

  • Affiliations:
  • School of Information and Communication Technology, Griffith University, Australia; School of Computer Science and Engineering, Southeast University, China

  • Venue:
  • PRICAI'12: Proceedings of the 12th Pacific Rim International Conference on Trends in Artificial Intelligence
  • Year:
  • 2012

Abstract

Ontology construction in OWL is an important yet time-consuming task, even for knowledge engineers, so a (semi-)automatic approach would greatly assist in constructing ontologies. In this paper, we propose a novel approach to learning concept definitions in $\mathcal{EL}^{++}$ from a collection of assertions. Our approach is based on both a refinement operator from inductive logic programming and a reinforcement learning algorithm. The use of reinforcement learning significantly reduces the search space of candidate concepts. In addition, we present an experimental evaluation on constructing a family ontology. The results show that our approach is competitive with an existing learning system for $\mathcal{EL}$.
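To illustrate the kind of operator the abstract refers to, the sketch below shows a toy downward refinement operator for EL-style concept descriptions: starting from the top concept, it specializes a candidate by substituting atomic concepts, adding conjuncts, or recursing into existential restrictions. This is a minimal illustration only, not the authors' actual operator; the vocabulary (`Male`, `Female`, `Parent`, `hasChild`) is a hypothetical family-ontology signature chosen to match the paper's evaluation domain.

```python
# Toy downward refinement operator for EL-style concepts (illustrative only;
# not the operator defined in the paper). Concepts are nested tuples:
#   "Top"                      -- the top concept
#   "Male", "Female", ...      -- atomic concepts
#   ("and", C1, C2)            -- conjunction
#   ("exists", role, C)        -- existential restriction

ATOMS = ["Male", "Female", "Parent"]  # hypothetical concept names
ROLES = ["hasChild"]                  # hypothetical role name


def refine(concept):
    """Yield one-step downward refinements (specializations) of `concept`."""
    if concept == "Top":
        # Top refines to any atomic concept or an existential restriction.
        for a in ATOMS:
            yield a
        for r in ROLES:
            yield ("exists", r, "Top")
    elif isinstance(concept, str):
        # Specialize an atom by conjoining another atom or a restriction.
        for a in ATOMS:
            if a != concept:
                yield ("and", concept, a)
        for r in ROLES:
            yield ("and", concept, ("exists", r, "Top"))
    elif concept[0] == "exists":
        # Refine inside the filler of the existential restriction.
        _, r, filler = concept
        for f in refine(filler):
            yield ("exists", r, f)
    elif concept[0] == "and":
        # Refine either conjunct, keeping the other fixed.
        _, c1, c2 = concept
        for c in refine(c1):
            yield ("and", c, c2)
        for c in refine(c2):
            yield ("and", c1, c)


if __name__ == "__main__":
    for c in refine("Top"):
        print(c)
```

A search through this operator's refinement tree grows quickly, which is where the paper's contribution comes in: a reinforcement learning policy scores which refinement to expand next, pruning the candidate space instead of enumerating it exhaustively.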