Leveraging domain knowledge to learn normative behavior: a Bayesian approach

  • Authors:
  • Hadi Hosseini; Mihaela Ulieru

  • Affiliations:
  • David R. Cheriton School of Computer Science, University of Waterloo, Canada; Adaptive Risk Management Lab, University of New Brunswick, Canada

  • Venue:
  • ALA'11: Proceedings of the 11th International Conference on Adaptive and Learning Agents
  • Year:
  • 2011

Abstract

This paper addresses the problem of norm adaptation using Bayesian reinforcement learning. We are concerned with both the effectiveness of adding prior domain knowledge when facing environments with different settings and the speed of adapting to a new environment. Individuals develop their normative framework via interaction with their surrounding environment, including other individuals. An agent acquires domain-dependent knowledge in one environment and later reuses it in different settings. This work is novel in that it represents normative behaviors as probabilities over belief sets. We propose a two-level learning framework in which an agent learns the values of normative actions and, once it is confident about them, feeds them back into its belief set as prior knowledge. Developing a prior belief set about a domain improves the agent's ability to adjust its norms to a new environment's dynamics. Our evaluation shows that a normative agent, having been trained in an initial environment, is able to adjust its beliefs about the dynamics and behavioral norms of a new environment, and therefore converges to the optimal policy more quickly, especially in the early stages of learning.
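
To make the prior-transfer idea concrete, the following is a minimal illustrative sketch, not the authors' algorithm: a model-based Bayesian learner keeps Dirichlet beliefs over transition dynamics, and after training in one environment its posterior counts, scaled by a confidence factor, seed the prior of a learner in a new environment. The class name DirichletModelLearner, the export_prior method, the confidence parameter, and the toy dynamics are all assumptions introduced here for illustration.

import numpy as np

# Illustrative sketch only: Dirichlet beliefs over transition dynamics,
# with posterior counts reused as prior knowledge in a new environment.

class DirichletModelLearner:
    def __init__(self, n_states, n_actions, prior_counts=None):
        # Uniform Dirichlet prior unless prior knowledge is supplied.
        if prior_counts is None:
            prior_counts = np.ones((n_states, n_actions, n_states))
        self.counts = prior_counts.astype(float)

    def update(self, s, a, s_next):
        # Bayesian update: add one observed transition to the counts.
        self.counts[s, a, s_next] += 1.0

    def transition_probs(self, s, a):
        # Posterior mean of the Dirichlet belief over next states.
        return self.counts[s, a] / self.counts[s, a].sum()

    def export_prior(self, confidence=0.5):
        # Scale posterior counts toward the uniform prior so they inform,
        # but do not overwhelm, learning in a new environment.
        return 1.0 + confidence * (self.counts - 1.0)


# Usage: train in environment A, then transfer beliefs to environment B.
agent_a = DirichletModelLearner(n_states=4, n_actions=2)
rng = np.random.default_rng(0)
for _ in range(500):
    s, a = rng.integers(4), rng.integers(2)
    s_next = (s + a) % 4          # toy deterministic dynamics for environment A
    agent_a.update(s, a, s_next)

agent_b = DirichletModelLearner(4, 2, prior_counts=agent_a.export_prior())
print(agent_b.transition_probs(0, 1))  # informed belief before any experience in B

In this sketch the confidence factor plays the role of deciding how strongly previously learned beliefs should shape the prior; the paper's two-level framework similarly feeds learned values back into the belief set only when the agent is confident about them.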