Trace equivalence characterization through reinforcement learning

  • Authors:
  • Josée Desharnais, François Laviolette, Krishna Priya Darsini Moturu, Sami Zhioua

  • Affiliations:
  • IFT-GLO, Université Laval, Québec (QC), Canada (all authors)

  • Venue:
  • AI'06 Proceedings of the 19th international conference on Advances in Artificial Intelligence: Canadian Society for Computational Studies of Intelligence
  • Year:
  • 2006

Abstract

In the context of probabilistic verification, we introduce a new notion of trace-equivalence divergence between pairs of Labelled Markov processes. This divergence equals the optimal value of a Markov Decision Process derived from the pair of processes, and it can therefore be estimated by Reinforcement Learning methods. Moreover, we provide PAC guarantees on this estimation.
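The abstract's central idea is that a divergence can be computed as the optimal value of a derived MDP, which standard reinforcement learning can estimate. The paper's construction of that MDP from two Labelled Markov processes is not given here, so the sketch below uses a made-up toy MDP purely to illustrate the general mechanism: tabular Q-learning converging to the optimal value of an MDP's initial state. All states, transitions, and rewards are hypothetical.

```python
import random

# Illustrative sketch only: the paper derives a specific MDP from a pair of
# Labelled Markov processes; here we estimate the optimal value of an
# arbitrary toy MDP with tabular Q-learning to show the general mechanism.

STATES = [0, 1, 2]          # state 2 is absorbing (terminal)
ACTIONS = [0, 1]
# P[s][a] = list of (probability, next_state, reward) -- hypothetical values
P = {
    0: {0: [(1.0, 1, 0.0)], 1: [(0.5, 1, 1.0), (0.5, 2, 0.0)]},
    1: {0: [(1.0, 2, 2.0)], 1: [(1.0, 2, 0.0)]},
    2: {0: [(1.0, 2, 0.0)], 1: [(1.0, 2, 0.0)]},
}
GAMMA = 0.9

def step(s, a, rng):
    """Sample a transition (next state, reward) from the toy MDP."""
    r, acc = rng.random(), 0.0
    for p, s2, rew in P[s][a]:
        acc += p
        if r <= acc:
            return s2, rew
    return P[s][a][-1][1], P[s][a][-1][2]

def q_learning(episodes=20000, alpha=0.1, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}
    for _ in range(episodes):
        s = 0
        while s != 2:
            # Epsilon-greedy action selection
            a = rng.choice(ACTIONS) if rng.random() < eps else max(Q[s], key=Q[s].get)
            s2, rew = step(s, a, rng)
            # Standard Q-learning update toward the bootstrapped target
            Q[s][a] += alpha * (rew + GAMMA * max(Q[s2].values()) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
v0 = max(Q[0].values())   # estimated optimal value of the initial state
```

For this toy MDP the exact optimal value at state 0 is 1.8 (take action 0 twice: 0 + 0.9 * 2.0), so `v0` should land close to that; in the paper's setting the analogous quantity would be the divergence between the two processes, with PAC bounds controlling the estimation error.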