TR2017-049

Value-Aware Loss Function for Model-based Reinforcement Learning


    •  Farahmand, A.-M., Barreto, A.M.S., Nikovski, D.N., "Value-Aware Loss Function for Model-based Reinforcement Learning", Artificial Intelligence and Statistics (AISTATS), Vol. 54, April 2017.
      @article{Farahmand2017apr,
        author = {Farahmand, Amir-massoud and Barreto, Andre M.S. and Nikovski, Daniel N.},
        title = {Value-Aware Loss Function for Model-based Reinforcement Learning},
        journal = {Artificial Intelligence and Statistics (AISTATS)},
        year = 2017,
        volume = 54,
        month = apr,
        url = {https://www.merl.com/publications/TR2017-049}
      }
Research Areas: Artificial Intelligence, Data Analytics, Optimization

Abstract:

We consider the problem of estimating the transition probability kernel to be used by a model-based reinforcement learning (RL) algorithm. We argue that estimating a generative model by minimizing a probabilistic loss, such as the log-loss, is overkill because it ignores the underlying structure of the decision problem and the RL algorithm that intends to solve it. We introduce a loss function that takes the structure of the value function into account. We provide a finite-sample upper bound for the loss function showing how the error depends on the model approximation error, the number of samples, and the complexity of the model space. We also empirically compare the method with the maximum likelihood estimator on a simple problem.
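
To make the contrast concrete, the following is a minimal Python sketch of the difference between a log-loss (maximum likelihood) fit and a value-aware fit of a transition model. It assumes a small tabular MDP and a deliberately misspecified one-parameter model class, and it uses the squared error in the predicted expected next-state value as a simplified stand-in for a value-aware loss; all names and the specific loss form are illustrative, not the paper's exact definitions.

    # A minimal sketch, assuming a small tabular MDP and a deliberately
    # misspecified one-parameter model class. The squared error in the
    # predicted expected next-state value below is a simplified stand-in
    # for a value-aware loss, not the paper's exact definition.
    import numpy as np

    rng = np.random.default_rng(0)

    n_states = 5
    # True next-state distribution for a single (state, action) pair.
    p_true = np.array([0.10, 0.10, 0.10, 0.35, 0.35])
    # A value function over next states; a value-aware loss only cares
    # about model errors that change expected values under V.
    V = np.array([0.0, 1.0, 2.0, 10.0, 11.0])

    # Sampled transitions and their empirical distribution.
    counts = rng.multinomial(200, p_true)
    empirical = counts / counts.sum()

    # Misspecified model class: p(s) proportional to beta**s.
    def model(beta):
        w = beta ** np.arange(n_states)
        return w / w.sum()

    def neg_log_likelihood(beta):
        return -np.sum(counts * np.log(model(beta) + 1e-12))

    def value_aware_loss(beta):
        # Penalize only the error in the predicted expected value E[V].
        return (model(beta) @ V - empirical @ V) ** 2

    betas = np.linspace(0.05, 5.0, 500)
    beta_mle = betas[np.argmin([neg_log_likelihood(b) for b in betas])]
    beta_val = betas[np.argmin([value_aware_loss(b) for b in betas])]

    for name, beta in [("MLE fit", beta_mle), ("value-aware fit", beta_val)]:
        err = abs(model(beta) @ V - p_true @ V)
        print(f"{name}: beta = {beta:.2f}, |E[V] error vs. truth| = {err:.3f}")

Under misspecification the two criteria generally select different models: the value-aware fit gives up accuracy as a density estimate in order to match the quantity the planner actually uses, which is the point the abstract makes against treating model learning as pure generative modeling.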