TR2021-009

Robust Constrained-MDPs: Soft-Constrained Robust Policy Optimization under Model Uncertainty


    •  Russel, R.H., Benosman, M., van Baar, J., "Robust Constrained-MDPs: Soft-Constrained Robust Policy Optimization under Model Uncertainty", Advances in Neural Information Processing Systems (NeurIPS)-workshop, January 2021.
      @inproceedings{Russel2021jan,
        author = {Russel, Reazul Hasan and Benosman, Mouhacine and van Baar, Jeroen},
        title = {Robust Constrained-MDPs: Soft-Constrained Robust Policy Optimization under Model Uncertainty},
        booktitle = {Advances in Neural Information Processing Systems (NeurIPS)-workshop},
        year = 2021,
        month = jan,
        url = {https://www.merl.com/publications/TR2021-009}
      }
  • Research Areas: Control, Optimization

Abstract:

In this paper, we focus on the problem of robustifying reinforcement learning (RL) algorithms with respect to model uncertainties. In the framework of model-based RL, we propose to merge the theory of constrained Markov decision processes (CMDP) with the theory of robust Markov decision processes (RMDP), leading to a formulation of robust constrained MDPs (RCMDP). This formulation, simple in essence, allows us to design RL algorithms that are robust in performance and provide constraint-satisfaction guarantees with respect to uncertainties in the system's state transition probabilities. RCMDPs are important for real-life applications of RL. For instance, such a formulation can play an important role in policy transfer from simulation to the real world (Sim2Real) in safety-critical applications, which benefit from performance and safety guarantees that are robust with respect to model uncertainty. We first state the general problem formulation under the RCMDP concept, and then propose a Lagrangian formulation of the resulting constrained optimization problem, leading to a robust-constrained policy-gradient RL algorithm. We finally validate this concept on the inventory management problem.
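
As a schematic illustration of the setting described above (our own shorthand, not necessarily the paper's exact notation), one can write the RCMDP problem as robust return maximization under a robust constraint, where \(\mathcal{P}\) denotes an assumed uncertainty set over transition kernels, \(r\) a reward, \(d\) a constraint cost with budget \(\bar{d}\), and \(\gamma\) a discount factor:

\[
\max_{\pi}\; \min_{P \in \mathcal{P}} \mathbb{E}^{\pi}_{P}\Big[\textstyle\sum_{t \ge 0} \gamma^{t} r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
\max_{P \in \mathcal{P}} \mathbb{E}^{\pi}_{P}\Big[\textstyle\sum_{t \ge 0} \gamma^{t} d(s_t, a_t)\Big] \le \bar{d}.
\]

A soft-constrained (Lagrangian) surrogate of this problem, with multiplier \(\lambda \ge 0\), would then read

\[
\mathcal{L}(\pi, \lambda) \;=\; \min_{P \in \mathcal{P}} \mathbb{E}^{\pi}_{P}\Big[\textstyle\sum_{t \ge 0} \gamma^{t} r(s_t, a_t)\Big]
\;-\; \lambda \Big( \max_{P \in \mathcal{P}} \mathbb{E}^{\pi}_{P}\Big[\textstyle\sum_{t \ge 0} \gamma^{t} d(s_t, a_t)\Big] - \bar{d} \Big),
\]

on which alternating gradient ascent in \(\pi\) and descent in \(\lambda\) is one natural route to a robust-constrained policy-gradient algorithm of the kind the abstract refers to.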

 

  • Related Publication

  •  Russel, R.H., Benosman, M., van Baar, J., "Robust Constrained-MDPs: Soft-Constrained Robust Policy Optimization under Model Uncertainty", arXiv, October 2020.
    @article{Russel2020oct,
      author = {Russel, Reazul Hasan and Benosman, Mouhacine and van Baar, Jeroen},
      title = {Robust Constrained-MDPs: Soft-Constrained Robust Policy Optimization under Model Uncertainty},
      journal = {arXiv},
      year = 2020,
      month = oct,
      url = {https://arxiv.org/abs/2010.04870}
    }