TR2020-076

Reinforcement Learning-based Model Reduction for Partial Differential Equations


    •  Benosman, M., Chakrabarty, A., Borggaard, J., "Reinforcement Learning-based Model Reduction for Partial Differential Equations", World Congress of the International Federation of Automatic Control (IFAC), Rolf Findeisen, Sandra Hirche, Klaus Janschek, and Martin Mönnigmann, Eds., DOI: 10.1016/j.ifacol.2020.12.1515, June 2020, pp. 7704-7709.
      @inproceedings{Benosman2020jun,
        author = {Benosman, Mouhacine and Chakrabarty, Ankush and Borggaard, Jeff},
        title = {Reinforcement Learning-based Model Reduction for Partial Differential Equations},
        booktitle = {World Congress of the International Federation of Automatic Control (IFAC)},
        year = 2020,
        editor = {Rolf Findeisen and Sandra Hirche and Klaus Janschek and Martin Mönnigmann},
        pages = {7704--7709},
        month = jun,
        publisher = {Elsevier},
        doi = {10.1016/j.ifacol.2020.12.1515},
        url = {https://www.merl.com/publications/TR2020-076}
      }
  • Research Area: Optimization

Abstract:

This paper addresses the problem of stable model reduction for partial differential equations (PDEs). We propose to use the proper orthogonal decomposition (POD) method to project the PDE model onto a lower-dimensional subspace, yielding an ordinary differential equation (ODE) model. Following the closure-model approach, we then stabilize this reduced model by using reinforcement learning (RL) to learn an optimal closure term. We analyze the stability of the proposed RL closure model and demonstrate its performance on the coupled Burgers equation.
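To make the first step concrete, the sketch below illustrates a generic POD-Galerkin projection of a PDE onto a low-dimensional ODE model. It is a minimal illustration only: the snapshot data, the diffusion operator, and the reduced dimension are assumptions made for the example and are not the paper's actual setup, and the learned RL closure term discussed in the paper is not included here.

    # Minimal sketch of POD-Galerkin model reduction (assumed toy problem,
    # not the paper's setup; the RL closure term is omitted).
    import numpy as np

    # Synthetic snapshot matrix: each column is the PDE state u(x, t_k) on a grid.
    nx, nt = 200, 100
    x = np.linspace(0.0, 1.0, nx)
    times = np.linspace(0.0, 1.0, nt)
    snapshots = np.array([
        sum(np.sin(k * np.pi * x) * np.exp(-(k * np.pi) ** 2 * 0.01 * tk) / k
            for k in range(1, 6))
        for tk in times
    ]).T                                # shape (nx, nt)

    # POD basis: leading left singular vectors of the snapshot matrix.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = 5                               # reduced dimension (illustrative choice)
    Phi = U[:, :r]                      # (nx, r) POD modes

    # Example full-order operator: 1D diffusion by finite differences
    # (homogeneous Dirichlet boundaries implied).
    dx = x[1] - x[0]
    A = (np.diag(np.full(nx - 1, 1.0), -1)
         - 2.0 * np.eye(nx)
         + np.diag(np.full(nx - 1, 1.0), 1)) * 0.01 / dx**2

    # Galerkin projection: reduced ODE  da/dt = (Phi^T A Phi) a.
    A_r = Phi.T @ A @ Phi               # (r, r) reduced operator

    # Reduced initial condition and a simple explicit-Euler rollout of the ODE model.
    a = Phi.T @ snapshots[:, 0]
    dt = 1e-3
    for _ in range(1000):
        a = a + dt * (A_r @ a)          # a closure term would be added here
    u_approx = Phi @ a                  # lift the reduced state back to the grid

In the paper's setting, a closure term is appended to the reduced dynamics to restore the stabilizing effect of the truncated modes, and RL is used to tune that term; the line marked above indicates where such a term would enter this toy rollout.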