TR2022-042

Reinforcement Learning State Estimation for High-Dimensional Nonlinear Systems


    •  Mowlavi, S., Benosman, M., Nabi, S., "Reinforcement Learning State Estimation for High-Dimensional Nonlinear Systems", International Conference on Learning Representations (ICLR) Workshop, April 2022.
      @inproceedings{Mowlavi2022apr,
        author    = {Mowlavi, Saviz and Benosman, Mouhacine and Nabi, Saleh},
        title     = {Reinforcement Learning State Estimation for High-Dimensional Nonlinear Systems},
        booktitle = {International Conference on Learning Representations (ICLR) Workshop},
        year      = 2022,
        month     = apr,
        url       = {https://www.merl.com/publications/TR2022-042}
      }
  • Research Areas: Dynamical Systems, Machine Learning, Optimization

Abstract:

High-dimensional nonlinear systems such as atmospheric or oceanic flows present a computational challenge for data assimilation (DA) algorithms such as Kalman filters. A potential solution is to rely on a reduced-order model (ROM) of the dynamics. However, ROMs are prone to large errors, which negatively affect the accuracy of the resulting forecast. Here, we introduce the reinforcement learning reduced-order estimator (RL-ROE), a ROM-based data assimilation algorithm in which the correction term that incorporates the measurement data is given by a nonlinear stochastic policy trained through reinforcement learning. The flexibility of the nonlinear policy enables the RL-ROE to compensate for errors of the ROM, while still taking advantage of the imperfect knowledge of the dynamics. We show that the trained RL-ROE outperforms a Kalman filter designed using the same ROM, and displays robust estimation performance with respect to different reference trajectories and initial state estimates.
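To make the estimator structure concrete, the update described in the abstract can be sketched as: advance the state estimate with the (imperfect) ROM, then add a correction produced by a learned policy from the current measurement. The code below is a minimal schematic, not the authors' implementation: the toy linear ROM, the measurement operator `C`, and the simple learned-gain form of the policy are all assumptions standing in for the trained nonlinear stochastic policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "imperfect" reduced-order model (assumed linear here for illustration).
A = np.array([[0.9, 0.1],
              [-0.1, 0.9]])
# Assumed measurement operator: we observe the first reduced coordinate.
C = np.array([[1.0, 0.0]])

def rom_step(z):
    """Advance the reduced state one step with the ROM dynamics."""
    return A @ z

def policy(z_hat, y, theta):
    """Stand-in for the trained policy pi_theta(z_hat, y).

    Here it is just a learned gain applied to the innovation y - C z_hat;
    in the RL-ROE the correction is a nonlinear stochastic policy trained
    through reinforcement learning.
    """
    innovation = y - C @ z_hat
    return theta @ innovation

def rl_roe_estimate(ys, z0, theta):
    """Run the estimator: ROM prediction plus policy correction each step."""
    z_hat = z0
    trajectory = [z_hat]
    for y in ys:
        z_hat = rom_step(z_hat) + policy(z_hat, y, theta)
        trajectory.append(z_hat)
    return np.array(trajectory)

# Usage: synthetic noisy measurements from a "true" trajectory.
z_true = np.array([1.0, 0.0])
ys = []
for _ in range(20):
    z_true = rom_step(z_true)
    ys.append(C @ z_true + 0.01 * rng.standard_normal(1))

theta = np.array([[0.5], [0.1]])  # placeholder for trained policy parameters
est = rl_roe_estimate(ys, np.zeros(2), theta)
```

A Kalman filter built on the same ROM would compute the gain from propagated covariances; the point of the RL-ROE is that the policy replacing that gain is nonlinear and trained, so it can absorb systematic ROM error that a covariance-based gain cannot.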