TR2025-105

Policy Optimization for PDE Control with a Warm Start


    •  Zhang, X., Mowlavi, S., Benosman, M., Basar, T., "Policy Optimization for PDE Control with a Warm Start", American Control Conference (ACC), July 2025.
      @inproceedings{Zhang2025jul,
        author = {Zhang, Xiangyuan and Mowlavi, Saviz and Benosman, Mouhacine and Basar, Tamer},
        title = {{Policy Optimization for PDE Control with a Warm Start}},
        booktitle = {American Control Conference (ACC)},
        year = 2025,
        month = jul,
        url = {https://www.merl.com/publications/TR2025-105}
      }
Research Areas: Computational Sensing, Dynamical Systems, Machine Learning, Signal Processing

Abstract:

Dimensionality reduction is crucial for controlling nonlinear partial differential equations (PDEs) through a “reduce-then-design” strategy, which identifies a reduced-order model and then implements model-based control solutions. However, inaccuracies in the reduced-order model can substantially degrade controller performance, especially for PDEs with chaotic behavior. To address this issue, we augment the reduce-then-design procedure with a policy optimization (PO) step. The PO step fine-tunes the model-based controller to compensate for the modeling error introduced by dimensionality reduction. This augmentation shifts the overall strategy into reduce-then-design-then-adapt, where the model-based controller serves as a warm start for PO. Specifically, we study the state-feedback tracking control of PDEs, which aims to align the PDE state with a specified constant target subject to a linear-quadratic cost. Through extensive experiments, we show that a few iterations of PO can significantly improve the model-based controller's performance. Our approach offers a cost-effective alternative to PDE control via end-to-end reinforcement learning.
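The warm-start idea described in the abstract can be illustrated on a toy linear-quadratic problem. The sketch below is an assumption-laden stand-in, not the paper's method: the "true" system and the mismatched "reduced" model are small hypothetical matrices, the warm start is a discrete-time LQR gain designed on the inaccurate model, and the PO step takes a few exact policy-gradient steps (in the style of the known LQR policy-gradient formula, with initial-state covariance set to the identity) against the true system to close the model-mismatch gap.

```python
import numpy as np

n, m = 3, 1  # hypothetical toy dimensions (stand-in for a discretized PDE)

# "True" discrete-time dynamics (assumed, stable) and a mismatched reduced model
A_true = np.array([[0.9, 0.2, 0.0],
                   [0.0, 0.8, 0.3],
                   [0.1, 0.0, 0.7]])
A_model = A_true + 0.05 * np.array([[0.0, 1.0, 0.0],
                                    [0.0, 0.0, 1.0],
                                    [1.0, 0.0, 0.0]])  # modeling error
B = np.array([[0.0], [0.0], [1.0]])
Q, R = np.eye(n), np.eye(m)  # linear-quadratic cost weights

def dlqr_gain(A, B, Q, R, iters=500):
    """Model-based LQR gain u = -K x via Riccati fixed-point iteration."""
    P = Q.copy()
    for _ in range(iters):
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
            R + B.T @ P @ B, B.T @ P @ A)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def lq_cost_and_grad(K, A, B, Q, R, iters=500):
    """Exact LQ cost and policy gradient for the gain K (Sigma_0 = I assumed)."""
    Acl = A - B @ K
    P = Q + K.T @ R @ K        # value matrix P_K (Lyapunov fixed point)
    S = np.eye(len(A))         # accumulated state covariance Sigma_K
    for _ in range(iters):
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl
        S = np.eye(len(A)) + Acl @ S @ Acl.T
    cost = np.trace(P)
    grad = 2 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ S
    return cost, grad

# Warm start: controller designed on the inaccurate reduced model
K = dlqr_gain(A_model, B, Q, R)
cost0, _ = lq_cost_and_grad(K, A_true, B, Q, R)

# A few policy-gradient steps against the true system fine-tune the gain
for _ in range(20):
    _, grad = lq_cost_and_grad(K, A_true, B, Q, R)
    K = K - 1e-3 * grad

cost_final, _ = lq_cost_and_grad(K, A_true, B, Q, R)
print(f"warm-start cost: {cost0:.4f}  after PO fine-tuning: {cost_final:.4f}")
```

The design choice mirrors the abstract's pipeline: the model-based gain is already stabilizing and near-optimal, so only a handful of gradient steps are needed, which is what makes the warm start cheaper than training a policy from scratch with end-to-end reinforcement learning.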