TR2016-145

Learning to Control Partial Differential Equations: Regularized Fitted Q-Iteration Approach


    •  Farahmand, A.-M., Nabi, S., Grover, P., Nikovski, D.N., "Learning to Control Partial Differential Equations: Regularized Fitted Q-Iteration Approach", IEEE Conference on Decision and Control (CDC), DOI: 10.1109/CDC.2016.7798966, December 2016, pp. 4578-4585.
      @inproceedings{Farahmand2016dec,
        author = {Farahmand, Amir-massoud and Nabi, Saleh and Grover, Piyush and Nikovski, Daniel N.},
        title = {Learning to Control Partial Differential Equations: Regularized Fitted Q-Iteration Approach},
        booktitle = {IEEE Conference on Decision and Control (CDC)},
        year = 2016,
        pages = {4578--4585},
        month = dec,
        doi = {10.1109/CDC.2016.7798966},
        url = {https://www.merl.com/publications/TR2016-145}
      }
Research Areas: Artificial Intelligence, Data Analytics, Optimization

Abstract:

This paper formulates a class of partial differential equation (PDE) control problems as a reinforcement learning (RL) problem. We design an RL-based algorithm that works directly with the state of the PDE, an infinite-dimensional vector, thus allowing us to avoid the model order reduction commonly used in conventional PDE controller design approaches. We apply the method to the problem of flow control for a time-varying 2D convection-diffusion PDE, as a simplified model for heating, ventilation, and air conditioning (HVAC) control design in a room.
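
The paper's own algorithm is not reproduced here, but the minimal sketch below illustrates the generic regularized Fitted Q-Iteration loop that the title refers to. It assumes a finite set of control actions, a batch of precollected transitions whose states are feature vectors (for example, a discretized temperature field), and scikit-learn's KernelRidge as the regularized regressor; the function name and all parameters are illustrative choices, not the authors' implementation.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def regularized_fitted_q_iteration(transitions, actions, gamma=0.95,
                                   n_iterations=50, reg_lambda=1e-3):
    """Generic regularized Fitted Q-Iteration on a batch of transitions.

    transitions: list of (state, action, reward, next_state) tuples, where
        each state is a feature vector (e.g., a discretized temperature field).
    actions: finite set of admissible control actions (each must appear
        at least once in the batch for this simple sketch to run).
    """
    states = np.array([s for s, a, r, sp in transitions])
    acts = np.array([a for s, a, r, sp in transitions])
    rewards = np.array([r for s, a, r, sp in transitions])
    next_states = np.array([sp for s, a, r, sp in transitions])

    # One regressor per action; regularization enters via the ridge penalty.
    q_models = {a: None for a in actions}

    def q_values(model_dict, X):
        # Evaluate Q(x, a) for every action; zero before the first fit.
        return np.column_stack([
            model_dict[a].predict(X) if model_dict[a] is not None
            else np.zeros(len(X))
            for a in actions
        ])

    for _ in range(n_iterations):
        # Bellman targets: r + gamma * max_a' Q(x', a')
        targets = rewards + gamma * q_values(q_models, next_states).max(axis=1)
        new_models = {}
        for a in actions:
            mask = acts == a
            model = KernelRidge(alpha=reg_lambda, kernel="rbf")
            model.fit(states[mask], targets[mask])
            new_models[a] = model
        q_models = new_models

    return q_models

A greedy controller derived from the result would, at each observed state, evaluate every fitted per-action model and apply the action with the highest predicted Q-value.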