(Learn more about the MERL Seminar Series.)
Date & Time:
Tuesday, April 12, 2022; 11:00 AM EDT
Reinforcement Learning (RL), like many AI-based techniques, is currently receiving a great deal of attention. RL is most commonly supported by classic Machine Learning tools, typically Deep Neural Networks (DNNs). While there are good reasons for using DNNs in RL, there are also significant drawbacks. The lack of “explainability” of the resulting control policies, and the difficulty of providing guarantees on their closed-loop behavior (safety, stability), make DNN-based policies problematic in many applications. In this talk, we will discuss an alternative approach to supporting RL, via formal optimal control tools based on Model Predictive Control (MPC). This approach alleviates the issues above, but also presents its own challenges. We will discuss why MPC is a valid tool to support RL, and how MPC can be combined with RL (RLMPC). We will then present some recent results on this combination, the known challenges, and the kinds of control applications where we believe RLMPC will be a valuable approach.
Sebastien Gros obtained his PhD from EPFL, Switzerland, and was a postdoc in the Optimal Control group at KU Leuven, Belgium. He was an assistant and later associate professor at Chalmers, Sweden, where he collaborated closely with the Volvo companies on autonomous driving and smart traffic. Since 2019, he has been a professor of Optimal Control at NTNU, the largest university in Norway. He is now Head of the Department of Engineering Cybernetics, which hosts world-class activities in mobile robotics and autonomous vehicles. Prof. Gros focuses his research on the theory of optimal control and on its application to energy-related problems such as smart buildings and efficient mobility.