TR2020-063

Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements


    •  Romeres, D., Dalla Libera, A., Jha, D., Yerazunis, W.S., Nikovski, D.N., "Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements", Robotics and Automation Letters, DOI: 10.1109/LRA.2020.2977255, Vol. 5, No. 2, pp. 3548-3555, May 2020.
      @article{Romeres2020may,
        author = {Romeres, Diego and Dalla Libera, Alberto and Jha, Devesh and Yerazunis, William S. and Nikovski, Daniel N.},
        title = {Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements},
        journal = {Robotics and Automation Letters},
        year = 2020,
        volume = 5,
        number = 2,
        pages = {3548--3555},
        month = may,
        doi = {10.1109/LRA.2020.2977255},
        issn = {2377-3766},
        url = {https://www.merl.com/publications/TR2020-063}
      }
  • Research Area: Robotics

In this paper, we propose a derivative-free model learning framework for Reinforcement Learning (RL) algorithms based on Gaussian Process Regression (GPR). In many mechanical systems, the sensing instruments can measure only positions. Instead of representing the system state, as suggested by the physics, as a collection of positions, velocities, and accelerations, we therefore define the state as the set of past position measurements. However, the equations of motion derived from physical first principles cannot be applied directly in this framework, as they are functions of velocities and accelerations. For this reason, we introduce a novel derivative-free, physically inspired kernel, which can easily be combined with nonparametric derivative-free Gaussian Process models. Tests performed on two real platforms show that the considered state definition, combined with the proposed model, improves estimation performance and data efficiency with respect to traditional GPR-based models. Finally, we validate the proposed framework by solving two RL control problems on two real robotic systems.
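To make the derivative-free state definition concrete, the sketch below is an illustrative toy example (not the paper's implementation): the state is a window of past position measurements plus the current control, and a plain GP regressor with a standard RBF kernel learns the next position. The paper's physically-inspired derivative-free kernel would replace the RBF here; the 1-D dynamics, history length, and all names are assumptions made for illustration.

```python
import numpy as np

# Toy 1-D system: only positions x_t are measurable. Instead of the
# physical state (x, xdot, xddot), use the derivative-free state
# [x_t, x_{t-1}, ..., x_{t-H+1}, u_t] built from past positions.
rng = np.random.default_rng(0)
H, T = 3, 100                                # history length, trajectory length
u = rng.uniform(-1.0, 1.0, size=T)           # random control inputs
x = np.zeros(T + 1)
for t in range(T):                           # toy damped dynamics (unknown to the learner)
    x[t + 1] = 0.9 * x[t] + 0.1 * u[t]

# Training pairs: derivative-free state -> next position.
X = np.array([np.r_[x[t - H + 1:t + 1][::-1], u[t]] for t in range(H - 1, T)])
y = x[H:T + 1]

def rbf(A, B, ell=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ell**2)

# GP regression: posterior mean prediction with a small jitter term.
K = rbf(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def predict(Xs):
    """Posterior mean of the next position for query states Xs."""
    return rbf(Xs, X) @ alpha
```

In an RL loop, `predict` would serve as the learned forward model for planning or policy evaluation, queried only with quantities that are directly measurable.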

 
