TR2019-129

Trajectory Optimization for Unknown Constrained Systems using Reinforcement Learning


    Ota, K., Jha, D.K., Oiki, T., Miura, M., Nammoto, T., Nikovski, D., Mariyama, T., "Trajectory Optimization for Unknown Constrained Systems using Reinforcement Learning", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), DOI: 10.1109/IROS40897.2019.8968010, November 2019, pp. 3487-3494.
      @inproceedings{Ota2019nov,
        author    = {Ota, Kei and Jha, Devesh K. and Oiki, Tomohiro and Miura, Mamoru and Nammoto, Takashi and Nikovski, Daniel and Mariyama, Toshisada},
        title     = {Trajectory Optimization for Unknown Constrained Systems using Reinforcement Learning},
        booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
        year      = 2019,
        pages     = {3487--3494},
        month     = nov,
        publisher = {IEEE},
        doi       = {10.1109/IROS40897.2019.8968010},
        issn      = {2153-0866},
        isbn      = {978-1-7281-4004-9},
        url       = {https://www.merl.com/publications/TR2019-129}
      }
Research Area: Robotics

Abstract:

In this paper, we propose a reinforcement learning-based algorithm for trajectory optimization of constrained dynamical systems. The problem is motivated by the fact that the dynamics of most robotic systems are not always known, and generating smooth, dynamically feasible trajectories for such systems can be difficult. Sampling-based motion planning algorithms may produce trajectories that are prone to undesirable control jumps; however, they can usually provide a good reference trajectory that a model-free reinforcement learning algorithm can exploit to limit its search domain and quickly find a dynamically smooth trajectory. We use this idea to train a reinforcement learning agent to learn a dynamically smooth trajectory in a curriculum learning setting. Furthermore, for generalization, we parameterize the policies with goal locations, so that the agent can be trained for multiple goals simultaneously. We show results in both simulated environments and real experiments on a 6-DoF manipulator arm operated in position-controlled mode to validate the proposed idea. We compare the proposed approach against a PID controller that tracks a designed trajectory in configuration space. Our experiments show that our RL agent trained with a reference path outperforms a model-free PID controller of the type commonly used on many robotic platforms for trajectory tracking.
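
To make the core idea concrete, below is a minimal Python sketch of a goal-parameterized observation and a shaped reward that rewards progress toward the goal while penalizing deviation from a planner's reference path. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the function names (goal_conditioned_observation, shaped_reward), the weights (w_progress, w_deviation), and the tolerance goal_tol are hypothetical.

    import numpy as np

    def goal_conditioned_observation(joint_angles, ee_pos, goal):
        """Goal-parameterized state: appending the goal location lets a
        single policy be trained for multiple goals simultaneously."""
        return np.concatenate([joint_angles, ee_pos, goal])

    def shaped_reward(ee_pos, prev_ee_pos, goal, reference_path,
                      w_progress=1.0, w_deviation=0.5, goal_tol=0.01):
        """Illustrative shaped reward: progress toward the goal minus a
        penalty for straying from the reference path produced by a
        sampling-based planner, which limits the RL search domain."""
        # Distance from the end effector to the nearest reference waypoint
        deviation = np.min(np.linalg.norm(reference_path - ee_pos, axis=1))
        # Progress: reduction in goal distance since the previous step
        progress = (np.linalg.norm(prev_ee_pos - goal)
                    - np.linalg.norm(ee_pos - goal))
        reached = np.linalg.norm(ee_pos - goal) < goal_tol
        reward = w_progress * progress - w_deviation * deviation
        if reached:
            reward += 10.0  # terminal bonus for reaching the goal (assumed value)
        return reward, reached

    if __name__ == "__main__":
        # Dummy straight-line reference path standing in for a planner's output
        path = np.linspace([0.0, 0.0, 0.0], [0.3, 0.2, 0.4], num=50)
        r, done = shaped_reward(ee_pos=np.array([0.15, 0.10, 0.20]),
                                prev_ee_pos=np.array([0.10, 0.08, 0.15]),
                                goal=np.array([0.3, 0.2, 0.4]),
                                reference_path=path)
        print(r, done)

The curriculum learning setting mentioned above could then be realized, for example, by gradually tightening goal_tol or widening the distribution of sampled goals as training progresses; this, too, is one possible realization rather than the paper's specific scheme.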