TR2018-144

Simulation to Real Transfer Learning with Robustified Policies for Robot Tasks


    •  van Baar, J., Corcodel, R., Sullivan, A., Jha, D.K., Romeres, D., Nikovski, D.N., "Simulation to Real Transfer Learning with Robustified Policies for Robot Tasks", arXiv, September 2018.
      BibTeX:

      @article{vanBaar2018sep,
        author  = {van Baar, Jeroen and Corcodel, Radu and Sullivan, Alan and Jha, Devesh K. and Romeres, Diego and Nikovski, Daniel N.},
        title   = {Simulation to Real Transfer Learning with Robustified Policies for Robot Tasks},
        journal = {arXiv},
        year    = 2018,
        month   = sep,
        url     = {https://arxiv.org/abs/1809.04720}
      }
  • Research Areas: Artificial Intelligence, Computer Vision, Machine Learning

Abstract:

Learning tasks from simulated data using reinforcement learning has proven effective. A major advantage of training on simulation data is that it reduces the burden of acquiring real data. This is especially important when robots are involved: the time a robot is occupied with learning should be limited, so that it can instead be used for its intended (manufacturing) task. A policy learned on simulation data can then be transferred and refined on real data. In this paper we propose to learn a robustified policy during reinforcement learning using simulation data. A robustified policy is learned by exploiting the ability to change the simulation parameters (appearance and dynamics) between successive training episodes. We demonstrate that the amount of transfer learning required is reduced when a robustified policy is transferred from a simulated to a real task. We focus on tasks which involve real-time non-linear dynamics, since non-linear dynamics can only be approximately modeled in physics engines, which makes the need for robustness in learned policies more evident.
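
The key mechanism described in the abstract, re-sampling simulation parameters (appearance and dynamics) between training episodes, can be sketched as a training loop. This is a minimal illustrative sketch only: the function names, parameter names, and value ranges below are hypothetical placeholders, not the paper's actual simulator interface or RL algorithm.

```python
import random

def randomize_sim_params(rng):
    """Sample appearance and dynamics parameters for one episode.
    Ranges are illustrative placeholders, not values from the paper."""
    return {
        "friction": rng.uniform(0.5, 1.5),       # dynamics variation
        "mass_scale": rng.uniform(0.8, 1.2),     # dynamics variation
        "light_intensity": rng.uniform(0.3, 1.0),  # appearance variation
        "texture_id": rng.randrange(10),           # appearance variation
    }

def train_robustified_policy(num_episodes, seed=0):
    """Outline of robustified training: each episode runs in a simulator
    reconfigured with freshly sampled parameters, so the learned policy
    cannot overfit to a single appearance/dynamics setting."""
    rng = random.Random(seed)
    episode_params = []
    for _ in range(num_episodes):
        params = randomize_sim_params(rng)
        episode_params.append(params)
        # A real implementation would reset the simulator with these
        # parameters, roll out the current policy, and apply an RL update:
        # sim.reset(**params); trajectory = rollout(policy, sim); update(policy, trajectory)
    return episode_params

history = train_robustified_policy(num_episodes=5)
```

Because the policy sees a different simulated world every episode, it must learn features that survive parameter variation, which is what reduces the amount of fine-tuning needed after transfer to the real robot.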