Trajectory-based Learning for Ball-in-Maze Games

    •  Paul, S., van Baar, J., "Trajectory-based Learning for Ball-in-Maze Games", NIPS Workshop on Imitation Learning and its Challenges in Robotics, December 2018.
      TR2018-158
      @inproceedings{Paul2018dec,
        author = {Paul, Sujoy and van Baar, Jeroen},
        title = {Trajectory-based Learning for Ball-in-Maze Games},
        booktitle = {NIPS Workshop on Imitation Learning and its Challenges in Robotics},
        year = 2018,
        month = dec,
        url = {}
      }
  • Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Robotics


Deep reinforcement learning has shown tremendous success in solving several games and tasks in robotics. However, unlike humans, it generally requires a large number of training instances. Trajectories demonstrating how to solve the task at hand can help increase the sample efficiency of deep RL methods. In this paper, we present a simple approach to using such trajectories, applied to the challenging Ball-in-Maze Games recently introduced in the literature. We show that, despite not using human-generated trajectories and using only the simulator as a model to generate a limited number of trajectories, we obtain a speed-up of about 2-3x in the learning process. We also discuss some challenges we observed when using trajectory-based learning with very sparse reward functions.
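One common way to exploit a small set of simulator-generated trajectories, as the abstract describes, is to seed the agent's replay buffer with demonstration transitions so that every training batch mixes demonstrations with the agent's own experience. The sketch below is an illustrative, simplified example of that general idea (the class name, `demo_fraction` parameter, and dummy transitions are hypothetical, not the paper's exact method):

```python
import random

class DemoSeededReplayBuffer:
    """Replay buffer pre-filled with simulator-generated demonstration
    transitions. Demonstrations are kept permanently; agent experience
    is evicted FIFO once capacity is reached. (Illustrative sketch only.)"""

    def __init__(self, capacity=10000, demo_fraction=0.25):
        self.capacity = capacity
        self.demo_fraction = demo_fraction  # share of each batch drawn from demos
        self.demos = []   # demonstration transitions
        self.agent = []   # transitions collected by the learning agent

    def add_demo_trajectory(self, trajectory):
        # trajectory: list of (state, action, reward, next_state, done) tuples
        self.demos.extend(trajectory)

    def add_agent_transition(self, transition):
        self.agent.append(transition)
        if len(self.agent) > self.capacity:
            self.agent.pop(0)  # drop the oldest agent transition

    def sample(self, batch_size):
        # Draw a fixed fraction from demonstrations, the rest from agent data.
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demos))
        batch = random.sample(self.demos, n_demo)
        batch += random.sample(self.agent, min(batch_size - n_demo, len(self.agent)))
        return batch

# Hypothetical usage with dummy integer states:
buf = DemoSeededReplayBuffer()
buf.add_demo_trajectory([(0, 1, 0.0, 1, False), (1, 0, 1.0, 2, True)])
for t in range(100):
    buf.add_agent_transition((t, 0, 0.0, t + 1, False))
batch = buf.sample(8)
```

Keeping demonstrations resident in the buffer is especially helpful under the very sparse rewards mentioned above, since the demonstrations guarantee that some rewarded transitions appear in training batches long before the agent finds reward on its own.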