A Framework for Training Larger Networks for Deep Reinforcement Learning

    •  Ota, K., Jha, D.K., Kanezaki, A., "A Framework for Training Larger Networks for Deep Reinforcement Learning", Machine Learning Journal, May 2024.
      TR2024-058
      @article{Ota2024may3,
        author = {Ota, Kei and Jha, Devesh K. and Kanezaki, Asako},
        title = {A Framework for Training Larger Networks for Deep Reinforcement Learning},
        journal = {Machine Learning Journal},
        year = 2024,
        month = may,
        url = {}
      }
  • Research Area: Machine Learning


The success of deep learning in the computer vision and natural language processing communities can be attributed to training very deep neural networks with millions or billions of parameters on massive amounts of data. However, a similar trend has largely eluded deep reinforcement learning (RL), where larger networks do not lead to performance improvement. Previous work has shown that this is mostly due to instability when training deep RL agents with larger networks. In this paper, we attempt to understand and address the training of larger networks for deep RL. We first show that naively increasing network capacity does not improve performance. We then propose a novel method that consists of 1) wider networks with DenseNet connections, 2) decoupling representation learning from the training of RL, and 3) a distributed training method to mitigate overfitting. Using this three-fold technique, we show that we can train very large networks that result in significant performance gains. We present several ablation studies to demonstrate the efficacy of the proposed method and provide some intuitive understanding of the reasons for the performance gain. We show that our proposed method outperforms other baseline algorithms on several challenging locomotion tasks.
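To make the first component of the method concrete, here is a minimal, hypothetical sketch (not the authors' code) of the "wider networks with DenseNet connections" idea: each layer receives the concatenation of the raw input and all previous layer outputs, so the feature width grows with depth. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dense_mlp_forward(x, weights):
    """Forward pass of a DenseNet-style MLP: each layer consumes the
    concatenation of the input and every previous layer's output."""
    features = x
    for W, b in weights:
        h = relu(features @ W + b)
        # Dense connectivity: append this layer's output to the running features.
        features = np.concatenate([features, h], axis=-1)
    return features

rng = np.random.default_rng(0)
in_dim, hidden, n_layers = 8, 16, 3

# Each layer's weight matrix maps the (growing) concatenated feature vector.
weights = []
dim = in_dim
for _ in range(n_layers):
    weights.append((rng.standard_normal((dim, hidden)) * 0.1, np.zeros(hidden)))
    dim += hidden  # next layer's input widens by `hidden`

x = rng.standard_normal((4, in_dim))
out = dense_mlp_forward(x, weights)
# Final feature width = in_dim + n_layers * hidden = 8 + 3 * 16 = 56
```

Unlike a plain MLP, later layers here see both low-level inputs and all intermediate features, which is the connectivity pattern the paper adopts to scale network width.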