TR2018-068

BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning


Abstract:

Understanding global optimality in deep learning (DL) has been attracting increasing attention recently. Conventional DL solvers, however, have not been designed to seek such global optimality. In this paper we propose a novel approximation algorithm, BPGrad, for optimizing deep models globally via branch and pruning. BPGrad is based on the assumption of Lipschitz continuity in DL, which allows it to adaptively determine the step size for the current gradient given the history of previous updates; theoretically, no smaller step can achieve the global optimality. We prove that, by repeating this branch-and-pruning procedure, the global optimality can be located within finitely many iterations. Empirically, we also propose an efficient BPGrad-based solver for DL, which outperforms conventional solvers such as Adagrad, Adadelta, RMSProp, and Adam on object recognition, detection, and segmentation tasks.
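To make the step-size idea concrete, here is a minimal one-dimensional sketch, not the authors' implementation: assuming the objective f is Lipschitz continuous with known constant lip, and given a lower bound f_lb on the global minimum, no point closer than (f(x) - f_lb) / lip can attain f_lb, so a step of that length along the normalized negative gradient cannot skip past the global optimum. All names (f, grad, lip, f_lb, bpgrad_step_1d) are illustrative assumptions.

```python
# Illustrative sketch (not the paper's solver): Lipschitz-guided step
# sizes in one dimension. lip is an assumed Lipschitz constant of f and
# f_lb an assumed lower bound on the global minimum value.
import math

def bpgrad_step_1d(x, f, grad, lip, f_lb):
    """One Lipschitz-guided update.

    Since f is lip-Lipschitz, f cannot drop from f(x) to f_lb within a
    distance smaller than (f(x) - f_lb) / lip, so a step of exactly that
    length along the normalized negative gradient is "safe": it cannot
    jump over the global minimizer.
    """
    g = grad(x)
    eta = (f(x) - f_lb) / lip               # smallest distance at which f_lb is reachable
    return x - eta * math.copysign(1.0, g)  # move along the normalized gradient direction

# Toy objective: f(x) = (x - 3)^2, global minimum f* = 0 at x = 3.
f = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)

x = 0.0
lip, f_lb = 10.0, 0.0   # lip = 10 bounds |f'| = 2|x - 3| on the region visited
for _ in range(5000):
    x = bpgrad_step_1d(x, f, grad, lip, f_lb)
```

Starting from x = 0, the iterates increase monotonically toward the global minimizer x = 3 without overshooting, since each step length (3 - x)^2 / 10 never exceeds the remaining distance 3 - x on this region.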

 

  • Related Publication

  •  Zhang, Z., Wu, Y., Wang, G., "BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning", arXiv, November 2017.
    @article{Zhang2017nov,
      author  = {Zhang, Ziming and Wu, Yuanwei and Wang, Guanghui},
      title   = {BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning},
      journal = {arXiv},
      year    = {2017},
      month   = nov,
      url     = {https://arxiv.org/abs/1711.06959}
    }