TR2020-100

Finite-Time Convergence in Continuous-Time Optimization


    •  Romero, O., Benosman, M., "Finite-Time Convergence in Continuous-Time Optimization", International Conference on Machine Learning (ICML), July 2020.
      @inproceedings{Romero2020jul,
        author    = {Romero, Orlando and Benosman, Mouhacine},
        title     = {Finite-Time Convergence in Continuous-Time Optimization},
        booktitle = {International Conference on Machine Learning},
        year      = 2020,
        month     = jul,
        url       = {https://www.merl.com/publications/TR2020-100}
      }
  • Research Area: Optimization

Abstract:

In this paper, we investigate a Lyapunov-like differential inequality that allows us to establish finite-time stability of a continuous-time state-space dynamical system represented by a multivariate ordinary differential equation or differential inclusion. Equipped with this condition, we synthesize first- and second-order (in the optimization variable) dynamical systems that achieve finite-time convergence to the minima of a given sufficiently regular cost function. As a byproduct, we show that the q-rescaled gradient flow (q-RGF) proposed by Wibisono et al. (2016) is indeed finite-time convergent, provided the cost function is gradient dominated of order p ∈ (1, q). In this way, we effectively bridge the gap between the q-RGF and the finite-time convergent normalized gradient flow (NGF) (q = ∞) proposed by Cortés (2006) in his seminal paper on multi-agent systems. We discuss strategies for discretizing our proposed flows and conclude with numerical experiments that illustrate our results.
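For concreteness, the q-RGF referenced in the abstract takes the form x' = -∇f(x)/‖∇f(x)‖^((q-2)/(q-1)) (Wibisono et al., 2016), and letting q → ∞ recovers the NGF x' = -∇f(x)/‖∇f(x)‖. The Python sketch below is a minimal forward-Euler discretization of this flow, not the discretization scheme analyzed in the paper; the step size, iteration budget, stopping tolerance, and quadratic test function are illustrative assumptions.

    import numpy as np

    def q_rgf_euler(grad, x0, q=3.0, h=1e-3, steps=10000, tol=1e-5):
        """Forward-Euler discretization of the q-rescaled gradient flow
        x' = -grad f(x) / ||grad f(x)||^((q-2)/(q-1)); letting q -> infinity
        recovers the normalized gradient flow (NGF) of Cortes (2006).
        Step size h, iteration budget, and tolerance are illustrative choices."""
        x = np.asarray(x0, dtype=float)
        expo = (q - 2.0) / (q - 1.0)
        for _ in range(steps):
            g = grad(x)
            n = np.linalg.norm(g)
            if n < tol:  # (near-)stationary point reached: stop
                break
            x = x - h * g / n**expo
        return x

    # Hypothetical test: f(x) = ||x||^2 (gradient 2x) is gradient dominated,
    # so the continuous-time flow reaches the minimizer (the origin) in finite time.
    print(q_rgf_euler(lambda x: 2.0 * x, x0=np.array([1.0, -2.0])))

A smaller step size h tracks the continuous-time flow more closely at the cost of more iterations, and the gradient-norm check guards against division by zero near stationary points.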

 
