TR2018-187

Data-Driven Estimation of Reachable and Invariant Sets for Unmodeled Systems via Active Learning


Abstract:

Ensuring control performance under state and input constraints is facilitated by an understanding of reachable and invariant sets. While exploiting dynamical models has yielded many set-based algorithms for constructing these sets, such methods typically do not scale well or rely heavily on model accuracy and structure. In contrast, it is relatively simple to generate state trajectories in a data-driven manner by numerically simulating complex systems from initial conditions sampled within an admissible state space, even if the underlying dynamics are completely unknown. These samples can then be leveraged for reachable/invariant set estimation via machine learning, although the learning performance is strongly linked to the sampling pattern. In this paper, active learning is employed to intelligently select batches of samples that are most informative and least redundant with respect to previously labeled samples, via submodular maximization. Selective sampling reduces the number of numerical simulations required for constructing the invariant set estimator, thereby enhancing scalability to higher-dimensional state spaces. The potential of the proposed framework is illustrated via a numerical example.
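The paper itself does not provide code; the following is a minimal, hypothetical sketch of the kind of greedy submodular batch selection the abstract describes: a facility-location-style coverage term rewards informative candidates, while a redundancy penalty discourages points close to previously labeled (already simulated) samples. All function names and parameters (rbf_similarity, greedy_batch_selection, gamma, redundancy_weight) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of greedy batch selection for active learning (not the paper's
# exact algorithm). A submodular facility-location objective rewards candidates
# that cover the unlabeled pool; a redundancy term penalizes candidates that are
# close to samples already labeled by numerical simulation.
import numpy as np

def rbf_similarity(X, Y, gamma=1.0):
    """Pairwise RBF similarities between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_batch_selection(candidates, labeled, batch_size, gamma=1.0, redundancy_weight=0.5):
    """Greedily pick a batch of initial conditions to simulate/label next."""
    S_cc = rbf_similarity(candidates, candidates, gamma)              # pool coverage
    if len(labeled):
        redundancy = rbf_similarity(candidates, labeled, gamma).max(axis=1)
    else:
        redundancy = np.zeros(len(candidates))

    selected = []
    best_cover = np.zeros(len(candidates))  # best similarity of each pool point to the selected set
    for _ in range(batch_size):
        # Marginal coverage gain of each candidate, minus a penalty for redundancy
        # with previously labeled samples; greedy maximization of this submodular
        # surrogate gives the usual (1 - 1/e) approximation guarantee.
        gains = np.maximum(S_cc - best_cover[:, None], 0.0).sum(axis=0) \
                - redundancy_weight * redundancy
        gains[selected] = -np.inf
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, S_cc[:, j])
    return selected

# Usage: sample an admissible state space, select a batch, then run the (unmodeled)
# simulator from the selected initial conditions to obtain labels for the estimator.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = rng.uniform(-1.0, 1.0, size=(500, 2))    # candidate initial conditions
    labeled = rng.uniform(-1.0, 1.0, size=(20, 2))  # already-simulated samples
    batch = greedy_batch_selection(pool, labeled, batch_size=10)
    print("next initial conditions to simulate:", batch)
```

The selected indices would then be labeled by simulation (e.g., whether the trajectory remains admissible) and fed to a classifier that estimates the reachable/invariant set; the details of that estimator are not shown here.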


  • Related News & Events

    •  NEWS    Ankush Chakrabarty gave an invited talk on machine learning for constrained control at AI for Engineering in Toronto
      Date: August 19, 2019 - August 23, 2019
      Where: AI for Engineering Summer School 2019
      MERL Contact: Ankush Chakrabarty
      Research Areas: Artificial Intelligence, Control, Dynamical Systems, Machine Learning
      Brief
      • Ankush Chakrabarty, a Visiting Research Scientist in MERL's Control and Dynamical Systems group, gave an invited talk at the AI for Engineering Summer School 2019 hosted by Autodesk. The talk briefly described MERL's research areas, and focused on Dr. Chakrabarty's work at MERL (with collaborators from the CD and DA group) on the use of supervised learning for verification of control systems with simulators/neural nets in the loop, and on constraint-enforcing reinforcement learning. Other speakers at the event included researchers from various academic and industrial research facilities including U Toronto, UW-Seattle, Carnegie Mellon U, the Vector Institute, and the Montreal Institute for Learning Algorithms.