Optimal Transport over Deterministic Discrete-time Nonlinear Systems using Stochastic Feedback Laws

This paper considers a relaxed version of the transport problem for general nonlinear control systems, where the objective is to design time-varying feedback laws that transport a given initial probability measure to a target probability measure under the action of the closed-loop system. To make the problem analytically tractable, we consider control laws that are stochastic, i.e., maps from the state space of the control system to the space of probability measures on the set of admissible control inputs. Under certain controllability assumptions on the control system on the state space, we show that the transport problem, considered as a controllability problem for the lifted control system on the space of probability measures, is well-posed for a large class of initial and target measures. We use this to prove the well-posedness of a fixed-endpoint optimal control problem defined on the space of probability measures, where, in addition to the terminal constraints, the goal is to optimize an objective functional along the trajectory of the control system. This optimization problem can be posed as an infinite-dimensional linear programming problem. This formulation facilitates numerical solutions of the transport problem for low-dimensional control systems, as we demonstrate in two numerical examples.
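The linear-programming viewpoint mentioned above can be illustrated in its simplest finite-dimensional form. The sketch below is not the paper's lifted formulation (it omits the dynamics and feedback laws entirely); it only shows the discrete analogue, the Monge–Kantorovich LP over couplings between two finitely supported measures, solved with `scipy.optimize.linprog`. All point locations and weights here are hypothetical illustration data.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(xs, ys, mu, nu):
    """Solve the discrete optimal transport LP:
    minimize sum_ij c_ij * pi_ij over couplings pi >= 0
    subject to the marginal constraints
      sum_j pi_ij = mu_i  and  sum_i pi_ij = nu_j,
    with cost c_ij = |x_i - y_j|."""
    n, m = len(xs), len(ys)
    c = np.abs(xs[:, None] - ys[None, :]).ravel()  # flattened cost matrix
    # Row-sum constraints: each source mass mu_i is fully transported.
    A_rows = np.kron(np.eye(n), np.ones(m))
    # Column-sum constraints: each target mass nu_j is fully received.
    A_cols = np.kron(np.ones(n), np.eye(m))
    A_eq = np.vstack([A_rows, A_cols])
    b_eq = np.concatenate([mu, nu])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun, res.x.reshape(n, m)

# Hypothetical example: transport uniform mass on {0, 1} to uniform mass on {2, 3}.
xs = np.array([0.0, 1.0]); ys = np.array([2.0, 3.0])
mu = np.array([0.5, 0.5]); nu = np.array([0.5, 0.5])
cost, plan = discrete_ot(xs, ys, mu, nu)
print(cost)  # optimal transport cost: 2.0 (each point shifts by 2)
```

In the paper's setting the decision variable is, roughly, the joint distribution of state-control trajectories rather than a static coupling, but the structure is the same: a linear objective over a convex set of measures with linear (marginal and dynamics) constraints, which is what makes numerical solution feasible in low dimensions.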