Inverse Learning for Human-Adaptive Motion Planning

This paper presents a method for inverse learning of a control objective defined in terms of requirements and their probability distribution. The probability distribution characterizes tolerated deviations from the deterministic requirements; it is modeled as Gaussian and learned from data by likelihood maximization. Further, this paper introduces both parametrized requirements for motion planning in autonomous driving applications and methods for their estimation from demonstrations. Human-in-the-loop simulations with four drivers suggest that human motion planning can be modeled with the considered probabilistic control objective and that the inverse learning methods in this paper enable more natural and personalized automated driving.
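To illustrate the kind of estimation the abstract refers to, the sketch below fits a one-dimensional Gaussian to observed deviations from a deterministic requirement by likelihood maximization. This is a minimal, generic example, not the paper's implementation: the data, the scalar (rather than multivariate) deviation model, and all function names are assumptions for illustration. For a Gaussian, the maximum-likelihood estimate has a closed form (sample mean and mean squared deviation), which the code uses directly.

```python
import math

def gaussian_mle(deviations):
    """Maximum-likelihood Gaussian fit to scalar deviation data.

    For a Gaussian, the MLE is available in closed form:
    mu = sample mean, var = mean squared deviation from mu.
    """
    n = len(deviations)
    mu = sum(deviations) / n
    var = sum((d - mu) ** 2 for d in deviations) / n
    return mu, var

def log_likelihood(deviations, mu, var):
    """Gaussian log-likelihood of the deviation data under (mu, var)."""
    n = len(deviations)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - sum((d - mu) ** 2 for d in deviations) / (2 * var))

# Hypothetical deviations of a demonstrated trajectory from a
# deterministic requirement (e.g., speed error in m/s).
data = [-0.3, 0.1, 0.2, -0.1, 0.0, 0.4, -0.2]
mu, var = gaussian_mle(data)
```

In the multivariate case sketched by the paper's setting, the same closed form applies with the sample mean vector and sample covariance matrix.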