TR2015-001

Estimating Drivable Collision-Free Space from Monocular Video


    •  Yao, J., Ramalingam, S., Taguchi, Y., Miki, Y., Urtasun, R., "Estimating Drivable Collision-Free Space from Monocular Video", IEEE Winter Conference on Applications of Computer Vision (WACV), DOI: 10.1109/WACV.2015.62, January 2015, pp. 420-427.
      @inproceedings{Yao2015jan,
        author    = {Yao, J. and Ramalingam, S. and Taguchi, Y. and Miki, Y. and Urtasun, R.},
        title     = {Estimating Drivable Collision-Free Space from Monocular Video},
        booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
        year      = 2015,
        pages     = {420--427},
        month     = jan,
        publisher = {IEEE},
        doi       = {10.1109/WACV.2015.62},
        url       = {https://www.merl.com/publications/TR2015-001}
      }
Research Area: Computer Vision

Abstract:

In this paper we propose a novel algorithm for estimating the drivable collision-free space for autonomous navigation of on-road and on-water vehicles. In contrast to previous approaches that use stereo cameras or LIDAR, we show a method to solve this problem using a single camera. Inspired by the success of many vision algorithms that employ dynamic programming for efficient inference, we reduce the free space estimation task to an inference problem on a 1D graph, where each node represents a column in the image and its label denotes a position that separates the free space from the obstacles. Our algorithm exploits several image and geometric features based on edges, color, and homography to define potential functions on the 1D graph, whose parameters are learned through structured SVM. We show promising results on the challenging KITTI dataset as well as video collected from boats.
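To make the inference step concrete, the sketch below shows the kind of Viterbi-style dynamic program the abstract describes: one node per image column, whose label is the row separating free space from obstacles, with unary costs per (column, row) and a pairwise term between adjacent columns. The function name `freespace_dp`, the cost arrays, and the absolute-row-difference smoothness term are illustrative assumptions for this sketch, not the paper's exact learned potentials.

```python
import numpy as np

def freespace_dp(unary, smooth_weight=1.0):
    """Viterbi-style DP on a 1D chain over image columns.

    unary: (W, H) array; unary[c, r] is an assumed cost of placing the
    free-space boundary at row r in column c (the paper instead learns
    edge/color/homography potentials via structured SVM).
    The pairwise term penalizes |r_c - r_{c-1}| to keep the boundary smooth.
    Returns the per-column boundary rows minimizing the total cost.
    """
    W, H = unary.shape
    rows = np.arange(H)
    # Pairwise cost matrix: absolute row difference between neighboring columns.
    pairwise = smooth_weight * np.abs(rows[:, None] - rows[None, :])

    cost = unary[0].copy()               # best cost ending at each row of column 0
    back = np.zeros((W, H), dtype=int)   # backpointers for the optimal path
    for c in range(1, W):
        # total[prev, cur] = best cost through row `prev` plus transition cost.
        total = cost[:, None] + pairwise
        back[c] = np.argmin(total, axis=0)
        cost = total[back[c], rows] + unary[c]

    # Backtrack the globally optimal boundary, one row per column.
    labels = np.zeros(W, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for c in range(W - 1, 0, -1):
        labels[c - 1] = back[c, labels[c]]
    return labels
```

Because the graph is a chain, this exact inference runs in O(W·H²) time (O(W·H) with a distance-transform trick for the absolute-difference pairwise term), which is what makes the per-frame estimation efficient.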