Monocular Visual Odometry and Dense 3D Reconstruction for On-Road Vehicles

    •  Zhu, M.; Ramalingam, S.; Taguchi, Y.; Garaas, T., "Monocular Visual Odometry and Dense 3D Reconstruction for On-Road Vehicles", European Conference on Computer Vision (ECCV), October 2012, vol. 7584, pp. 596-606.
      @inproceedings{Zhu2012oct,
        author = {Zhu, M. and Ramalingam, S. and Taguchi, Y. and Garaas, T.},
        title = {Monocular Visual Odometry and Dense 3D Reconstruction for On-Road Vehicles},
        booktitle = {European Conference on Computer Vision (ECCV)},
        year = 2012,
        volume = 7584,
        pages = {596--606},
        month = oct,
        url = {}
      }
  • Research Areas: Computer Vision, Robotics

Each day, more and more on-road vehicles are equipped with cameras. This paper presents a novel method for estimating the relative motion of a vehicle from a sequence of images obtained using a single vehicle-mounted camera. Recently, several researchers in robotics and computer vision have studied the performance of motion estimation algorithms under nonholonomic and planarity constraints. The successful algorithms typically use the smallest number of feature correspondences required by the motion model. It is well established that such minimal algorithms are efficient and robust to outliers when used in a hypothesize-and-test framework such as random sample consensus (RANSAC). In this paper, we show that planar 2-point motion estimation can be solved analytically using a single quadratic equation, without the need for iterative techniques such as the Newton-Raphson method used in existing work. Non-iterative methods are more efficient and do not suffer from local-minima problems. Although 2-point motion estimation generates visually accurate on-road vehicle trajectories, the motion is not precise enough for dense 3D reconstruction because roads are not perfectly planar. We therefore use a 2-point relative motion algorithm for the initial images, followed by 3-point 2D-to-3D camera pose estimation for the subsequent images. Using this hybrid approach, we generate motion estimates accurate enough for a plane-sweeping algorithm that produces dense depth maps for obstacle detection applications.
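To illustrate the kind of closed-form 2-point solver the abstract describes, the sketch below parametrizes planar motion as a yaw rotation R = Ry(theta) and an in-plane translation direction t ~ (sin phi, 0, cos phi), eliminates the heading terms linearly, and reduces the epipolar constraints from two correspondences to a single quadratic in cos(2*phi). This parametrization, the function name, and the degeneracy guards are illustrative assumptions for this sketch, not the paper's actual derivation or implementation.

```python
import numpy as np

def planar_two_point(x1, x2):
    """Candidate planar motions (theta, phi) from two calibrated
    correspondences x1[i] <-> x2[i] in normalized image coordinates.

    Assumed model: camera moves in the ground plane, R = Ry(theta),
    t ~ (sin phi, 0, cos phi) up to scale. With E = [t]x R, the epipolar
    constraint x2' E x1 = 0 reduces per correspondence to
        v1*sin(phi) - u2*v1*cos(phi) + v2*u1*cos(a) + v2*sin(a) = 0,
    where a = theta - phi. Eliminating (cos a, sin a) linearly and
    imposing cos^2 a + sin^2 a = 1 leaves one quadratic in cos(2*phi).
    """
    (u11, v11), (u12, v12) = x1
    (u21, v21), (u22, v22) = x2
    A = np.array([v11, v12])
    B = np.array([-u21 * v11, -u22 * v12])
    M = np.array([[v21 * u11, v21],
                  [v22 * u12, v22]])        # coefficients of (cos a, sin a)
    if abs(np.linalg.det(M)) < 1e-12:       # degenerate configuration
        return []
    Minv = np.linalg.inv(M)
    p = -Minv @ A                           # cos a = p[0]*sin(phi) + q[0]*cos(phi)
    q = -Minv @ B                           # sin a = p[1]*sin(phi) + q[1]*cos(phi)
    # cos^2 a + sin^2 a = 1  ->  P*cos(2phi) + Q*sin(2phi) = R0
    alpha, gamma = p @ p, q @ q
    beta = 2.0 * (p @ q)
    P, Q, R0 = (gamma - alpha) / 2.0, beta / 2.0, 1.0 - (alpha + gamma) / 2.0
    # With s^2 + c^2 = 1 this becomes a single quadratic in c = cos(2phi):
    #   (P^2 + Q^2) c^2 - 2 P R0 c + (R0^2 - Q^2) = 0
    disc = (2 * P * R0) ** 2 - 4 * (P * P + Q * Q) * (R0 * R0 - Q * Q)
    if disc < 0 or abs(Q) < 1e-12:
        return []
    sols = []
    for c in np.roots([P * P + Q * Q, -2 * P * R0, R0 * R0 - Q * Q]).real:
        s = (R0 - P * c) / Q
        # cos(2phi) = c fixes phi only modulo pi; keep both candidates
        for phi in (0.5 * np.arctan2(s, c), 0.5 * np.arctan2(s, c) + np.pi):
            ca = p[0] * np.sin(phi) + q[0] * np.cos(phi)
            sa = p[1] * np.sin(phi) + q[1] * np.cos(phi)
            sols.append((np.arctan2(sa, ca) + phi, phi))  # theta = a + phi
    return sols
```

In a hypothesize-and-test framework, RANSAC would repeatedly draw 2-point samples, call such a solver to generate motion hypotheses, and score each hypothesis by its epipolar error over all correspondences, keeping the one with the most inliers.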