TR2016-126

Moving object segmentation using depth and optical flow in car driving sequences


    •  Kao, J.-Y., Tian, D., Mansour, H., Vetro, A., Ortega, A., "Moving object segmentation using depth and optical flow in car driving sequences", IEEE International Conference on Image Processing (ICIP), DOI: 10.1109/ICIP.2016.7532309, August 2016, pp. 11-15.
      BibTeX:

      @inproceedings{Kao2016aug,
        author = {Kao, Jiun-Yu and Tian, Dong and Mansour, Hassan and Vetro, Anthony and Ortega, Antonio},
        title = {Moving object segmentation using depth and optical flow in car driving sequences},
        booktitle = {IEEE International Conference on Image Processing (ICIP)},
        year = 2016,
        pages = {11--15},
        month = aug,
        doi = {10.1109/ICIP.2016.7532309},
        issn = {2381-8549},
        isbn = {978-1-4673-9961-6},
        url = {https://www.merl.com/publications/TR2016-126}
      }
  • Research Area: Digital Video

Abstract:

Segmentation of moving objects in a scene is difficult for non-stationary cameras, and it is especially challenging in the presence of fast and unstable egomotion, e.g., as encountered with car-mounted cameras or wearable devices. Based on an analysis of the motion vanishing points of the scene and estimated depth, a geometric model is derived that relates the extracted 2D motion to a 3D motion field relative to the camera. Observing that the 3D motion field is piecewise smooth, a constrained optimization problem that promotes group sparsity is formulated to recover the 3D motion field from the 2D motion. The recovered 3D motion field is then clustered to segment the moving objects. Experiments on the KITTI Vision Benchmark Suite demonstrate that the proposed framework provides a dense segmentation of moving objects that is robust to the challenging conditions inherent in car driving sequences.
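
As a rough illustration of the group-sparse recovery step described in the abstract, a generic formulation of this type can be sketched as follows; the symbols A, u, V, G, and lambda are illustrative assumptions and are not taken from the paper:

\[
\hat{V} \;=\; \arg\min_{V} \; \tfrac{1}{2}\,\lVert A V - u \rVert_2^2 \;+\; \lambda \sum_{g \in \mathcal{G}} \lVert V_g \rVert_2 ,
\]

where u stacks the observed 2D optical flow vectors, A is a linear projection operator (built from the estimated depth and motion-vanishing-point geometry) that maps the 3D motion field V onto the image plane, \mathcal{G} partitions the pixels into groups (e.g., spatial neighborhoods), and the mixed \ell_{2,1} penalty encourages a piecewise-smooth, group-sparse 3D motion field. Clustering the recovered \hat{V} (e.g., on the per-pixel 3D motion vectors) would then yield the moving-object segmentation.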