Moving object segmentation using depth and optical flow in car driving sequences

Segmentation of moving objects in a scene is difficult for non-stationary cameras, and especially challenging in the presence of fast and unstable egomotion, e.g., as encountered with car-mounted cameras or wearable devices. Based on an analysis of the motion vanishing points of the scene and estimated depth, a geometric model is derived that relates the extracted 2D motion to a 3D motion field relative to the camera. Observing that the 3D motion field is piecewise smooth, a constrained optimization problem that enforces group sparsity is formulated to recover the 3D motion field from the 2D motion. The recovered 3D motion field is then clustered to provide the segmentation of moving objects. Experiments are performed on the KITTI Vision Benchmark Suite and demonstrate that the proposed framework provides a dense segmentation of moving objects that is robust to the challenging conditions inherent in car-driving sequences.
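The core of the pipeline can be illustrated with a simplified sketch: back-project each pixel to 3D using estimated depth and a pinhole camera model, form a per-pixel 3D displacement field from the 2D optical flow, and cluster the resulting 3D motion vectors to separate moving objects from the static background. This is only an illustrative approximation under stated assumptions (known intrinsics `f`, `cx`, `cy`, dense depth for both frames, nearest-neighbour depth sampling, a toy two-cluster k-means in place of the paper's group-sparse optimization and clustering), not the actual method:

```python
import numpy as np

def backproject(u, v, z, f, cx, cy):
    # Pinhole model: pixel (u, v) with depth z maps to 3D point (X, Y, Z).
    return np.stack([(u - cx) * z / f, (v - cy) * z / f, z], axis=-1)

def motion_field_3d(flow, depth0, depth1, f, cx, cy):
    # Per-pixel 3D displacement between two frames, given 2D optical flow
    # (flow[..., 0] = horizontal, flow[..., 1] = vertical) and depth maps.
    h, w = depth0.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    p0 = backproject(u, v, depth0, f, cx, cy)
    u1, v1 = u + flow[..., 0], v + flow[..., 1]
    # Sample the second depth map at the flowed positions (nearest neighbour).
    ui = np.clip(np.round(u1).astype(int), 0, w - 1)
    vi = np.clip(np.round(v1).astype(int), 0, h - 1)
    p1 = backproject(u1, v1, depth1[vi, ui], f, cx, cy)
    return p1 - p0

def cluster_motion(x, iters=20):
    # Toy 2-means on 3D motion vectors: deterministic init at the
    # smallest- and largest-magnitude vectors (static vs. moving).
    c = np.stack([x[np.argmin(np.linalg.norm(x, axis=1))],
                  x[np.argmax(np.linalg.norm(x, axis=1))]])
    for _ in range(iters):
        lbl = np.linalg.norm(x[:, None] - c[None], axis=-1).argmin(1)
        c = np.stack([x[lbl == j].mean(0) if np.any(lbl == j) else c[j]
                      for j in range(2)])
    return lbl
```

On a synthetic scene with a static background and one translating patch, the clustering step labels the patch pixels differently from the background; in the full framework this role is played by the group-sparse recovery of the 3D motion field followed by clustering.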