TR2015-073

Layered Interpretation of Street View Images


Abstract:

We propose a layered street view model that encodes both depth and semantic information in street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. Our 4-layer street view model is a more compact representation than the recently proposed stix-mantics model. The layers encode semantic classes such as ground, pedestrians, vehicles, buildings, and sky, in addition to depths. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract appearance features for the semantic classes, and a simple, efficient inference algorithm to jointly estimate the semantic classes and layered depth values. Our method outperforms competing approaches on the Daimler urban scene segmentation dataset. The algorithm is massively parallelizable, allowing a GPU implementation that runs at about 9 fps.
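The layered model orders the layers from top to bottom in each image column (roughly sky, static background, objects, ground), so per-column inference reduces to finding the row boundaries between layers. As an illustration only, the sketch below shows one simple way such an ordered labeling could be computed with dynamic programming, given per-row, per-layer costs; the cost function, layer names, and `label_column` helper are assumptions for this example and are not taken from the paper.

```python
import numpy as np

# Hypothetical layer ordering, top-to-bottom in an image column.
LAYERS = ["sky", "background", "object", "ground"]

def label_column(cost):
    """Assign one layer label per row of a single image column.

    cost: (H, L) array; cost[i, l] is the penalty for putting row i
    in layer l. Labels are constrained to be non-decreasing down the
    column, enforcing the top-to-bottom layer order.
    Returns an (H,) integer array of layer indices.
    """
    H, L = cost.shape
    dp = np.empty((H, L))          # dp[i, l]: best cost ending with row i in layer l
    arg = np.zeros((H, L), int)    # arg[i, l]: layer of row i-1 on that best path
    dp[0] = cost[0]
    for i in range(1, H):
        # Running minimum over previous-row layers <= l enforces ordering.
        best, best_l = dp[i - 1, 0], 0
        for l in range(L):
            if dp[i - 1, l] < best:
                best, best_l = dp[i - 1, l], l
            dp[i, l] = cost[i, l] + best
            arg[i, l] = best_l
    # Backtrack from the cheapest final-row layer.
    labels = np.empty(H, int)
    labels[-1] = int(np.argmin(dp[-1]))
    for i in range(H - 1, 0, -1):
        labels[i - 1] = arg[i, labels[i]]
    return labels

# Toy column: 8 rows whose cheapest labels spell out the layer order.
true = [0, 0, 1, 1, 2, 2, 3, 3]
c = np.ones((8, 4))
for i, t in enumerate(true):
    c[i, t] = 0.0
print(label_column(c).tolist())  # [0, 0, 1, 1, 2, 2, 3, 3]
```

Because each column is processed independently, this per-column dynamic program is what makes the approach massively parallelizable; in the paper the costs would come from the stereo depth and deep-network appearance features rather than the toy values used here.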

 

  • Related News & Events

    •  NEWS    Scene interpretation results from SA group members lead the Daimler benchmark competition
      Date: July 13, 2015 - July 17, 2015
      Research Area: Machine Learning
      Brief
      • SA group members (M. Liu, S. Lin (intern), S. Ramalingam, O. Tuzel) presented a paper titled 'Layered Interpretation of Street View Images' at the Robotics: Science and Systems Conference in Rome, July 13-17. The results they reported now lead the benchmark competition sponsored by Daimler. [Note that ref. 2 at that URL comes from a collaboration with Daimler and uses an FPGA for high speed, whereas the MERL result is obtained with a desktop computer and GPU.]