TR2019-024

Space-Time Slicing: Visualizing Object Detector Performance in Driving Video Sequences


    •  Lee, T.-Y., Wittenburg, K.B., "Space-Time Slicing: Visualizing Object Detector Performance in Driving Video Sequences", IEEE Pacific Visualization Symposium (PacificVis), DOI: 10.1109/PacificVis.2019.00045, June 2019.
      @inproceedings{Lee2019jun,
        author = {Lee, Teng-Yok and Wittenburg, Kent B.},
        title = {Space-Time Slicing: Visualizing Object Detector Performance in Driving Video Sequences},
        booktitle = {IEEE Pacific Visualization Symposium (PacificVis)},
        year = 2019,
        month = jun,
        doi = {10.1109/PacificVis.2019.00045},
        url = {https://www.merl.com/publications/TR2019-024}
      }
Research Areas: Computer Vision, Data Analytics

Abstract:

Development of object detectors for video in applications such as autonomous driving requires an iterative training process with data that initially requires human labeling. Later stages of development require tuning a large set of parameters for which labeled data may not be available. At each training iteration and parameter selection decision, insight is needed into object detector performance. This work presents a visualization method called Space-Time Slicing that assists a human developer in building object detectors for driving applications without requiring labeled data. Space-Time Slicing reveals patterns in the detection data that can suggest the presence of false positives and false negatives. By comparing performance across different conditions, it can be used to set parameters such as image pixel size in data preprocessing and confidence thresholds for object classifiers.
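The core idea described above — stacking per-frame detection data along the time axis so that temporal patterns become visible — can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it projects each frame's detections (here assumed to be `(x_min, x_max, confidence)` tuples along the horizontal image axis) into one row of a 2D space-time image. Stable horizontal streaks then suggest consistently tracked objects, isolated specks hint at false positives, and gaps in an otherwise continuous streak hint at false negatives.

```python
import numpy as np

def space_time_slice(detections_per_frame, frame_width, conf_threshold=0.5):
    """Build a 2D space-time slice: one row per video frame.

    detections_per_frame: list with one entry per frame, each a list of
        (x_min, x_max, confidence) tuples. This input layout is an
        illustrative assumption, not the paper's data format.
    frame_width: horizontal resolution of the video frames in pixels.
    conf_threshold: classifier confidence cutoff; detections below it
        are dropped, mimicking the threshold tuning the paper discusses.
    """
    n_frames = len(detections_per_frame)
    slice_img = np.zeros((n_frames, frame_width), dtype=np.float32)
    for t, dets in enumerate(detections_per_frame):
        for x_min, x_max, conf in dets:
            if conf >= conf_threshold:
                # Mark the detection's horizontal extent with its confidence,
                # keeping the maximum where detections overlap.
                slice_img[t, x_min:x_max] = np.maximum(
                    slice_img[t, x_min:x_max], conf)
    return slice_img
```

Rendering `slice_img` as a grayscale image (time on the vertical axis, image x-position on the horizontal axis) and re-running with different `conf_threshold` values gives a rough sense of how such a view could support parameter comparison without ground-truth labels.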