Space-Time Slicing: Visualizing Object Detector Performance in Driving Video Sequences

Developing object detectors for video in applications such as autonomous driving involves an iterative training process on data that initially requires human labeling. Later stages of development require tuning a large set of parameters for which labeled data may not be available. For each training iteration and parameter selection decision, insight is needed into object detector performance. This work presents a visualization method called Space-Time Slicing to assist a human developer in building object detectors for driving applications without requiring labeled data. Space-Time Slicing reveals patterns in the detection data that can suggest the presence of false positives and false negatives. It may be used to set such parameters as image pixel size in data preprocessing and confidence thresholds for object classifiers by comparing performance across different conditions.
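As a rough illustration of the slicing idea, the sketch below stacks one pixel row (a horizontal scanline) from each video frame into a single time-by-width image, so that structures persisting over time become visible as streaks. This is a minimal sketch under assumed inputs (NumPy arrays standing in for decoded frames, and a `slice_row` parameter chosen by the developer); it is not the paper's exact construction.

```python
import numpy as np

def space_time_slice(frames, slice_row):
    """Stack one scanline per frame into a (time x width x channels) image.

    frames: sequence of H x W x C uint8 arrays (one per video frame).
    slice_row: index of the pixel row to extract from every frame.
    """
    return np.stack([frame[slice_row] for frame in frames], axis=0)

# Synthetic stand-in "video": 10 frames of 32 x 48 RGB noise.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(32, 48, 3), dtype=np.uint8)
          for _ in range(10)]

slice_img = space_time_slice(frames, slice_row=16)
print(slice_img.shape)  # one row per frame, full frame width, 3 channels
```

In practice one would overlay detector outputs (e.g., bounding boxes that intersect the chosen scanline) on the resulting image, so that missing or spurious detections show up as gaps or isolated marks along the time axis.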